The last mile or last kilometer is a phrase widely used in the telecommunications, cable television and internet industries to refer to the final leg of the telecommunications networks that deliver telecommunication services to retail end-users (customers). More specifically, the last mile describes the portion of the telecommunications network chain that physically reaches the end-user's premises. Examples are the copper wire subscriber lines connecting landline telephones to the local telephone exchange; coaxial cable service drops carrying cable television signals from utility poles to subscribers' homes; and cell towers linking local cell phones to the cellular network. The word "mile" is used metaphorically; the length of the last mile link may be more or less than a mile. Because the last mile of a network to the user is, conversely, the first mile from the user's premises to the outside world when the user is sending data, the term first mile is also used.
The last mile is typically the speed bottleneck in communication networks; its bandwidth effectively limits the amount of data that can be delivered to the customer. This is because retail telecommunication networks have the topology of "trees", with relatively few high capacity "trunk" communication channels branching out to feed many final mile "twigs". The final mile links, being the most numerous and thus the most expensive part of the system, as well as having to interface with a wide variety of user equipment, are the most difficult to upgrade to new technology. For example, telephone trunklines that carry phone calls between switching centers are made of modern optical fiber, but the last mile is typically twisted pair wires, a technology which has essentially remained unchanged for over a century since the original laying of copper phone cables.
To resolve, or at least mitigate, the problems involved with attempting to provide enhanced services over the last mile, some firms have been mixing networks for decades. One example is fixed wireless access, where a wireless network is used instead of wires to connect a stationary terminal to the wireline network. Various solutions are being developed which are seen as an alternative to the last mile of standard incumbent local exchange carriers. These include WiMAX and broadband over power lines.
In recent years, usage of the term "last mile" has expanded outside the communications industries, to include other distribution networks that deliver goods to customers, such as the pipes that deliver water and natural gas to customer premises, and the final legs of mail and package delivery services.
The increasing worldwide demand for rapid, low-latency and high-volume communication of information to homes and businesses has made economical information distribution and delivery increasingly important. As demand has escalated, particularly fueled by the widespread adoption of the Internet, the need for economical high-speed access by end-users located at millions of locations has ballooned as well.
As requirements have changed, the existing systems and networks that were initially pressed into service for this purpose have proven to be inadequate. To date, although a number of approaches have been tried, no single clear solution to the 'last mile problem' has emerged.
As expressed by Shannon's equation for channel information capacity, the omnipresence of noise in information systems sets a minimum signal-to-noise ratio (abbreviated S/N) requirement in a channel, even when adequate spectral bandwidth is available. Since the time integral of the information rate is the total quantity of information transferred, this requirement implies a corresponding minimum energy per bit. The problem of sending any given amount of information across a channel can therefore be viewed in terms of sending sufficient information-carrying energy (ICE). For this reason the concept of an ICE 'pipe' or 'conduit' is relevant and useful for examining existing systems.
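As a rough numerical illustration of this relationship (a sketch, not part of the original text; the 3 kHz bandwidth and 30 dB S/N figures are assumed, voiceband-like values), the Shannon-Hartley form of the equation can be evaluated directly, along with the wideband Shannon limit on energy per bit:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley channel capacity in bit/s: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Assumed voiceband-like channel: ~3 kHz bandwidth, 30 dB S/N.
snr = 10 ** (30 / 10)  # 30 dB -> 1000 (linear power ratio)
c = shannon_capacity(3000, snr)
print(f"Capacity: {c / 1000:.1f} kbit/s")  # Capacity: 29.9 kbit/s

# The same equation implies a floor on energy per bit: in the wideband
# limit, Eb/N0 cannot fall below ln(2) (about -1.59 dB), so moving a fixed
# quantity of information always requires a minimum amount of
# information-carrying energy, regardless of available bandwidth.
eb_n0_min_db = 10 * math.log10(math.log(2))
print(f"Shannon limit Eb/N0: {eb_n0_min_db:.2f} dB")  # Shannon limit Eb/N0: -1.59 dB
```

The ~30 kbit/s result is consistent with the practical ceiling late voiceband modems approached over analog telephone channels.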
The distribution of information to a great number of widely separated end-users can be compared to the distribution of many other resources. Familiar analogies include the pipes that deliver water and natural gas to customer premises and the final legs of mail and package delivery networks.
All of these have in common conduits that carry a relatively small amount of a resource a short distance to a very large number of physically separated endpoints. Also common are conduits supporting more voluminous flow, which combine and carry many individual portions over much greater distances. The shorter, lower-volume conduits, which individually serve only one or a small fraction of the endpoints, may have far greater combined length than the larger capacity ones. These common attributes are shown to the right.
The high-capacity conduits in these systems tend to also have in common the ability to efficiently transfer the resource over a long distance. Only a small fraction of the resource being transferred is wasted, lost, or misdirected. The same cannot necessarily be said of lower-capacity conduits.
One reason has to do with economies of scale. Conduits located closer to the endpoint, or end-user, do not individually have as many users supporting them. Even though they are smaller, each carries the overhead of an "installation": obtaining and maintaining a suitable path over which the resource can flow. The funding and resources supporting these smaller conduits tend to come from the immediate locale.
This can have the advantage of a "small-government model". That is, the management and resources for these conduits are provided by local entities and can therefore be optimized to achieve the best solutions in the immediate environment and also to make best use of local resources. However, the lower operating efficiencies and relatively greater installation expenses, compared with the transfer capacities, can cause these smaller conduits, as a whole, to be the most expensive and difficult part of the complete distribution system.
These characteristics have been displayed in the birth, growth, and funding of the Internet. The earliest inter-computer communication tended to be accomplished with direct wireline connections between individual computers. These grew into clusters of small local area networks (LANs). The TCP/IP suite of protocols was born out of the need to connect several of these LANs together, particularly as related to common projects among the United States Department of Defense, industry and some academic institutions.
ARPANET came into being to further these interests. In addition to providing a way for multiple computers and users to share a common inter-LAN connection, the TCP/IP protocols provided a standardized way for dissimilar computers and operating systems to exchange information over this inter-network. The funding and support for the connections among LANs could be shared across the participating LANs.
As each new LAN, or subnet, was added, the new subnet's constituents gained access to the greater network, while the existing networks gained access to the new subnet and anything connected to it. Growth thus became a mutually beneficial, "win-win" event.
In general, economy of scale makes an increase in capacity of a conduit less expensive as the capacity is increased. There is an overhead associated with the creation of any conduit. This overhead is not repeated as capacity is increased within the potential of the technology being utilized.
As the Internet has grown in size, by some estimates doubling in the number of users every eighteen months, economy of scale has resulted in increasingly large information conduits providing the longest distance and highest capacity backbone connections. In recent years, the capacity of fiber-optic communication, aided by a supporting industry, has resulted in an expansion of raw capacity, so much so that in the United States a large amount of installed fiber infrastructure is not being used because it is currently excess capacity "dark fiber".
This excess backbone capacity exists in spite of the trend of increasing per-user data rates and overall quantity of data. Initially, only the inter-LAN connections were high speed. End-users used existing telephone lines and modems, which were capable of data rates of only a few hundred bit/s. Now almost all end users enjoy access at 100 or more times those early rates.
Before considering the characteristics of existing last-mile information delivery mechanisms, it is important to further examine what makes information conduits effective. As the Shannon-Hartley theorem shows, it is the combination of bandwidth and signal-to-noise ratio which determines the maximum information rate of a channel. The product of the average information rate and time yields total information transfer. In the presence of noise, this corresponds to some amount of transferred information-carrying energy (ICE). Therefore, the economics of information transfer may be viewed in terms of the economics of the transfer of ICE.
Effective last-mile conduits must:
In addition to these factors, a good solution to the last-mile problem must provide each user:
Wired systems provide guided conduits for Information-Carrying Energy (ICE). They all have some degree of shielding, which limits their susceptibility to external noise sources. These transmission lines have losses which are proportional to length. Without the addition of periodic amplification, there is some maximum length beyond which all of these systems fail to deliver an adequate S/N ratio to support information flow. Dielectric optical fiber systems support heavier flow at higher cost.
Traditional wired local area networking systems require copper coaxial cable or a twisted pair to be run between or among two or more of the nodes in the network. Common systems operate at 100 Mbit/s, and newer ones also support 1000 Mbit/s or more. While length may be limited by collision detection and avoidance requirements, signal loss and reflections over these lines also define a maximum distance. The information capacity available to an individual user is roughly inversely proportional to the number of users sharing a LAN.
In the late 20th century, improvements in the use of existing copper telephone lines increased their capabilities where maximum line length is controlled. With support for higher transmission bandwidth and improved modulation, these digital subscriber line (DSL) schemes increased capacity 20-50 times compared to the earlier voiceband systems. These methods are not based on altering the fundamental physical properties and limitations of the medium, which, apart from the introduction of twisted pairs, are no different today than when the first telephone exchange was opened in 1877 by the Bell Telephone Company.
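As a quick plausibility check on the 20-50x figure (an illustration; the 56 kbit/s V.90 downstream rate is assumed here as the voiceband baseline), simple arithmetic places the improved rates in the range of early DSL service tiers:

```python
# Assumed baseline: a late-generation voiceband modem (V.90 downstream).
voiceband_bps = 56_000

# The 20-50x improvement attributed to early DSL schemes:
for factor in (20, 50):
    print(f"{factor}x voiceband -> {voiceband_bps * factor / 1e6:.2f} Mbit/s")
# 20x voiceband -> 1.12 Mbit/s
# 50x voiceband -> 2.80 Mbit/s
```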
The history and long life of copper-based communications infrastructure is both a testament to the ability to derive new value from simple concepts through technological innovation and a warning that copper infrastructure is beginning to offer diminishing returns for continued investment. One of the largest costs of maintaining an ageing copper plant is the "truck roll": sending engineers out to physically test, repair, replace and provision copper connections. This cost is particularly significant in providing rural broadband service over copper. Newer technologies such as G.fast and VDSL2 offer viable high-speed solutions for rural broadband provision over existing copper. In light of this, many companies have developed automated cross-connects (cabinet-based automated distribution frames) to reduce the uncertainty and cost of maintaining broadband services over existing copper. These systems usually incorporate some form of automated switching, and some include test functionality, allowing an ISP representative to complete, from the central office via a web interface, operations that previously required a site visit (truck roll). In many countries, the last mile link connecting landline business telephone customers to the local telephone exchange is often an ISDN30, which can carry 30 simultaneous telephone calls.
Community antenna television systems, also known as cable television, have been expanded to provide bidirectional communication over existing physical cables. However, they are by nature shared systems and the spectrum available for reverse information flow and achievable S/N are limited. As was done for initial unidirectional TV communication, cable loss is mitigated through the use of periodic amplifiers within the system. These factors set an upper limit on per-user information capacity, particularly when many users share a common section of cable or access network.
Fiber offers high information capacity and after the turn of the 21st century became the deployed medium of choice ("Fiber to the x") given its scalability in the face of the increasing bandwidth requirements of modern applications.
In 2004, according to Richard Lynch, Executive Vice President and Chief Technology Officer of the telecom giant Verizon, the company saw the world moving toward vastly higher bandwidth applications as consumers embraced everything broadband had to offer and eagerly consumed as much as they could get, including two-way, user-generated content. Copper and coaxial networks would not, and in fact could not, satisfy these demands, which precipitated Verizon's aggressive move into fiber-to-the-home via FiOS.
Fiber is widely regarded as a future-proof technology: it meets the needs of today's users and, unlike copper-based and wireless last-mile media, has capacity headroom for years to come, since the end-point optics and electronics can be upgraded without changing the fiber infrastructure. The fiber itself is installed on existing pole or conduit infrastructure, and most of the cost is in labor, providing regional economic stimulus during the deployment phase and a critical foundation for future regional commerce.
Fixed copper lines have been subject to theft due to the scrap value of copper, but optical fibers make unattractive targets: the glass cannot be resold as scrap, whereas copper is readily recycled.
Mobile CDN providers coined the term "mobile mile" to categorize the last mile connection when a wireless system is used to reach the customer. In contrast to wired delivery systems, wireless systems use unguided waves to transmit ICE. They tend to be unshielded and are more susceptible to unwanted signal and noise sources.
Because these waves are not guided but diverge, in free space these systems are attenuated according to an inverse-square law: received power is inversely proportional to the square of the distance. Losses thus increase more slowly with length than in wired systems, whose power loss grows exponentially with length (linearly in decibels). In a free-space environment, beyond a given length the losses of a wireless system are lower than those of a wired system.
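This crossover can be sketched numerically. In the sketch below (an illustration; the 7 dB per 100 m cable figure and the 40 dB reference loss at 1 m are assumed, representative values, not measurements), wired loss grows linearly in dB with length while free-space spreading adds only 20 dB per decade of distance:

```python
import math

def cable_loss_db(length_m: float, loss_db_per_100m: float = 7.0) -> float:
    """Wired loss grows linearly in dB (exponentially in power) with length.
    7 dB/100 m is an assumed, illustrative figure for coaxial cable at UHF."""
    return length_m * loss_db_per_100m / 100.0

def free_space_loss_db(length_m: float, ref_loss_db: float = 40.0, ref_m: float = 1.0) -> float:
    """Inverse-square spreading: +20 dB per decade of distance.
    ref_loss_db is an assumed loss at the 1 m reference distance."""
    return ref_loss_db + 20 * math.log10(length_m / ref_m)

for d in (10, 100, 1000, 10000):
    print(d, round(cable_loss_db(d), 1), round(free_space_loss_db(d), 1))
# 10 0.7 60.0
# 100 7.0 80.0
# 1000 70.0 100.0
# 10000 700.0 120.0
```

With these assumed figures the wired link loses less over short runs, but somewhere between 1 km and 10 km the curves cross and the free-space path becomes the lower-loss option, matching the qualitative claim above.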
In practice, the presence of atmosphere, and especially obstructions caused by terrain, buildings and foliage can greatly increase the loss above the free space value. Reflection, refraction and diffraction of waves can also alter their transmission characteristics and require specialized systems to accommodate the accompanying distortions.
Wireless systems have an advantage over wired systems in last mile applications in not requiring lines to be installed. However, they also have a disadvantage in that their unguided nature makes them more susceptible to unwanted noise and signals. Spectral reuse can therefore be limited.
Visible and infrared light waves are much shorter than radio frequency waves. Their use to transmit data is referred to as free-space optical communication. Being short, light waves can be focused or collimated with a small lens/antenna, and to a much higher degree than radio waves. Thus, a receiving device can recover a greater portion of the transmitted signal.
Also, because of the high frequency, a high data transfer rate may be available. However, in practical last mile environments, obstruction and misalignment of these narrow beams, and absorption by elements of the atmosphere, particularly fog and rain over longer paths, can greatly restrict their use for last-mile wireless communications. Longer (redder) wavelengths suffer less from obstruction but may carry lower data rates. See RONJA.
Radio frequencies (RF), from low frequencies through the microwave region, have wavelengths much longer than visible light. Although this means that it is not possible to focus the beams nearly as tightly as for light, it also means that the aperture or "capture area" of even the simplest, omnidirectional antenna is significantly larger than that of a lens in any feasible optical system. Because the aperture of a given type of antenna shrinks as frequency rises, systems that are not highly directional suffer greatly increased attenuation or "path loss" at higher frequencies.
Actually, the term path loss is something of a misnomer because no energy is lost on a free-space path. Rather, it is merely not received by the receiving antenna. The apparent reduction in transmission, as frequency is increased, is an artifact of the change in the aperture of a given type of antenna.
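The point can be made concrete with the standard free-space relations (a sketch, not from the original text; the frequencies and 1 km distance are arbitrary illustrative choices): the effective aperture of an isotropic antenna is A = lambda^2 / (4*pi), and the Friis "path loss" between isotropic antennas grows with frequency exactly in step with the shrinking receive aperture.

```python
import math

C = 3.0e8  # speed of light, m/s (rounded)

def isotropic_aperture_m2(freq_hz: float) -> float:
    """Effective capture area of an isotropic antenna: A = lambda^2 / (4*pi)."""
    lam = C / freq_hz
    return lam ** 2 / (4 * math.pi)

def fspl_db(freq_hz: float, dist_m: float) -> float:
    """Free-space 'path loss' between isotropic antennas (Friis formula).
    The frequency dependence reflects the shrinking receive aperture,
    not energy absorbed along the path."""
    lam = C / freq_hz
    return 20 * math.log10(4 * math.pi * dist_m / lam)

# Same 1 km path at 100 MHz (VHF) versus 10 GHz (microwave):
print(round(fspl_db(100e6, 1000), 1))   # 72.4 (dB)
print(round(fspl_db(10e9, 1000), 1))    # 112.4 (dB)

# The 40 dB difference equals the ratio of the two isotropic apertures,
# confirming that nothing extra is "lost" on the path itself:
ratio_db = 10 * math.log10(isotropic_aperture_m2(100e6) / isotropic_aperture_m2(10e9))
print(round(ratio_db, 1))               # 40.0 (dB)
```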
Relative to the last-mile problem, these longer wavelengths have an advantage over light waves when omnidirectional or sectored transmissions are considered. The larger aperture of radio antennas results in much greater signal levels for a given path length and therefore higher information capacity. On the other hand, the lower carrier frequencies are not able to support the high information bandwidths, which are required by Shannon's equation when the practical limits of S/N have been reached.
For the above reasons, wireless radio systems are optimal for lower-information-capacity broadcast communications delivered over longer paths. For high-information-capacity, highly directive point-to-point links over short ranges, wireless light-wave systems are the most useful.
Historically, most high-information-capacity broadcast has used lower frequencies, generally no higher than the UHF television region, with television itself being a prime example. Terrestrial television has generally been limited to the region above 50 MHz where sufficient information bandwidth is available, and below 1,000 MHz, due to problems associated with increased path loss, as mentioned above.
Two-way communication systems have primarily been limited to lower-information-capacity applications, such as audio, facsimile, or radioteletype. For the most part, higher-capacity systems, such as two-way video communications or terrestrial microwave telephone and data trunks, have been limited and confined to UHF or microwave and to point-to-point paths.
Higher capacity systems such as third-generation cellular telephone systems require a large infrastructure of more closely spaced cell sites in order to maintain communications within typical environments, where path losses are much greater than in free space and which also require omnidirectional access by the users.
For information delivery to end users, satellite systems, by nature, have relatively long path lengths, even for low-Earth-orbit satellites. They are also very expensive to deploy, and therefore each satellite must serve many users. Additionally, the very long paths of geostationary satellites cause information latency that makes many real-time applications infeasible.
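The geostationary latency figure follows directly from the orbit geometry. The best-case sketch below (an illustration, assuming the satellite is directly overhead and ignoring processing delays) computes the propagation delay alone:

```python
C = 299_792_458.0               # speed of light in vacuum, m/s
GEO_ALTITUDE_M = 35_786_000.0   # geostationary altitude above the equator, m

# One-way hop: ground -> satellite -> ground (best case, satellite at zenith).
one_way_s = 2 * GEO_ALTITUDE_M / C

# A request/response exchange traverses the hop twice.
round_trip_s = 2 * one_way_s

print(f"one-way: {one_way_s * 1000:.0f} ms, round trip: {round_trip_s * 1000:.0f} ms")
# one-way: 239 ms, round trip: 477 ms
```

A round trip near half a second, before any queuing or processing delay, is why interactive applications such as telephony and online gaming degrade noticeably over geostationary links.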
As a solution to the last-mile problem, satellite systems have application and sharing limitations. The ICE which they transmit must be spread over a relatively large geographical area. This causes the received signal to be relatively small, unless very large or directional terrestrial antennas are used. A parallel problem exists when a satellite is receiving.
In that case, the satellite system must have a very great information capacity in order to accommodate a multitude of sharing users, and each user must have a large antenna, with attendant directivity and pointing requirements, in order to obtain even modest information-rate transfer. These requirements render high-information-capacity, bi-directional information systems uneconomical. This is one reason why the Iridium satellite system was not more successful.
For terrestrial and satellite systems, economical, high-capacity, last-mile communications requires point-to-point transmission systems. Except for extremely small geographic areas, broadcast systems are only able to deliver high S/N ratios at low frequencies where there is not sufficient spectrum to support the large information capacity needed by a large number of users. Although complete "flooding" of a region can be accomplished, such systems have the fundamental characteristic that most of the radiated ICE never reaches a user and is wasted.
As information requirements increase, broadcast wireless mesh systems (sometimes referred to as microcells or nano-cells), which are small enough to provide adequate information distribution to and from a relatively small number of local users, require a prohibitively large number of broadcast locations or points of presence, along with a large amount of excess capacity to make up for the wasted energy.
Recently a new type of information transport midway between wired and wireless systems has been discovered. Called E-Line, it uses a single central conductor but no outer conductor or shield. The energy is transported in a plane wave which, unlike a radio wave, does not diverge, yet which, like radio, has no outer guiding structure.
This system exhibits a combination of the attributes of wired and wireless systems and can support high information capacity utilizing existing power lines over a broad range of frequencies from RF through microwave.
Aggregation is a method of bonding multiple lines to achieve a faster, more reliable connection. Some companies believe that ADSL aggregation (or "bonding") is the solution to the UK's last mile problem.