ATM is not the panacea that you might believe from the hype about it. Here's why:
ATM cells are fixed in size at 53 bytes: 5 bytes of header and 48 bytes of user payload. That is a 9.4% overhead penalty, which translates to an immediate loss of 14.6 megabits per second out of an OC-3 (155Mb/s) link.
In addition, since ATM uses fixed-size cells, packets from an IP, IPX or AppleTalk network must be sliced up into small chunks to fit into them, so there will almost always be a partially filled cell at the end of the cell stream that carries each packet. Quantifying this waste is difficult, because it depends on the average packet size in a given network.
This overhead is often derisively referred to by computer network professionals as "the cell tax."
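The raw cell tax is easy to quantify. A back-of-the-envelope sketch (the 155.52 Mb/s OC-3 line rate is the standard figure; the function name is mine):

```python
# The "cell tax": the fraction of an ATM link consumed by the 5-byte
# header carried in every 53-byte cell.

CELL_SIZE = 53      # bytes on the wire per ATM cell
HEADER_SIZE = 5     # header bytes per cell
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE  # 48 bytes of user payload

def cell_tax(link_mbps: float) -> float:
    """Return the Mb/s lost to cell headers on a link of the given rate."""
    return link_mbps * HEADER_SIZE / CELL_SIZE

oc3 = 155.52  # OC-3 line rate in Mb/s
print(f"header overhead: {HEADER_SIZE / CELL_SIZE:.1%}")
print(f"lost on OC-3:    {cell_tax(oc3):.1f} Mb/s")
```

Note that this is the floor: it counts only the headers, before any padding or adaptation-layer overhead.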
Just as an aside, you might be wondering, "Why 53 bytes? That's a prime number!"
It was a compromise. The Computer Networking People involved in the specification of ATM wanted to use 64 byte cells, to move data with low overhead. The Telephony People wanted to use 32 byte cells, to be able to subdivide their network bandwidth on a very fine grain. Naturally, they split the difference (48 bytes) and then added the 5 byte header, resulting in a cell size that pleases no one.
Did I mention that there is an Adaptation Layer between the ATM header and your packet data? Dear me, more overhead...
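For the curious, the most common adaptation layer for data, AAL5, appends an 8-byte trailer to each packet and pads the result up to a whole number of 48-byte cell payloads. A sketch of the arithmetic (the packet sizes are illustrative):

```python
import math

CELL_PAYLOAD = 48   # payload bytes per cell
CELL_SIZE = 53      # bytes on the wire per cell
AAL5_TRAILER = 8    # AAL5 CPCS trailer (UU, CPI, Length, CRC-32)

def cells_needed(packet_bytes: int) -> int:
    """Cells needed to carry one packet over AAL5: trailer appended,
    then padded up to a whole number of 48-byte cell payloads."""
    return math.ceil((packet_bytes + AAL5_TRAILER) / CELL_PAYLOAD)

def efficiency(packet_bytes: int) -> float:
    """Useful packet bytes as a fraction of bytes actually on the wire."""
    return packet_bytes / (cells_needed(packet_bytes) * CELL_SIZE)

for size in (40, 552, 1500):  # illustrative IP packet sizes
    print(f"{size:5d} bytes -> {cells_needed(size):2d} cells, "
          f"{efficiency(size):.1%} efficient")
```

Even the best case (a full-size 1500-byte packet) stays below 90% efficiency once the headers, the trailer, and the padding are all counted.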
ATM is a complex standard. It is hard to get an implementation correct (lots of specification means lots of potential for misinterpretation), and the complexity makes implementations expensive: lots of parts are required. The resulting high price of ATM network interfaces keeps sales volumes low, so ATM interface manufacturing will likely never achieve the economies of scale needed to make ATM cheap and widely available.
One can observe that FDDI is similarly complex and correspondingly expensive, as compared with 100 megabit per second (Fast) Ethernet. FDDI came on the scene years earlier, but never saw deployment beyond backbone networks, principally because it never got cheap enough (even when adapted to typical building wire plant as CDDI, in addition to fibre) to deploy to the desktops on a mass basis.
With gigabit per second Ethernet on the near-term horizon, it is unlikely that ATM (like FDDI) will ever be deployed in quantities sufficient to make it cheap enough to reach all desktops like Ethernet (or Fast Ethernet, or Gigabit Ethernet).
In theory, ATM switches are supposed to lose no more than one cell in a billion or so. This figure is based on the reliability of fibre networks, and of properly designed chips to do the switching.
In practice, ATM switch manufacturers are finding network congestion a more difficult problem than the theory would lead one to believe; the typical cell loss rates under congestion are alarming, because the switch manufacturers have not put enough buffering in their switches to absorb a full "delay × bandwidth" product.
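The "delay × bandwidth" product is simply the amount of data that can be in flight on a path, and hence the amount a switch may have to absorb when congestion strikes. A rough sketch (the 70 ms round-trip time is an illustrative figure, not from any particular network):

```python
def buffer_bytes(rtt_ms: float, link_mbps: float) -> int:
    """Buffering needed to absorb one full delay x bandwidth product."""
    return int(rtt_ms / 1000.0 * link_mbps * 1e6 / 8)

# A cross-country OC-3 path with an assumed 70 ms round trip:
print(buffer_bytes(70, 155.52))  # on the order of 1.36 megabytes, per port
```

Switches skimping on that megabyte-and-a-third per congested port is where the alarming loss rates come from.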
The worst part is the pattern of the cell loss. If a switch can arrange to lose a whole cluster of cells at once, the damage is contained. If it loses single cells sporadically, however, there is a nasty effect: each lost cell invalidates the entire packet it was part of (with a piece missing, the ATM Segmentation And Reassembly (SAR) layer cannot reconstitute the packet), yet the remaining pieces of the broken packet are delivered all the way to the destination anyway, wasting bandwidth that valid data could have used.
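The multiplier effect is easy to sketch: a 1500-byte IP packet spans about 32 cells over AAL5, so a single lost cell anywhere among those 32 kills the whole packet. A toy model, assuming cells are dropped independently (the loss rates are illustrative):

```python
def packet_delivery_rate(cell_loss: float, cells_per_packet: int) -> float:
    """Probability that a packet survives when each of its cells is
    dropped independently with probability cell_loss."""
    return (1.0 - cell_loss) ** cells_per_packet

# Sporadic single-cell loss is multiplied ~32-fold at the packet level:
for loss in (1e-9, 1e-4, 1e-2):
    print(f"cell loss {loss:g} -> packets delivered "
          f"{packet_delivery_rate(loss, 32):.4%}")
```

At a 1% cell loss rate, barely 72% of full-size packets arrive intact, and the surviving fragments of the dead packets still burn bandwidth all the way to the destination.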
In short, it's better to lose whole packets, than tiny chunks of packets. So why slice the packets up into cells in the first place?
Much is made of ATM's putative ability to do bandwidth reservation and "isochronous" service (guaranteed delivery of bits on each tick of a clock). This is the capability that people who want to send so-called "multimedia" data through computer networks demand. However, I'd claim they haven't thought the issue through carefully enough. It can be summed up as follows:
"If the bandwidth is not there, you lose. If the bandwidth is there, what did you need reservations for in the first place?"
In short, the ability to do "multimedia" data across a network is more a matter of network engineering (did the backbone get provisioned properly with enough bandwidth?) than of any given protocol suite's support for bandwidth reservation. Adding reservations to a situation where there isn't enough bandwidth only serves to pick winners and losers in the bandwidth allocation competition.
The requirement for Constant Bit Rate (CBR) Switched Virtual Circuits (SVCs) and Permanent Virtual Circuits (PVCs) is also predicated on the supposed expense of adding buffering at the receiver to smooth out jitter (non-uniform delivery times of the data in a multimedia stream). RAM is cheap (and getting cheaper; gotta love Moore's Law), so it is silly to design network switches around the assumption that it is expensive.
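A quick sanity check on the "buffering is expensive" premise, with deliberately generous, hypothetical figures for the stream rate and jitter bound:

```python
def jitter_buffer_bytes(jitter_ms: float, stream_mbps: float) -> int:
    """Receiver RAM needed to smooth out the given jitter on one stream."""
    return int(jitter_ms / 1000.0 * stream_mbps * 1e6 / 8)

# Even a generous half second of buffering for an assumed 6 Mb/s
# video stream costs well under half a megabyte of RAM:
print(jitter_buffer_bytes(500, 6.0))  # 375000 bytes
```

A few hundred kilobytes at the receiver versus per-circuit CBR machinery in every switch along the path: the receiver-side fix gets cheaper every year.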
The Internet Engineering Task Force (IETF) has decided to work on a protocol called RSVP that will give hints (not a guarantee) to the routers that a particular application would like to have bounded jitter in the delivery of packets for a particular protocol. This work is in progress.
For as long as there have been computer networks, corporate MIS managers in charge of both computer networks and telephone networks have wished for a single, integrated infrastructure in the hopes that it would lead to lower costs. This dream was the driving force behind the "data over voice" products in the last decade.
ATM speaks to this dream, strongly. Imagine: telephone calls and computer networks intermixed on a single, ATM-based, backbone!
This is, I believe, one of the big drivers of interest in ATM: thousands of beleaguered MIS managers wishing for the moon.
The other property that makes ATM apparently attractive is "scalable" bandwidth: today, dig up the ground, lay down fibre, and put OC-3 (155Mb/s) equipment on the ends. Later, when the requirements have increased and the equipment has improved, replace the OC-3 equipment with OC-24 (1.2Gb/s) equipment, without replacing the fibre infrastructure in between. Poof! The network is eight times faster.
The key point is, one can get this benefit without using ATM (i.e. "scalable bandwidth" is not a property of ATM). Scalable bandwidth is actually a property of SONET/SDH which underlies ATM on fibre networks. The smart network manager can win just by using pure SONET/SDH bit-pipes, without ATM switching overhead on top.
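The SONET rate hierarchy makes the point: OC-n is simply n times the 51.84 Mb/s OC-1 base rate, so upgrading the end equipment scales the link with no ATM involved. A sketch:

```python
OC1_MBPS = 51.84  # SONET OC-1 base line rate in Mb/s

def oc_rate_mbps(n: int) -> float:
    """SONET OC-n line rate: n multiples of the 51.84 Mb/s base rate."""
    return round(n * OC1_MBPS, 2)

print(oc_rate_mbps(3))    # 155.52 (OC-3)
print(oc_rate_mbps(24))   # 1244.16 (OC-24, ~1.2 Gb/s)
```

The scaling lives entirely in the SONET/SDH layer; cells on top add nothing to it.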
The telephone companies have for years reaped enormous profits from their networks by charging for time and distance (the longer one calls, or the farther one calls, the more they charge). The economics of the network has always been the same: a big capital expense to put in the infrastructure in the first place, and then the TelCo can sit back and rake in the profits.
The Internet, built on fixed-price leased lines purchased from the TelCo's, has changed the expectations of the world. On the Internet, there is no extra charge for sending more. There is no extra charge for sending farther. It's all flat-rate priced. This is a wonderful thing: no watching the clock tick as you access that web page. No unpleasant visits from the finance department of your company because you added a new service to your Internet server. The bills are completely predictable.
The TelCo's are horrified by this turn of events: unless they get into Internet services themselves, they are relegated to a thin-margin, commodity-priced bit-pipe market. No more economic profits from monopoly operation of the world's telecommunications infrastructure. They tell two big lies about the Internet:
Lie #1: "The Internet is free."

Truth: The Internet is built on TelCo leased lines, from which the TelCo's reap a handsome profit. The problem with this market is that it is a commodity: in principle, anyone can run a wire or fibre from point A to point B and sell bandwidth, which will drive the prices down and thin the profit margins over time. The TelCo's just want more money; in particular, they want the money from the next level up: the profits of the ISPs, a "value added" service.
What with the growth of the Internet and the multi-billion dollar mergers involving TelCo's, you don't hear this lie very much any more.
Lie #2: "Internet users expect unlimited bandwidth, at all times, for free."

Truth: at any given moment, an Internet user can only use the bandwidth of his link into the Internet, be it a modem (33.6Kb/s), ISDN (64 or 128Kb/s), a T-1 leased line (1.544Mb/s), and so on. The more bandwidth you want, the more you pay; but you pay based on the width of the pipe you buy, not on how much of that pipe you use. It has never been possible for any user to have unlimited bandwidth at any time!
What they're really complaining about here is their inability to provision their backbones properly, since the Internet has a different, less predictable usage pattern than voice phone calls. They've been reaping economic monopoly profits for decades, and now that it's time to pay the piper, they're whining about the money they need to invest in network improvements, and they want all of us to eat the cost of those required backbone network upgrades.
Of course, they might be able to get back to the old usage-priced model if they can just get the Internet to convert from the leased lines over to a switched system again (just like the old PSTN). Enter ATM.
ATM is a stalking horse for per-time, per-byte, per-packet charges for Internet service.
If one must buy bit pipes from the Telephone Companies (and almost everyone does), then insist on using raw SONET or SDH, which is what underlies ATM on fibre. You lose the overhead and other ATM nastiness, while getting a higher percentage of the actual bandwidth of the link that you buy.
Don't be suckered by ATM technology or by those who hype it.