ATM is a common network protocol intended to standardize the way information is transmitted globally. ATM also promises much higher bandwidth to support evolving multimedia requirements such as video and telephony over the Internet. An ATM network consists of a set of ATM switches interconnected by point-to-point ATM links or interfaces. ATM switches support two kinds of interfaces: user-network interfaces (UNI) and network-node interfaces (NNI). A UNI connects an ATM end system (host, router, etc.) to an ATM switch, while an NNI generally connects two ATM switches. ATM does not provide redundant physical links the way FDDI does with its dual-attached stations. Consequently, any end system requiring a redundant connection to an ATM network must support two separate UNIs and either operate one link in standby mode or perform local connection-level load sharing between the links. The connection between a private ATM switch and a public ATM switch is also a UNI -- known as a Public UNI -- since these switches do not typically exchange NNI information.

ATM networks are fundamentally connection oriented: a virtual circuit must be set up across the ATM network before any data transfer can take place. ATM circuits come in two types: virtual paths and virtual channels. A virtual path is a bundle of virtual channels, all of which are switched transparently across the ATM network.

The ATM network is not a standalone network. Rather, it interconnects with other networks such as the public switched telephone network and possibly with mobile (cellular) networks. When examining ATM for quality of service, keep this multi-network picture in mind.

The basic operation of an ATM switch is very simple. First, it receives a cell across a link (UNI or NNI). Next, the switch looks up the cell's connection value in a local translation table to determine the outgoing port (or ports) and the next link of the connection.
Finally, the ATM switch retransmits the cell on that outgoing link with the appropriate connection identifiers.

With ATM, information is divided into 53-byte fixed-length cells, transported to their destination, and reassembled. (As will be described in a moment, this process is similar to, but not to be confused with, a packet-switched network.) The fixed length allows the information to be transported in a predictable manner, and this predictability accommodates different traffic types on the same network. The cell is broken into two main sections: the header and the payload. The payload (48 bytes) is the portion that carries the actual information (voice, data, or video); the 5-byte header is the addressing mechanism. It is the multi-channel nature of ATM that provides the dynamically controlled bandwidth to which most people attribute its potential success.

Point-to-point connections, which connect two ATM end systems, can be unidirectional or bidirectional. Point-to-multipoint connections connect a single source end system (known as the root node) to multiple destination end systems (known as leaves). Cells are replicated wherever the connection splits into two or more branches. Such connections are unidirectional, permitting the root to transmit to the leaves but not the leaves to transmit to the root, or to each other.

What is notably missing from these types of ATM connections is a parallel to the multicasting or broadcasting capability found in most LAN technologies such as Ethernet or Token Ring. In such technologies, multicasting allows multiple end systems to both receive and transmit data to and from multiple other systems. Such capabilities are easy to implement in shared-media technologies such as LANs, where all nodes on a LAN process all packets. The parallel function in ATM to a multicast LAN group would be a bidirectional multipoint-to-multipoint connection. Unfortunately, no such capability exists in today's ATM environment.
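The cell layout and the switch's table lookup described above can be sketched together. This is a minimal illustration, not a real switch implementation: the header field widths follow the standard UNI cell format (GFC 4 bits, VPI 8, VCI 16, PT 3, CLP 1, HEC 8), the HEC byte is left as zero rather than computing the real CRC-8, and the translation-table entries are made-up values.

```python
def build_cell(vpi, vci, payload):
    """Assemble a 53-byte cell: 5-byte UNI header plus 48-byte payload."""
    assert len(payload) == 48, "ATM payload is always exactly 48 bytes"
    return bytes([
        vpi >> 4,                          # GFC (0 here) + VPI high nibble
        ((vpi & 0xF) << 4) | (vci >> 12),  # VPI low nibble + VCI high 4 bits
        (vci >> 4) & 0xFF,                 # VCI middle 8 bits
        (vci & 0xF) << 4,                  # VCI low 4 bits + PT (0) + CLP (0)
        0x00,                              # HEC placeholder (real cells use CRC-8)
    ]) + payload

def parse_cell(cell):
    """Recover the connection identifiers and payload from a 53-byte cell."""
    assert len(cell) == 53, "ATM cells are always exactly 53 bytes"
    vpi = ((cell[0] & 0x0F) << 4) | (cell[1] >> 4)
    vci = ((cell[1] & 0x0F) << 12) | (cell[2] << 4) | (cell[3] >> 4)
    return vpi, vci, cell[5:]

# Local translation table: (incoming port, VPI, VCI) -> (outgoing port, VPI, VCI)
TABLE = {(1, 0, 100): (3, 0, 42)}

def switch(in_port, cell):
    """Look up the connection, then re-emit the cell with a rewritten header."""
    vpi, vci, payload = parse_cell(cell)
    out_port, new_vpi, new_vci = TABLE[(in_port, vpi, vci)]
    return out_port, build_cell(new_vpi, new_vci, payload)

out_port, out_cell = switch(1, build_cell(0, 100, b"\x00" * 48))
# the cell leaves on port 3 carrying the rewritten identifiers VPI 0, VCI 42
```

Note that the switch never inspects the payload; only the 5-byte header is read and rewritten, which is what makes hardware cell switching fast.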
Despite these early handicaps, several advantages make ATM a reasonable investment. First, ATM provides one network for all traffic types: voice, data, and video. Next, because of its high speed, ATM allows the creation and expansion of new applications such as multimedia to the desktop. Because ATM is not tied to any one higher-layer protocol, it is compatible with most of today's network protocols, and it can be transported over twisted pair, coax, and fiber optics. Efforts within the standards organizations continue to ensure that networks will be able to adopt ATM incrementally, upgrading only those portions of the network that need it based on new application requirements and business needs. ATM is evolving into a standard protocol for local, public, and private wide-area services. This standardization will simplify network management by using the same technology at every level of the network. The information systems and telecommunications industries have not yet standardized on ATM, but they are beginning to focus on it as a possible networking standard of the near future. ATM has been designed from the outset to be scalable and flexible in geographic distance, number of users, and access and trunk bandwidths ranging from megabits to gigabits.
The high-level benefits delivered through ATM services deployed on ATM technology using international ATM standards can be summarized as follows:
ATM technologies, standards, and services are being applied in a wide range of networking environments, as described briefly below.

ATM Technologies, Standards, and Services
ATM technology will play a central role in the evolution of current workgroup, campus, and enterprise networks. ATM delivers important advantages over existing LAN and WAN technologies, including the promise of scalable bandwidth at unprecedented price/performance points for bulk data transfer. However, these benefits come at a price. Contrary to common opinion, ATM is a very complex technology. While the structure of ATM cells and cell switching do facilitate the development of hardware-intensive, high-performance ATM switches, the deployment of ATM networks requires a highly complex, software-intensive protocol infrastructure. This infrastructure is required both to allow individual ATM switches to be linked into a network and to let such networks internetwork with the huge installed base of existing local and wide area networks.

The ATM Forum (standards group) recently released new signaling capabilities as part of its UNI 4.0 specification. UNI 4.0 will add support for, among other things, leaf-initiated joins to a multipoint connection (similar to current LAN capabilities). While some would like to use this to allow for true multipoint-to-multipoint connections, note that signaling support for such connections does not imply the availability of the required equipment. At the time of this writing, it is not clear that UNI 4.0 will offer any better solution for multicast over ATM than what exists today. Addressing schemes, routing formats, and protocol interfacing pose challenges for the ATM Forum as efforts continue to integrate this standard across all networks.

Security presents another challenge within an ATM network. It is not at all clear, for example, just how, or whether, it might be possible to implement firewalls in an ATM environment. The problem is that once an ATM connection is set up, no intermediate devices interpret or process the information sent down that connection.
Once a connection is set up between two end nodes, any data can be transmitted through that connection without visibility to network administrators. While firewalls or other security mechanisms could be implemented in the end systems, this is not likely to be a practical solution. Whether the current state of ATM standardization meets your particular needs should be the topic of further research. Point-to-point communications for established media formats are supported today; broad distribution of multiple formats through tiered networks is not yet available.
Frame relay is an interface used for wide-area networking. It reduces the cost of connecting remote sites in any application that would typically use expensive leased circuits; the more locations you have, the greater your savings. With the right carrier service, frame relay network links can be quickly adjusted to meet changes in applications and network topology. Traditional high-speed WANs are built on high-capacity leased lines, which often take months to order and install. Because network managers must react quickly when a new application, organizational unit, business partner, or location is needed, leased-line networks often present a roadblock. Using frame relay, backbone changes can be quickly programmed by the carrier, avoiding the long installation delays and high costs associated with running physical circuits.

WAN designers also have more flexibility when using frame relay. Whereas physical circuits are typically sized in increments of 56 or 64 kilobits, virtual circuits may be defined with finer granularity, often in increments of 4 kilobits. Virtual circuits are directional, allowing the send and receive paths to be of different sizes if necessary. Frame relay is a form of statistical multiplexing, a method of dynamically allocating transmission bandwidth to more efficiently share high-cost links. In this way, a single access line can support connections to many remote sites.

Frame relay is first and foremost an interface: a method of multiplexing traffic to be submitted to a WAN. Carriers build frame relay networks using switches from vendors such as Cascade Communications, Cisco Systems/StrataCom, Northern Telecom, Newbridge Networks, or Bay Networks. As a customer, your devices see only the switch interface and are blind to the inner workings of the carrier network, which may be built on very high-speed technologies such as T1, T3, Sonet, and/or ATM.
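The statistical-multiplexing point above can be made concrete with a little arithmetic. In this sketch, one access line carries several virtual circuits, each with its own committed information rate (CIR); the port speed, site names, and rates are hypothetical values chosen only for illustration.

```python
# One physical access line, shared by four PVCs to different remote sites.
port_speed_kbps = 256

# DLCI -> committed information rate (CIR) in Kbps; all values illustrative.
pvcs = {
    16: 64,   # PVC to site A
    17: 64,   # PVC to site B
    18: 32,   # PVC to site C
    19: 32,   # PVC to site D
}

total_cir = sum(pvcs.values())            # committed bandwidth across all PVCs
headroom = port_speed_kbps - total_cir    # spare capacity for bursting

# A single 256 Kbps port supports connections to four remote sites. Traffic
# sent above a PVC's CIR can burst into the remaining headroom, but the
# carrier may mark such frames Discard Eligible (see below).
```

The same topology built on leased lines would need four separate physical circuits, which is the cost saving the text describes.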
As of this writing, all major carrier networks implement permanent virtual circuits (PVCs). These circuits are established by contract with the carrier and are typically billed on a flat-rate basis. Although switched virtual circuits (SVCs) have standards support and are provided by the major frame relay backbone switch vendors, they have not been widely implemented in customer equipment or carrier networks. Your carrier programs its frame relay switches to allow your traffic to pass through the network. These are the essential parameters you must understand:
What do you pay for? Port speed is typically the most costly parameter to increase, and access rates can jump dramatically if new local loops are involved, such as a move from 56 Kbps to T1. Individual PVC fees are the next most costly. Once a PVC is established, the additional cost to increase the committed information rate (CIR) is typically small and can be applied in small (4 Kbps) increments.

It's important to know that the carrier's backbone is shared by many users and possibly multiple services. To keep you (and everybody else) from sending more data than the network can hold, frames sent above your contracted rate may be marked as Discard Eligible (DE). DE bits are set by the carrier network, not your equipment. If your equipment receives DE-marked frames, this indicates that data sent at this rate in the future may get dropped. This may be an early indicator of traffic rates that you didn't plan for in the design of your frame relay WAN. Frame relay equipment notices congestion when it sees frames marked with the Forward Explicit Congestion Notification (FECN) and Backward Explicit Congestion Notification (BECN) bits. These merely indicate an overload within the carrier network and are only of value in monitoring the carrier's health. You might expect your equipment to notify end stations to stop sending data, to keep additional frames from being discarded or from hitting a congested network. In practice, however, this doesn't happen: most routers, bridges, and frame relay access devices (FRADs) do nothing when these bits get set. Instead, they expect higher-layer protocols, such as TCP/IP, to react implicitly to the packet loss.

Broadcasts over PVC-based networks such as frame relay create special problems. If a broadcast must go to multiple remote sites through PVCs assigned to a single access channel, your equipment is forced to pump it out over each DLCI in turn. As noted elsewhere, you'll want to minimize the use of broadcasts whenever you can.
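The DE, FECN, and BECN bits discussed above all live in the standard two-byte frame relay address field, alongside the 10-bit DLCI. The following sketch pulls them out of a pair of header bytes; the bit positions are the standard ones, but the sample header values are made up for illustration.

```python
def parse_fr_address(b0, b1):
    """Decode the two-byte frame relay address field.

    b0: DLCI high 6 bits, C/R bit, EA=0.
    b1: DLCI low 4 bits, FECN, BECN, DE, EA=1.
    """
    dlci = ((b0 >> 2) << 4) | (b1 >> 4)   # 10-bit circuit identifier
    return {
        "dlci": dlci,
        "fecn": bool(b1 & 0x08),  # Forward Explicit Congestion Notification
        "becn": bool(b1 & 0x04),  # Backward Explicit Congestion Notification
        "de":   bool(b1 & 0x02),  # Discard Eligible, set by the carrier
    }

# Hypothetical header bytes: DLCI 16 with the DE bit set by the network.
fields = parse_fr_address(0x04, 0x03)
# fields == {"dlci": 16, "fecn": False, "becn": False, "de": True}
```

Seeing `de` come back `True` on received frames is exactly the early-warning signal described above: the carrier is telling you that traffic at this rate exceeded your CIR.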
Frame relay was initially designed to transport delay-insensitive data sent by higher-level applications capable of recovering lost or dropped frames. Frame relay has no inherent error-correction mechanism; errored frames are simply discarded, and higher-layer protocols determine which frames need to be re-sent. Since the frames vary in length as they traverse the buffers and switching matrices of the network, delay across the network is non-deterministic. While prioritization mechanisms are sometimes employed to provide differential services, to date there are no true quality of service (QoS) levels available from frame relay networks based on standard metrics. The Frame Relay Forum and the ITU-T (International Telecommunications Union-Telecommunication Standardization Sector) are attempting to address this issue during the coming year.

In contrast, ATM was built from the ground up to provide differential QoS levels (e.g., constant bit rate, variable bit rate, available bit rate, unspecified bit rate) with consistent characteristics (at least for the constant bit rate and real-time variable bit rate service levels). Utilizing a fixed-length payload of 48 bytes and a 5-byte header, ATM switches are able to provide highly deterministic service classes, each optimized for specific types of data and applications.

The question, then, is which technology to use for a given network? Frame relay is historically very good at transporting data that is not highly dependent upon precise delivery intervals. Examples are typical client-server database queries, e-mail and file transfers, and broadcast video applications. ATM has typically been tagged as the transport of choice for delay-sensitive information such as interactive video and voice. However, recent enhancements to the basic frame relay service have tended to blur some of the distinctions between the two. FRF.11 has defined standards for carrying voice over frame relay.
FRF.12 has defined fragmentation procedures that create a more deterministic delay pattern by chopping large data frames into smaller pieces that better match the size of the voice frames, thereby minimizing frame delay variation through the network. So the choice of which technology to use where depends on the character of the predominant traffic. If your network will be transporting a high degree of voice traffic and/or near-broadcast-quality video, ATM is most likely the better choice due to its ability to support constant bit rate (CBR) traffic as well as its inherently broader bandwidth (currently up to OC-12, or 622 Mbps, with plans to extend to OC-48). If the dominant traffic type is non-delay-sensitive data (which could be voice, video, fax, imaging, multimedia, file transfer, or e-mail), frame relay is the better choice even if bandwidth in the range of 45 Mbps is required. Frame relay has less overhead than ATM, is more readily understood by most networking professionals, and is easier to install, and frame relay equipment and services are generally less expensive than ATM devices and services.

Beyond providing a means for frame-based user equipment to access ATM networks at the user-to-network interface, an important consideration is how to interwork frame relay networks with ATM networks, and thereby interwork the network users. This leads to two frame relay/ATM interworking scenarios: Network Interworking and Service Interworking. These two interworking functions provide a means by which the two technologies can interoperate. Simply stated, Network Interworking provides a transport between two frame relay devices (or entities), while Service Interworking enables an ATM user to interwork transparently with a frame relay user, with neither knowing that the far end uses a different technology.
The Network Interworking function provides transparent transport of frame relay user traffic and frame relay PVC signaling (sometimes called the LMI protocol) over ATM. This is sometimes referred to as tunneling: multiprotocol encapsulation (and other higher-layer procedures) are carried transparently, just as they would be over leased lines. An important application of this interworking function (IWF) is connecting two frame relay networks over an ATM backbone network.
As shown in the figure above, the ATM network is used in place of a transmission facility (leased line) to connect the two frame relay networks. The Network IWF can be external to the networks as shown, but is more likely to be integrated into the ATM or frame relay switch. Each frame relay PVC can be carried over its own ATM PVC, or all of the frame relay PVCs can be multiplexed onto a single ATM PVC. This method of connecting frame relay networks may provide economic savings compared to leased lines, especially when the frame relay NNI is operating at low utilization. Network Interworking also includes a scenario in which an ATM host computer emulates frame relay in the service-specific convergence sublayer.
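The multiplexing option above can be sketched simply: under Network Interworking the entire frame relay frame, header included, rides inside the ATM payload, so many frame relay PVCs (distinguished by their DLCIs) can share a single ATM PVC. The tunnel identifiers and frame contents below are illustrative, not drawn from any real configuration.

```python
# Hypothetical VPI/VCI of the single ATM PVC acting as the tunnel.
ATM_TUNNEL_PVC = (0, 200)

def tunnel(frames):
    """Carry each (dlci, payload) frame relay frame over the one ATM tunnel PVC.

    The frame relay header travels inside the ATM payload untouched, which is
    what lets the far-end IWF demultiplex the frames by DLCI on arrival."""
    return [(ATM_TUNNEL_PVC, dlci, payload) for dlci, payload in frames]

carried = tunnel([
    (16, b"frame bound for site A"),
    (17, b"frame bound for site B"),
])
# both frames share the same ATM PVC but keep their own DLCIs
```

This is the transparency the text calls tunneling: the ATM network never interprets the frame relay traffic it carries.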
The Service IWF does not transport traffic transparently. It functions more like a protocol converter in that it facilitates communication between dissimilar equipment.
As shown in the figure above, a frame relay user sends traffic on a PVC through the frame relay network to the Service IWF, which then maps it to an ATM PVC. The frame relay PVC address-to-ATM PVC address mapping and other options are configured by the network management system associated with the IWF. Again, the Service IWF can be external to the networks as shown, but is more likely to be integrated into the ATM or frame relay switch. Note that in the case of Service Interworking, there is always one ATM PVC per frame relay PVC.
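The per-PVC mapping the Service IWF maintains, as configured by its management system, can be sketched as a pair of dictionaries with a one-to-one constraint. The class name and identifier values here are illustrative assumptions, not part of any standard API.

```python
class ServiceIwfMap:
    """Hypothetical address map for a frame relay/ATM Service IWF."""

    def __init__(self):
        self.fr_to_atm = {}   # DLCI -> (VPI, VCI)
        self.atm_to_fr = {}   # (VPI, VCI) -> DLCI

    def add_pvc(self, dlci, vpi, vci):
        # Service Interworking always pairs one ATM PVC with one frame
        # relay PVC, so neither endpoint may already be mapped.
        if dlci in self.fr_to_atm or (vpi, vci) in self.atm_to_fr:
            raise ValueError("PVC endpoints must map one-to-one")
        self.fr_to_atm[dlci] = (vpi, vci)
        self.atm_to_fr[(vpi, vci)] = dlci

iwf = ServiceIwfMap()
iwf.add_pvc(dlci=16, vpi=0, vci=100)
# iwf.fr_to_atm[16] == (0, 100); iwf.atm_to_fr[(0, 100)] == 16
```

Keeping both directions in the table is what lets the IWF translate traffic arriving from either side of the boundary.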
Frame relay PVC status signaling is converted to ATM OAM cells, and likewise, OAM cells are converted to frame relay status signaling. Therefore, if a failure occurs in one network, the user of the other network will be notified. Other indications, such as congestion indication and discard eligible/cell loss priority, are also mapped between the networks per PVC: the Service IWF maps the frame relay DLCI to the ATM VPI/VCI, the FECN bit to the PT field (in which congestion indication is encoded), and the DE bit to the CLP bit. The frame relay multiprotocol encapsulation procedures (RFC-1490 [10]) are not identical to the ATM multiprotocol encapsulation procedures (RFC-1483 [8] and RFC-1577 [9]). When providing frame relay/ATM Service Interworking for multiprotocol routers, the IWF must therefore convert the multiprotocol protocol data unit headers from frame relay to ATM and vice versa. This header processing can be turned on or off per PVC, as some applications do not require it.

Applications of frame relay and ATM technologies overlap, and making a choice between them must be driven by business considerations. This includes consideration of:
The above considerations are key elements in the selection of a network technology. An informed business decision can be made when they are properly weighed and evaluated. Frame relay and ATM each have fundamentally unique characteristics; neither can provide all the features of the other. Therefore, the use of both technologies will continue to grow to keep pace with the applications for which each is best suited.

References:
- ATM Internetworking: Access at the Customer Premises, International Engineering Consortium.
- George C. Sackett & Christopher Y. Metz (1996), ATM and Multiprotocol Networking, McGraw-Hill Series on Computer Communications.