Storage Technology In Depth – DWDM


The explosive growth of the Internet and enterprise business applications is placing tremendous demands on global enterprise and service provider networks. Mission-critical applications such as e-business, customer relationship management, and storage networking, along with emerging applications such as streaming media, are affecting all parts of the network, from access networks to metropolitan-area networks (MANs) and wide-area networks (WANs). These technology challenges affect every industry, from financial services, healthcare, and education to telecommunications service providers.

As business services become critical to daily life, consumers expect instant, uninterrupted access to corporate systems and data. At the same time, unprecedented growth in storage requirements is forcing companies to reassess how and where to meet this steadily increasing demand. New storage-area network (SAN) and network-attached storage (NAS) technologies have emerged to address this issue. These technologies allow enterprises to scale their storage capabilities, providing extended geographical access while improving the overall manageability of their storage resources.

Carrier deployment of fiber-optic cables in the metro area laid the groundwork for dramatic growth in dark-fiber and high-bandwidth availability. Network connections once handled by T1 and T3 facilities now require Fibre Channel, Enterprise Systems Connection (ESCON), Gigabit Ethernet, and, in the future, 10-Gigabit Ethernet to satisfy the demand. This demand, coupled with advances in optical technology such as dense wavelength-division multiplexing (DWDM), has dramatically increased transmission capacity and reduced costs, making it economically attractive for carriers to offer dark-fiber and high-bandwidth services in the metro market.

With the preceding in mind, this article discusses storage consolidation used by storage service providers (SSPs) over metropolitan-area networks (MANs) using dense wavelength-division multiplexing (DWDM). It also explains why this technology is needed, what its advantages are, its possible impact on the storage environment, and some of the barriers to implementation.

So what really is DWDM? Let’s take a look.

What Is DWDM?

Short for Dense Wavelength Division Multiplexing, DWDM is an optical technology used to increase bandwidth over existing fiber optic backbones. More specifically, it is multiplexing using close spectral spacing of individual optical carriers (wavelengths) to take advantage of desirable transmission characteristics (e.g., minimum dispersion or attenuation) within a given fiber, while reducing the total fiber count needed to provide a given amount of information-carrying capacity.

DWDM works by combining and transmitting multiple signals simultaneously at different wavelengths on the same fiber. In effect, one fiber is transformed into multiple virtual fibers. So, if you were to multiplex eight Optical Carrier (OC-48) signals onto one fiber, you would increase the carrying capacity of that fiber from 2.5 Gb/s to 20 Gb/s. Currently, because of DWDM, single fibers have been able to transmit data at speeds up to 400 Gb/s. As vendors add more channels to each fiber, terabit capacity is on its way.
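To make the arithmetic concrete, here is a minimal sketch of the capacity calculation. The channel counts are illustrative, and the OC-48 line rate is rounded to 2.5 Gb/s as in the text above.

```python
# Illustrative arithmetic: aggregate capacity of one DWDM fiber.
OC48_RATE_GBPS = 2.5  # nominal OC-48 line rate (~2.488 Gb/s, rounded)

def aggregate_capacity_gbps(channels: int, rate_gbps: float = OC48_RATE_GBPS) -> float:
    """Total carrying capacity when each wavelength carries one signal."""
    return channels * rate_gbps

print(aggregate_capacity_gbps(8))    # 20.0 Gb/s -- the example above
print(aggregate_capacity_gbps(160))  # 400.0 Gb/s -- one way to reach the
                                     # per-fiber figure cited above
```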

A key advantage of DWDM is that it's protocol- and bit-rate-independent. DWDM-based networks can transmit data as Internet Protocol (IP), Asynchronous Transfer Mode (ATM), Synchronous Optical Network/Synchronous Digital Hierarchy (SONET/SDH), or Ethernet, and can handle bit rates between 100 Mb/s and 2.5 Gb/s. Therefore, DWDM-based networks can carry different types of traffic at different speeds over an optical channel. From a Quality of Service (QoS) standpoint, DWDM-based networks offer a lower-cost way to respond quickly to customers' bandwidth demands and protocol changes.

Dense Wavelength-Division Multiplexing (DWDM) Devices

DWDM devices are used for multiplexing multiple 1 Gbit/sec (or higher) channels on a single fiber. These optical multiplexers are transparent to the underlying protocols, which means that enterprises can use a single DWDM device to transfer Gigabit Ethernet, Gigabit Fibre Channel, ESCON, and SONET on a single fiber, each with its own wavelength.

Enterprises can deploy DWDM devices in point-to-point configurations, or chain point-to-point links together to form a ring. Most DWDM devices support immediate automatic failover to a redundant physical link if the main link is unavailable. In a ring topology, only a single link is needed between nodes – if a link fails, the light is switched to the reverse direction to reach its target. Certain types of DWDM equipment can add and drop wavelengths, enabling wavelength routing in or out of a ring at distances from 70 km to more than 160 km.

DWDM equipment is available in two basic classes – edge class (for enterprise access) and core class (for carriers). For the edge class, DWDM devices are usually smaller and less expensive, and provide fewer channels.

With the preceding in mind, an enterprise can connect two sites over 50 km by using dual Inter-Switch Links (ISLs), connections between two switches through E_Ports (expansion ports that join two switches into a fabric). The dual ISLs between the switches and the DWDM devices provide greater bandwidth (2 Gbits/sec instead of 1 Gbit/sec) but are not required. The DWDM devices can have a hot-standby protected link that is automatically invoked if the main link fails; the protected link should reside on a separate physical path.

For the core class, the DWDM equipment is larger and more expensive, and provides more channels. This DWDM equipment enables ring configurations and provides add and drop capabilities (later in the article an example of service provisioning across four sites is briefly discussed).

MAN-Based Applications

Many types of applications can benefit from a MAN-based SAN configuration. The most common applications include those for remote storage centralization (such as a service provider model), centralized remote backup, and business continuity.

For example, an optical DWDM ring topology provides redundant paths and can fail over from a disconnected path to an alternate path. Let's say Site B has a 70 km connection (the primary path) to Site C. When that connection goes offline, Site B uses the alternate path (the other direction around the ring) over DWDM to restore its connection to Site C. This path (say, from Site B to A to C) spans 100 km (50 km plus 50 km). Because of the extended buffering at the Fibre Channel switch E_Ports, the primary and alternate paths provided nearly the same level of data-access performance during testing.
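The similar performance is easy to see from propagation delay alone. The sketch below compares the two paths, assuming the roughly 5 microseconds/km propagation figure cited later in this article.

```python
# One-way propagation delay for the primary vs. alternate ring paths.
US_PER_KM = 5.0  # assumed fiber propagation delay

def one_way_delay_us(distance_km: float) -> float:
    return distance_km * US_PER_KM

primary_km = 70          # Site B -> Site C, direct
alternate_km = 50 + 50   # Site B -> Site A -> Site C

print(one_way_delay_us(primary_km))    # 350.0 us
print(one_way_delay_us(alternate_km))  # 500.0 us -- only ~150 us more, which
                                       # the buffered E_Ports readily absorb
```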

Storage Centralization Over A SAN/DWDM Infrastructure

Enterprises can centralize storage across a campus or a geographically dispersed environment, or even remotely outsource the work to a storage service provider (SSP). You can also have an SSP configuration where a designated site (Site C, the SSP) provides storage to multiple sites over MAN-based SANs in heterogeneous environments.

In this example, Sites A and B subscribe to Site C (the SSP). Here, zoning (a feature in fabric switches or hubs that allows segmentation of a node by physical port, name, or address) can also be used to isolate heterogeneous fabrics, thereby controlling the amount of storage each customer site can access. Two fabric zones (one for Site A and the other for Site B) isolate storage for the two sites.
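The zoning arrangement can be modeled as simple set membership. The sketch below is a vendor-neutral illustration of the two-zone layout just described; the zone and port names are hypothetical, and a real fabric would configure zones through the switch's own management interface.

```python
# Vendor-neutral model of the two-zone SSP layout described above.
# Zone and member names are hypothetical.
zones = {
    "zone_site_a": {"site_a_hba", "ssp_storage_port_1"},
    "zone_site_b": {"site_b_hba", "ssp_storage_port_2"},
}

def can_access(initiator: str, target: str) -> bool:
    """Two ports can communicate only if they share at least one zone."""
    return any(initiator in members and target in members
               for members in zones.values())

print(can_access("site_a_hba", "ssp_storage_port_1"))  # True
print(can_access("site_a_hba", "ssp_storage_port_2"))  # False -- Site A is
                                                       # isolated from Site B's storage
```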

Centralized Backup Over A SAN/DWDM Infrastructure

Centralized remote backup enables multiple sites to back up data to a single shared tape library by using fabric zoning. Sites A and B can share the tape library provided by Site C, which places the tape library in both sites' respective zones. As a result, each site can perform data backup with any tape device in the library.

Business Continuity Over A SAN/DWDM Infrastructure

A business continuity solution provides synchronous data mirroring to a remote location. In the event of a disaster, a redundant system can take over for the main system and access the mirrored data. This solution also facilitates recovery from the redundant remote system back to the main system once the main system is operational again. Two sites can utilize this type of solution concurrently.
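Because synchronous mirroring holds each write until the remote copy acknowledges it, distance translates directly into write latency. A back-of-envelope sketch, assuming the ~5 microseconds/km propagation figure used elsewhere in this article and ignoring equipment latency:

```python
# Minimum added write latency for synchronous mirroring over distance.
US_PER_KM = 5.0  # assumed fiber propagation delay; equipment latency ignored

def sync_write_penalty_us(distance_km: float) -> float:
    """Each synchronous write waits for the remote ack: one round trip."""
    return 2 * distance_km * US_PER_KM

for km in (10, 50, 100):
    print(f"{km} km -> {sync_write_penalty_us(km)} us per write")
# 10 km -> 100 us, 50 km -> 500 us, 100 km -> 1000 us
```

This is why the low latency of a metro DWDM network matters so much for synchronous mirroring, a point the summary below reiterates.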

For example, Sites A and B are the primary sites (running different operating systems), and Site C is the remote business continuance site for both Sites A and B. If either Site A or B goes down, it can fail over to Site C.

A Multinode DWDM Configuration

Finally, as previously mentioned, let's briefly look at a multinode DWDM configuration that spans four sites (DWDM 1, 2, 3, and 4) and provisions optical services. For example, let's say that there are four switches, with each switch's E_Ports connected over a DWDM channel that includes dual paths for transmitting and receiving. Each path has its own wavelength. The DWDM passthrough feature enables non-contiguous sites to connect over an intermediate site as if they were directly connected. Because the passthrough is a passive device, it adds no processing overhead; the only additional cost is the minimal propagation latency (5 usec/km) of the second link.
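A quick sketch of that passthrough cost, with hypothetical link lengths:

```python
# Passthrough cost model: connecting non-adjacent sites through an
# intermediate DWDM node adds only the propagation delay of the extra
# link (the passthrough itself is passive). Distances are hypothetical.
US_PER_KM = 5.0

link_1_to_2_km = 40   # site 1 to intermediate site 2
link_2_to_3_km = 35   # intermediate site 2 to target site 3

extra_delay_us = link_2_to_3_km * US_PER_KM
total_delay_us = (link_1_to_2_km + link_2_to_3_km) * US_PER_KM
print(extra_delay_us, total_delay_us)  # 175.0 us extra, 375.0 us end to end
```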

Each of the links can operate in protected mode, which provides a redundant path in the event of a link failure. In most cases, link failures are automatically detected within 50 msec. When a failure occurs, the two wavelengths of the failed link reverse directions and reach the target port on the opposite side of the ring. If the link between DWDM 1 and 4 fails, the transmitted wavelength from 4 to 1 would reverse direction and reach 1 through 3 and 2. The transmitted wavelength from 1 to 4 would also reverse direction and reach 4 through 2 and 3.

How you calculate the distance between nodes in a ring depends on the implementation of the protected-path scheme. For instance, if the link between DWDM 2 and 3 fails, the path from 1 to 3 would be 1 to 2, back from 2 to 1 (due to the failed link), 1 to 4, and finally 4 to 3. This illustrates the need to utilize the entire ring circumference (and more, in a configuration with over four nodes) for failover.

Another way to calculate distance between nodes is to set up the protected path in advance (in the reverse direction) so the distance is limited to the number of hops between the two nodes. In either case, the maximum distance between nodes determines the maximum optical reach. An example of this specification is 80 to 100 km for a maximum distance between nodes and 160 to 400 km for maximum ring size.
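Under stated assumptions (a four-node ring with hypothetical 50 km links, and the first scheme above, where traffic reverses at the failed link), the path lengths work out as follows:

```python
# Protected-path length in a four-node DWDM ring (nodes 1-2-3-4-1).
# Link lengths are hypothetical; this models the first scheme above,
# where traffic travels to the failed link and then reverses direction.
LINKS = {(1, 2): 50, (2, 3): 50, (3, 4): 50, (4, 1): 50}  # km, either direction

def link_km(a: int, b: int) -> int:
    return LINKS.get((a, b)) or LINKS[(b, a)]

# Normal path 1 -> 2 -> 3 versus the protected path when link 2-3 fails:
# out to the failure (1 -> 2), back (2 -> 1), then the long way (1 -> 4 -> 3).
normal_km = link_km(1, 2) + link_km(2, 3)
protected_km = (link_km(1, 2) + link_km(2, 1)
                + link_km(1, 4) + link_km(4, 3))
print(normal_km, protected_km)  # 100 vs 200 km -- the full ring circumference
```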

But suppose you have a couple of SAN switches: one at location X and one at location Z, with the sites approximately 5 km apart. You attempted to link the two switches using dedicated dark fiber across a DWDM system. Each of the line cards you are using is specified for 850 nm fiber. The link failed, and you were told that it failed because you were using short-haul gigabit interface converters (GBICs, transceivers that convert electrical signals (digital highs and lows) to optical signals, and optical signals back to electrical signals). But the GBICs and line cards are rated for 850 nm, so that shouldn't be an issue.

The questions you need to ask here are: Is there some configuration feature that you need to set? Or is there anything else you can do to establish these links across DWDM? The answers to both questions follow.

Troubleshooting Cables Across A DWDM

Having the correct cable connections will make or break your link to a remote site. To help ensure you have the right configuration, let's now take a look at the specs for fiber connections, including the maximum distance each can support between devices.

The Problem

The problem, it appears, is related to how your connections are set up. The correct specifications for fiber connections are as follows (the sketch after the list applies these limits to the failing 5 km link):

  • Shortwave (850 nm) GBICs using 50u cable have a maximum distance of 500 meters between end devices.
  • Longwave (1300 nm) GBICs using 9u cable have a maximum distance of 10 kilometers between end devices without an extender/repeater.
  • By using DWDM on dark fiber (9u cable), the distance can be extended up to 100 kilometers between end devices.
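Here is a small lookup sketch, under the assumption that these three limits are the only ones in play; the dictionary keys are just illustrative labels.

```python
# Distance limits from the list above (no extender/repeater).
MAX_DISTANCE_KM = {
    ("shortwave_850nm", "50u_multimode"): 0.5,
    ("longwave_1300nm", "9u_singlemode"): 10.0,
    ("dwdm_dark_fiber", "9u_singlemode"): 100.0,
}

def link_ok(optics: str, cable: str, distance_km: float) -> bool:
    """Check a planned link against the published distance limit."""
    limit = MAX_DISTANCE_KM.get((optics, cable))
    return limit is not None and distance_km <= limit

# The failing setup from the scenario above: 850 nm optics over ~5 km.
print(link_ok("shortwave_850nm", "50u_multimode", 5.0))  # False
print(link_ok("dwdm_dark_fiber", "9u_singlemode", 5.0))  # True
```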

The Solution

So, for you to create an extended fabric between your sites 5 km apart, your configuration should look like this:

  • Host to switch: 50u multi-mode cable to shortwave GBIC (850 nm).
  • Switch host port uses a shortwave GBIC (850 nm).
  • Switch to DWDM: 50u multi-mode cable to shortwave GBIC (850 nm).
  • DWDM to remote site connection: longwave GBIC (1300 nm) to 9u single-mode cable (dark fiber).

Thus, your line cards have shortwave multi-mode connections. That's fine for the local connections, but you need longwave single-mode connections for the link between the sites.

Finally, you would have the same setup on the remote side. You may also need the SAN extension license for your switches, as that increases the number of available buffer credits between the switches. The bottom line is that all of these distances should continue to increase as fiber-optic technology advances.
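Why do buffer credits matter at distance? A Fibre Channel port can keep only as many frames in flight as it has buffer-to-buffer credits, and each credit is tied up for roughly a round trip. A back-of-envelope estimate, assuming full-size frames (about 2148 bytes on the wire) and 5 microseconds/km propagation:

```python
# Rough buffer-to-buffer credit estimate for a long FC inter-switch link.
# Assumptions: full-size frames, 5 us/km propagation, and a credit freed
# only after the frame arrives and the acknowledgment returns.
import math

US_PER_KM = 5.0
FRAME_BYTES = 2148  # 2112-byte payload plus headers and overhead

def bb_credits_needed(distance_km: float, rate_gbps: float) -> int:
    frame_time_us = FRAME_BYTES * 8 / (rate_gbps * 1000)  # serialization time
    round_trip_us = 2 * distance_km * US_PER_KM
    return math.ceil(round_trip_us / frame_time_us) + 1

print(bb_credits_needed(5, 1.0625))    # ~5 credits for the 5 km link above
print(bb_credits_needed(100, 1.0625))  # ~63 -- why extended fabrics need
                                       # the extra credits the license unlocks
```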

Summary And Conclusions

DWDM is ideal for reliable metro area connectivity between two data centers, while SONET provides high Time Division Multiplexing (TDM) bandwidth over longer distances. Both technologies provide excellent transport options for remotely replicated data over Fibre Channel or Fibre Channel Over Internet Protocol (FCIP).

Finally, by internetworking SANs over distance across MANs using DWDM, enterprises can implement a highly reliable environment, replicate business-critical data to remote locations, and support business continuance applications such as data mirroring, data replication, electronic tape vaulting, and remote server clustering. These business continuance applications, and the associated storage-area networking (SAN) technologies such as Fibre Channel and ESCON, require a fault-tolerant, high-bandwidth, and low-latency network. For synchronous mirroring, the low latency of a DWDM optical network is critical to avoid a negative impact on application performance.


About the Author: John Vacca is an information technology consultant and author. Since 1982, John has authored 36 technical books, including The Essential Guide to Storage Area Networks, published by Prentice Hall. John was the computer security official for NASA's space station program (Freedom) and the International Space Station Program from 1988 until his early retirement from NASA in 1995. John can be reached at jvacca@hti.net.


