The Basics of SAN Implementation, Part I Page 4
Network Engineering and SAN/Storage Implementation
Many enterprises face the choice of turning the SAN fabric implementation over to the existing network engineering or network administration team. There are a few cases in which traditional network administration should be considered:
- The IP network segments of the LAN/WAN infrastructure should be managed by existing network management teams when a NAS or IP storage architecture is being considered for pooled storage.
- The IP stack on the host server, network interface cards (NICs), Ethernet cabling, routers, hubs, and bridges should all be managed by the IP network team(s) to preserve span of control.
- Only the NAS appliance or NFS/CIFS servers and associated storage systems should be administered by a separate storage team.
- When storage wide area networks (SWANs) are being considered in the storage topology, the network group will often be responsible for the DWDM or ATM converters in the asynchronous segment of the SAN extension.
Except for the cases previously mentioned, most organizations determine that FC-based topologies and the associated storage arrays should be administered and managed by a net-new group within the IT organization. Network engineering has not, historically, been accountable for SCSI connections to disk. With that in mind, a fundamental change from SCSI to FC does not provide a justifiable reason to hand this segment to the network group.
Furthermore, LUN security, zoning, and volume management are closely knit administrative functions that may be applied through host software (HBA persistent binding), switch zoning, or LUN allocation mapping in the storage array. If the chain of control is segmented across various groups or organizations, each responsible for a different segment, end-to-end support becomes difficult to trace and may cause accountability and coordination nightmares.
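To see why these functions are hard to separate, consider the end-to-end path: a host HBA (identified by its world wide name, WWN) must appear in a switch zone together with the array port, and the array's LUN-masking table must expose the LUN to that same WWN. The sketch below models this conceptually; the zone names, WWNs, and LUN numbers are invented for illustration.

```python
# Hypothetical model of the zoning + LUN-masking dependency described above.
# A host reaches a LUN only if BOTH the fabric zone and the array's masking
# table permit it -- two controls typically owned by different teams.

hba_wwn = "10:00:00:00:c9:2b:5a:01"    # host HBA port WWN (invented)
array_wwn = "50:06:01:60:3b:20:11:aa"  # storage array port WWN (invented)

# Switch zoning: sets of WWNs allowed to communicate on the fabric.
zones = {
    "zone_host01_array": {hba_wwn, array_wwn},
}

# Array-side LUN masking: which LUNs each initiator WWN may see.
lun_masking = {
    hba_wwn: [0, 1],
}

def host_can_reach_lun(initiator: str, target: str, lun: int) -> bool:
    """True only if zoning AND LUN masking both permit the path."""
    zoned = any({initiator, target} <= members for members in zones.values())
    masked = lun in lun_masking.get(initiator, [])
    return zoned and masked

print(host_can_reach_lun(hba_wwn, array_wwn, 0))  # zoned and masked
print(host_can_reach_lun(hba_wwn, array_wwn, 2))  # zoned but not masked
```

A change on either side (a zone edit by a fabric team, or a masking edit by a storage team) can silently break the path, which is the coordination problem the text warns about when control is split.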
If the network group wants control of the FC fabric switches and hardware, it must also be willing to take control of the upstream (HBA, world wide name (WWN), and software agent) and downstream (storage management) functions that go with the SAN. These are indivisible functions that need to be fully visible to, and controlled by, a single entity. Enterprises following best practices in SAN implementation and organization should therefore assign SAN fabric and storage management to a "net-new team" that is separate from, but closely connected to, the server, application, and network teams.
SAN Implementation Team Success Metrics
Because a new organization is being created to support a new technology, the burden of proof for "expected" SAN or NAS benefits often lies with that organization as much as with the technology itself. There are several compelling reasons to justify or promote pooled storage architectures. Some of the more prominent ones are listed below:
- Asset utilization improvement.
- Backup/restore improvements, reducing backup window times.
- Data protection through replication and mirroring.
- Disaster recovery improvements.
- Higher availability of data.
- Manageability, improved total cost of ownership (TCO).
- Meeting SAN and storage projects on time and on budget.
- Performance improvements for FC storage I/O.
- Reduced headcount per terabyte (TB) of storage, improved server-to-administrator ratios.
- Space reclamation and data center consolidation.
- Storage growth management and scalability.
Server Consolidation, Storage Consolidation
Reduction of Windows NT or UNIX servers through consolidation or pooled storage should yield a measurable drop in system administration (SA) staffing over some time period. Since the data management workload is transferred from the SAs to storage administrators, it is theorized that separating servers from disk allows a higher server-to-SA ratio to be realized. Tracking the server inventory and associated staff levels before and after the SAN implementation allows the number of full-time equivalent (FTE) system administrators to be calculated.
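The before-and-after comparison can be expressed as a simple ratio calculation. The sketch below uses invented, purely illustrative server counts and FTE figures to show how the server-to-administrator metric would be tracked.

```python
# Illustrative calculation of server-to-administrator ratios before and
# after a SAN consolidation. All figures below are hypothetical examples.

def server_to_admin_ratio(servers: int, admin_ftes: float) -> float:
    """Servers managed per full-time-equivalent (FTE) administrator."""
    return servers / admin_ftes

# Example inventory: consolidation retires servers AND shifts some
# data-management workload to a separate storage team, reducing SA FTEs.
before = server_to_admin_ratio(servers=120, admin_ftes=8.0)  # 15.0 servers/SA
after = server_to_admin_ratio(servers=90, admin_ftes=4.5)    # 20.0 servers/SA

improvement_pct = (after - before) / before * 100
print(f"Before: {before:.1f} servers/SA")
print(f"After:  {after:.1f} servers/SA")
print(f"Improvement: {improvement_pct:.0f}%")
```

Captured at regular intervals, the same calculation provides a time series against which the "improved server-to-administrator ratios" success metric above can be measured.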