Whether fragmentation affects storage area networks (SANs) is a matter of debate, but users nonetheless claim performance benefits from using defragmentation solutions in their data storage environments (see Does Fragmentation Hurt Storage Network Performance?).
Synectics Group of Allentown, Pa., has a 2TB HP MSA1000 and two Dell EqualLogic SANs, one 4TB and the other 3.5TB. According to Kenneth Bucci, a technical support specialist at the firm, customer complaints about performance ended when the company added Diskeeper’s defragmentation solution. In the case of the HP array, Diskeeper runs on the servers attached to the SAN; for the EqualLogic arrays, it runs inside the virtual machines hosted on those boxes.
Bucci said he had observed a steady deterioration in performance over time. On the HP array, some of the servers were 90 percent fragmented, and he reported cleaning up about 7,000 fragments a day on each of those machines.
In his environment, temporary files are created and deleted all day long, producing large numbers of fragmented files on both VMs and physical file servers. He said defragmenting his drives yielded an 80 percent increase in application performance.
Boeing Defrags Windows
Aerospace giant Boeing is another company running defragmentation in a SAN environment. James Moore, an operations and maintenance specialist at Boeing, has more than a dozen Windows servers running SQL Server, Windows Server 2003, and various business applications. Data is stored on some EMC disk arrays and an HP StorageWorks EVA SAN. Moore found more than 13,000 fragments on one Windows machine when he installed the defrag utility. In his experience, running the utility consumes no noticeable resources.
“I can run all of the servers at once without any issues, but if I stop Diskeeper even for a very short time, everything really slows down in a hurry,” said Moore.
Thomas Memorial Hospital in South Charleston, W.V., relies mainly on an iSCSI SAN due to cost and ease of setup. But it also has one Fibre Channel (FC) array (an HP StorageWorks EVA SAN), which is required for one of its applications. All of its iSCSI arrays have 14 drives; some have 250GB hard drives while others have 760GB hard drives. All are set up with RAID 50 and two hot-swappable drives. These serve more than 200 Windows servers, a few Linux boxes, one OpenVMS system and one AS/400.
Many of these systems had gone years without ever being defragmented. The first hint of trouble came from an application running a massive Oracle database: query times on those servers were getting longer. Adding more RAM and another processor brought some relief, but performance continued to lag. When Matthew Barnes, a hospital systems administrator, loaded defrag trialware on one server, he was shocked at the number of fragments.
“We have seen much better performance on data transfers,” said Barnes. “Our servers are able to search the databases for images much easier and more fluently.”
A lot of data at the hospital is moving constantly. Medical records, X-rays, MRIs, CTs and ultrasounds are being stored on the SAN and are accessed heavily by doctors and nurses. The high-resolution images consume a lot of drive space. If performance slows down much, IT starts to receive calls.
“iSCSI can only run as fast as your network, so you need to be running defragmentation software,” said Barnes. “Keeping the SAN at optimal performance means better I/O — you don’t even realize it is attached using an Ethernet cable.”
How Defrag Works with Storage
While these examples are anecdotal evidence of the value of defragging a SAN, some storage administrators still worry that the software could interfere with SAN controllers, device drivers or RAID algorithms. The following account from a storage administrator at a telecom company explains in more detail how defragmentation works in conjunction with data storage systems.
The unnamed administrator has an HP EVA6000 SAN, but the explanation applies generally to just about any product on the market. The EVA has its own optimized way of writing to its disks, and the contention from some is that defragmentation software might interfere with or even break that optimized writing, hence the occasional advice to avoid defrag on such equipment.
Here’s how defrag works: the EVA sees the disks and writes to them in its own predetermined fashion. This happens at the hardware level, where the data on those disks is arranged to give the user the best possible performance. But that doesn’t account for the software side. The Windows Server OS connected to the EVA sees the storage from its own point of view: the data is presented as a single logical drive, and the files on that drive are either contiguous or fragmented. When a defragmentation tool reorganizes the data into contiguous files, it optimizes the volume for the OS at the software level.
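The separation of the two layers described above can be sketched in a few lines of Python. This is a hypothetical model, not any vendor's actual layout logic: the OS only cares whether a file's logical blocks are sequential, while the array controller maps logical blocks to physical disks with its own striping scheme, which a logical defrag never touches.

```python
# Minimal sketch of the two storage views (all numbers are assumptions):
# the OS sees one logical drive; the array stripes blocks its own way.

NUM_DISKS = 4          # disks behind the array controller (assumed)
STRIPE_SIZE = 2        # logical blocks per stripe unit (assumed)

def physical_location(lba):
    """Array-side mapping: stripe logical blocks across disks round-robin.
    A logical defrag has no influence on this function."""
    stripe = lba // STRIPE_SIZE
    return (stripe % NUM_DISKS, lba)   # (disk index, block address)

def is_contiguous(blocks):
    """OS-side view: a file is unfragmented if its logical blocks are sequential."""
    return all(b2 == b1 + 1 for b1, b2 in zip(blocks, blocks[1:]))

# A fragmented file: three extents scattered across the logical drive.
fragmented = [10, 11, 40, 41, 90, 91]
# After a logical defrag: the same data in one contiguous run of blocks.
defragged = [10, 11, 12, 13, 14, 15]

print(is_contiguous(fragmented))  # False: three fragments
print(is_contiguous(defragged))   # True: one extent
# The array still stripes those blocks its own way in both cases:
print([physical_location(b) for b in defragged])
```

Defragmenting changes only which logical block addresses a file occupies; the controller applies the same write pattern to those addresses either way, which is the administrator's point in the quote that follows.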
“While EVA may have moved some data around at the prompting of the defragmentation software, the EVA disk controllers are doing the moving of the data in the same optimized writing patterns that it does for itself like any other write job,” said the telecom administrator. “The EVA writing patterns are what they are — they are not changed by any software package.”
The bottom line is that the data is optimized by and for the EVA at the hardware level anytime a write occurs. Adding defragmentation to the mix rearranges the data from the software’s perspective, making data access more efficient at both the hardware and OS levels.
The admin gives the example of deteriorating job completion times on a SQL Server. Analyzing the server’s LUNs with the defrag tool native to Windows showed every LUN/drive as severely fragmented. One pass of defragmentation eliminated the fragments, and job completion times dropped from 54 minutes back to their normal range of 30 minutes.
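A back-of-the-envelope model shows why fragment counts translate into longer job times on spinning disks: each fragment can cost an extra seek before the sequential transfer resumes. The numbers below are illustrative assumptions, not measurements from the systems described in this article.

```python
# Rough seek-overhead model (all figures are assumed, for illustration only).

SEEK_MS = 8.0        # assumed average seek + rotational latency per fragment
TRANSFER_MB_S = 120  # assumed sequential transfer rate

def read_time_ms(file_mb, fragments):
    """Total read time: one seek per fragment plus the sequential transfer."""
    transfer_ms = file_mb / TRANSFER_MB_S * 1000
    return fragments * SEEK_MS + transfer_ms

contiguous = read_time_ms(500, 1)     # one extent
shattered = read_time_ms(500, 2000)   # a hypothetical heavily fragmented file

print(f"contiguous: {contiguous:.0f} ms")  # ≈ 4175 ms
print(f"fragmented: {shattered:.0f} ms")   # ≈ 20167 ms
```

Under these assumptions, seek overhead dominates the read time of a badly fragmented file, which is consistent with the kind of job-time swing the administrator reported.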