A Trip Down the Data Path: I/O and Performance



Why Does This Matter?

All of these issues matter because they affect overall system performance, CPU overhead, and the performance of the underlying I/O system. In my second column I discussed how I/O technology has changed over the last ~30 years, along with some hardware realities that are not really changeable.

Given those hardware realities, the bottom line is that you must either make large I/O requests to use the hardware efficiently, or spread the work across many disk drives so that many disks can each be performing an average seek and average rotational latency for their I/O requests at the same time.
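To see why request size dominates, consider a back-of-the-envelope calculation. The numbers below are illustrative assumptions, not measurements: an average seek plus rotational latency of 12 ms and a 100 MB/s media transfer rate. The per-request positioning cost is fixed, so small requests waste almost all of the disk's time:

#include <stdio.h>

/* Hypothetical disk parameters (assumptions for illustration):
   12 ms average seek + rotational latency, 100 MB/s media rate. */
static double effective_mbps(double request_bytes)
{
    double seek_latency = 0.012;          /* seconds per request */
    double media_rate = 100.0e6;          /* bytes per second */
    double total = seek_latency + request_bytes / media_rate;
    return request_bytes / total / 1.0e6; /* delivered MB/s */
}

int main(void)
{
    printf("4 KB requests: %5.1f MB/s\n", effective_mbps(4096.0));
    printf("4 MB requests: %5.1f MB/s\n", effective_mbps(4194304.0));
    return 0;
}

With these assumed numbers, 4 KB requests deliver well under 1 MB/s, while 4 MB requests come within about 25 percent of the media rate, which is why both larger requests and more spindles are the standard remedies.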


Making Larger Requests

When using C library calls, you can change the size of your library buffer with the setvbuf(3) function. You must call setvbuf(3) after the file has been opened or created with fopen(3), but before any reading from or writing to the file.

The library buffer size should be a multiple of 512 bytes plus exactly 8 additional bytes. (Note: even though Microsoft supports buffer sizes smaller than 512 bytes, those sizes should only be used for devices with smaller physical hardware units; most current disks use 512-byte sectors.) The 8 extra bytes are required for the hash table maintained for each library buffer. If they are omitted, the I/O is no longer done on 512-byte (hardware sector) boundaries, which significantly reduces I/O performance and increases the system overhead for I/O.

For example, if you want to set your library buffer size to 64 KB, you would set it to 65536 + 8 = 65544 bytes.
The following example shows how to use setvbuf(3) with a 256 KB buffer. Note that the allocation must include the 8 extra bytes so it is at least as large as the size passed to setvbuf(3):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *buf;
    FILE *fp;

    fp = fopen("data.fil", "r");
    buf = valloc(262144 + 8);   /* buffer must include the 8 extra bytes */
    setvbuf(fp, buf, _IOFBF, 262144 + 8);
}
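For completeness, here is a compilable sketch of the same technique applied to writes. The file name, the record-writing loop, and the use of malloc(3) in place of the page-aligned valloc(3) are illustrative assumptions, not part of the original example:

#include <stdio.h>
#include <stdlib.h>

#define BUFSZ (262144 + 8)   /* 256 KB plus the 8 extra bytes */

int main(void)
{
    FILE *fp = fopen("data.fil", "w+");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    /* setvbuf(3) must run after fopen(3) but before any I/O. */
    char *buf = malloc(BUFSZ);
    if (buf == NULL || setvbuf(fp, buf, _IOFBF, BUFSZ) != 0) {
        fprintf(stderr, "could not install the stream buffer\n");
        return 1;
    }

    /* Writes now accumulate in the 256 KB buffer and reach the
       operating system as large requests instead of tiny ones. */
    for (int i = 0; i < 100000; i++)
        fprintf(fp, "record %d\n", i);

    fclose(fp);          /* flushes the buffer, then releases the stream */
    free(buf);           /* safe only after the stream is closed */
    remove("data.fil");  /* clean up the scratch file */
    puts("done");
    return 0;
}

The buffer passed to setvbuf(3) must remain valid until the stream is closed, which is why free(3) is called only after fclose(3).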


