Fast Forward to the Early 1990s
Fast forward to the early 1990s, when a group that started under Unix System Laboratories made an effort to develop mainframe-like statistics for Unix. The group was called the Performance Management Working Group (PMWG) and was headed by Shane McCarron. It later moved under X/Open and actually published a standard, but since no hardware vendor adopted it, the work could technically be considered a failure.
At the same time, a number of software vendors were writing data collectors to gather any and all performance statistics from various vendors' operating systems. Some combined these with statistics from Oracle and early web servers, and a few actually wrote drivers to track everything done in the kernel. My experience is that these drivers provided much-needed statistics, but at extremely high overhead given their single-threaded implementation.
The problem with modeling Unix systems was that their workloads did not have the bell-shaped distributions mainframes did. Sometimes the distribution was multi-modal, which broke the standard mainframe modeling techniques based on queuing theory. Here is some good reading if you're interested:
http://www.cs.uml.edu/~giam/Mikkeli/ (Lectures 1-10)
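To see why a multi-modal distribution causes trouble, here is a small illustrative sketch (the numbers are hypothetical, not from any real system): a response-time sample with two distinct modes, where the mean lands in a gap that almost no individual request ever experiences, so any model built around that "average" misrepresents the workload.

```python
import random
import statistics

rng = random.Random(0)

# Hypothetical response-time sample (milliseconds): two workload modes,
# e.g. in-memory hits near 1 ms and disk-bound requests near 20 ms.
sample = [rng.gauss(1.0, 0.1) if rng.random() < 0.7 else rng.gauss(20.0, 2.0)
          for _ in range(10_000)]

# The mean falls between the two modes -- a latency that essentially
# no single request ever sees.
mean = statistics.mean(sample)

# Count how many observations actually land near that mean.
in_between = sum(1 for x in sample if 5.0 < x < 9.0)
```

With this mix, `mean` comes out around 6.7 ms while `in_between` is effectively zero, which is exactly the situation where a bell-curve assumption falls apart.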
This was especially true for networks, so some of the early products, such as COMNET (now owned by Compuware), were developed for networking only. The methodology these vendors most often chose was discrete event simulation. This method represents the individual events in a system and the interactions between them, and it does not depend on those events following a normal distribution. Discrete event simulation has been used to model everything from production lines at fast food restaurants to computer chips, including RAID controller hardware and software.
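The core of a discrete event simulator is just a clock and a time-ordered event queue. The following is a minimal sketch of that idea, not any vendor's actual tool: a single-server FIFO queue with Poisson arrivals and a bimodal service-time mix (all rates are illustrative assumptions), driven by an event loop pulling arrival and departure events off a heap.

```python
import heapq
import random

def simulate(num_jobs=50_000, arrival_rate=1.0, seed=1):
    """Discrete event simulation of a single-server FIFO queue.

    Returns the mean time jobs spend waiting in the queue.
    """
    rng = random.Random(seed)

    def service_time():
        # Bimodal service-time mix (illustrative): 90% fast
        # operations, 10% much slower ones.
        return rng.expovariate(5.0) if rng.random() < 0.9 else rng.expovariate(0.2)

    events = []      # min-heap of (time, tie_breaker, kind)
    t, seq = 0.0, 0
    for _ in range(num_jobs):          # pre-schedule Poisson arrivals
        t += rng.expovariate(arrival_rate)
        heapq.heappush(events, (t, seq, "arrival"))
        seq += 1

    waiting, waits = [], []
    busy = False
    while events:
        now, _, kind = heapq.heappop(events)   # advance the clock
        if kind == "arrival":
            waiting.append(now)                # job joins the FIFO queue
        else:
            busy = False                       # departure: server goes idle
        if not busy and waiting:
            arrived = waiting.pop(0)           # start the next queued job
            waits.append(now - arrived)
            busy = True
            heapq.heappush(events, (now + service_time(), seq, "departure"))
            seq += 1
    return sum(waits) / len(waits)
```

Because every event is drawn and processed individually, nothing here requires the service times to be normally distributed; the bimodal mix is handled the same way any other distribution would be, which is precisely the property that made this approach attractive for modeling Unix and network workloads.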