Successfully Modeling and Simulating Systems, Part 2


Choosing a modeling package and obtaining the training necessary to conduct modeling are not easy tasks. Modeling is hard work even when you’ve done it before. Part of the problem is knowing exactly what to model; deciding what is important to model is a critical part of the modeling process. This month we are going to cover modeling methods and issues.

Before we start, note that there are a number of excellent web sites that provide a great deal of background on both model simulation and simulation software; they are well worth reviewing.

Most of the tools available today are GUI-based. Some people have an aversion to GUI-based tools, especially experts who love to write their own solvers (a minimal hand-rolled model is sketched after the list below). Here are the most common reasons for going non-GUI:

  1. You often do not know what is going on underneath the GUI, so model validation can be more difficult at times. This is usually only an issue for complex models.
  2. GUIs are often clumsy and slow for large, multi-step models. However, a GUI can be useful when developing models, for visually inspecting the paths and options in them. In general, large models are hard to validate given their size and complexity.
  3. GUIs are, by necessity, constrained and not specific enough for some kinds of modeling work. This is not necessarily the case for many of the kinds of simulations we do, but in some fields it is true. On the other hand, a number of programming languages have similar constraints.
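To make the tradeoff concrete, here is a minimal sketch of the kind of hand-rolled, non-GUI model an expert might write: a single-queue simulation of I/O requests arriving at one device, in plain Python. The arrival and service rates are made-up illustrative values, not measurements.

    import random

    # A minimal hand-rolled queueing model of a single storage device.
    # All parameters are illustrative assumptions, not measured values.
    random.seed(42)

    ARRIVAL_RATE = 800.0    # I/O requests per second (assumed)
    SERVICE_RATE = 1000.0   # requests per second the device completes (assumed)
    NUM_REQUESTS = 100_000

    clock = 0.0        # arrival time of the current request
    free_at = 0.0      # time at which the device next becomes idle
    total_resp = 0.0

    for _ in range(NUM_REQUESTS):
        clock += random.expovariate(ARRIVAL_RATE)    # next arrival
        start = max(clock, free_at)                  # queue if device is busy
        free_at = start + random.expovariate(SERVICE_RATE)
        total_resp += free_at - clock                # queueing + service time

    print(f"mean response time: {total_resp / NUM_REQUESTS * 1000:.2f} ms")

A few dozen lines like these give complete visibility into the model's assumptions, which is exactly the validation argument made in item 1 above.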

Many of the available modeling tools can be used one day to model a RAID controller and the next day to model an assembly line at your favorite fast food restaurant. From the marketing and sales point of view of vendors, to have a commercially viable product you must have a GUI, and it had better be pretty good.

Starting the Process

The modeling process begins with creative people. The real key to modeling is the people; the package, though important, is secondary. No matter what package is used, those responsible for the model need to be able to abstract the hardware and/or software being modeled and use the modeling tools to represent that abstraction.

This is the hardest part of the modeling process. You must decide exactly what is important, and what is less important, from a modeling perspective for the hardware and/or software that you plan to model. Very often, the more technical you are, the harder it is to model: everything seems too important to leave out. For most modeling projects, though, including everything is simply not an option, as you cannot model the entire hardware and software process end-to-end.

The other important decision you must make is the expected level of accuracy, which is often a tradeoff between time, money, and management expectations. You need to sit down with management and answer:

  1. What is an acceptable level of accuracy?
  2. What is the budget?
  3. Can a model be created with this level of accuracy and the proposed budget?

Most importantly, everyone needs to agree on what happens if you expect an 80% level of accuracy and management buys hardware and software assuming 100% accuracy. That 20% gap means someone will end up pointing fingers.

What I have often seen in this process is that if you can model to 90% on most large systems, the additional 10% can be made up by simply buying more hardware. Of course, there are systems that do not meet this criterion. I have worked on systems that had to be modeled to 99% accuracy because they were designed around hardware running at a near-peak operational rate, and the customer's concerns about running that close to peak could only be addressed by a model of that accuracy.
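As a back-of-the-envelope illustration of trading accuracy for hardware (all numbers here are hypothetical):

    import math

    # Hypothetical sizing example: covering model error with extra hardware.
    predicted_drives = 40    # model output (assumed)
    model_accuracy = 0.90    # agreed-upon accuracy level (assumed)

    # Worst case, the model underpredicts by (1 - accuracy), so size up.
    required = math.ceil(predicted_drives / model_accuracy)
    print(f"buy {required} drives to cover a {1 - model_accuracy:.0%} model error")
    # -> buy 45 drives to cover a 10% model error

For a system already running near peak, that kind of overprovisioning is not available, which is why the model itself has to carry the accuracy.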

I have to admit that I am often too caught up in the inner workings of the hardware to be a good modeler at the 80% level. I always want to add every piece of hardware and software to the model with the justification that it could be important in case XYZ, and I often get lost in the details and forget the big picture. What I have found is that I am not a good modeler unless you want very accurate models. That is why I say it is a rare person who can model well at the 80% level: that person has to know exactly what is important and how to abstract the important, somewhat important, occasionally important, and unimportant parts of the hardware and software.

Once the hardware and software are modeled at the desired level of accuracy, it is on to the next step in the modeling process.

Phase Two

The second part of the modeling process is ensuring that the model is accurate. As part of this process, the modeler needs to develop, and/or work with someone to develop, a methodology for testing the model. This process is commonly known as model calibration. The person developing the tests must have a detailed understanding of how the hardware and/or software works. Take a simple host bus adapter (HBA) as an example. If someone wanted to create a generalized model of a Fibre Channel HBA, they would have to take into account (a skeleton of such a model is sketched after the list):

  1. 1 Gbit or 2 Gbit interface
  2. Latency through the HBA based on distance to the switch
  3. The number of commands that can be queued in the HBA
  4. How the commands are processed:
    1. Sorted or non-sorted
    2. Number of buffer credits needed for writes
    3. Size of the command queue
  5. Number of targets that will be processed in the HBA (LUNs the HBA will be writing to)
  6. Failover issues
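Here is a first-order sketch of what the skeleton of such a generalized model might look like in Python. The parameter names, default values, and constants are hypothetical stand-ins; calibration would replace them with measured numbers.

    from dataclasses import dataclass

    # Hypothetical skeleton of a generalized Fibre Channel HBA model.
    # Defaults and constants are illustrative, not vendor specifications.

    @dataclass
    class FibreChannelHBA:
        link_gbit: float = 2.0       # 1 Gbit or 2 Gbit interface
        cable_km: float = 0.01       # distance to the switch
        queue_depth: int = 256       # commands that can be queued in the HBA
        buffer_credits: int = 16     # buffer credits available for writes
        per_cmd_us: float = 5.0      # fixed command-processing overhead (assumed)

        def transfer_time_us(self, io_bytes: int) -> float:
            # 8b/10b encoding gives roughly 100 MB/s of payload per Gbit of line rate
            payload_bytes_per_s = self.link_gbit * 100e6
            wire_us = io_bytes / payload_bytes_per_s * 1e6
            # light in fibre covers about 1 km per 5 us, counted both ways
            propagation_us = self.cable_km * 5.0 * 2
            return self.per_cmd_us + wire_us + propagation_us

    hba = FibreChannelHBA(link_gbit=2.0)
    print(f"64 KB transfer: {hba.transfer_time_us(64 * 1024):.1f} us")

Note that this sketch covers only items 1 and 2 of the list; queueing behavior, command sorting, multiple targets, and failover would each add further state and logic.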

As you can see, for a fairly simple piece of hardware you have a great deal of work ahead of you just in modeling the HBA. Add to this the issues to consider for the software stack above the HBA, which include:

  1. Application and type of I/O being done
  2. C library if applicable
  3. System calls
  4. Operating System settings and tunables
  5. File System cache, layout settings, and tunables
  6. Volume manager layout settings and tunables

You can easily see that deciding what to test, and how to test the various components of the model, can be difficult and requires significant expertise.
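One way to keep that complexity manageable is to model each layer of the stack as a separate stage whose overhead is characterized, and later calibrated, independently. A hypothetical first-order sketch, with placeholder numbers:

    # Hypothetical layered model of the software stack above the HBA.
    # Every per-layer overhead is a placeholder to be replaced by measurement.
    STACK_OVERHEAD_US = {
        "application": 2.0,      # I/O issue overhead for this I/O type
        "c_library": 1.0,        # buffered I/O, if applicable
        "system_call": 3.0,      # user/kernel crossing
        "file_system": 8.0,      # cache lookup, allocation, layout
        "volume_manager": 4.0,   # mapping to physical extents
    }

    def stack_overhead_us() -> float:
        # First-order assumption: per-layer overheads simply add up.
        return sum(STACK_OVERHEAD_US.values())

    print(f"software stack overhead per I/O: {stack_overhead_us():.1f} us")

Filling in each of those placeholder entries with a defensible measured value is exactly the testing problem discussed next.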

Developing tests that can measure each individual activity is often very difficult given the interactions that happen within the system between the various hardware and software components and services. In most cases, you cannot test a single device by itself; for example, when testing an HBA, even using just the raw device, you will also be testing a disk or RAID.

What I have found works best is reducing the variables and varying the hardware. For example, when testing an HBA, it might be wise to set up a LUN about the same size as the RAID cache and tune the cache to try to keep all of the data in it. This way you reduce the effect of the disks and the RAID performance, and will likely get a more accurate representation of the HBA performance as a result. You can also switch to another RAID type to see whether the performance changes.
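Whatever isolation technique you use, calibration ultimately comes down to comparing what the model predicts with what the reduced-variable test measures. A minimal sketch of that bookkeeping (the function name and numbers are hypothetical):

    # Hypothetical calibration check: model prediction vs. measured result.
    def calibration_error(predicted_mb_s: float, measured_mb_s: float) -> float:
        """Return the model's relative error as a fraction of the measurement."""
        return abs(predicted_mb_s - measured_mb_s) / measured_mb_s

    # e.g., the model says the HBA sustains 180 MB/s and the
    # cache-resident LUN test measures 171 MB/s
    error = calibration_error(180.0, 171.0)
    print(f"model error: {error:.1%}")    # ~5.3%, within a 90% accuracy target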

So in general, it is best to start with the characterization of the smallest and simplest pieces of hardware and software, and then move on to the more complex parts. The key to any characterization project is a reasonably detailed understanding of how the component being characterized works. In most cases, you do not completely know how it is going to work, so you can expect a few surprises from the model, and you can expect to gain a great deal of insight into the inner workings of the system.

Phase Three – Model Maintenance

After all of the hard work involved in building a model and ensuring that it works correctly and accurately predicts the system, you will often find that the system will be:

  1. Upgraded with new hardware or software that needs to be characterized
  2. Subjected to additional and/or modified workload requirements

Both of these situations are quite common, and if you were successful in building the model in the first place, management will hopefully ask you to revise the model rather than making one of the typical requests:

  1. Use your engineering judgment and just go buy the new hardware
  2. Just install the hardware/software — the vendor says it will work
  3. Throw a dart and pick something

The steps in model revision are the same as those in developing the model, except that many of the components are already characterized and you already have a track record of success.

Conclusions

In my 22 years in the computer industry I have seen modeling used infrequently, but always to great success. Scientific modeling is part of our everyday life; it goes into almost everything we eat, drive, and fly, and into every medicine and vaccine we take, with the exception of the one thing that makes it all possible: the computer systems that these models run on.

