Enterprise SSD DWPD: Calculating in Advance


First, some background on enterprise SSD DWPD.

Drive writes per day (DWPD) is the number of times you can completely rewrite all the data on your SSD within a 24-hour period. All enterprise and most consumer SSD vendors I could find online specify how many DWPD their SSDs can sustain, or state the drive endurance in GB written.

Either way, you can calculate how much data you can write and still be within the vendor’s warranty. But how do you determine, before you buy the SSDs, how much data you are going to write per day?

This matters because if you underestimate your data writes, all of your storage can potentially fail at nearly the same time, which we all know will cause data loss. Let’s break down the components to see how much data you write per day. The way I see it, there are three inputs:

1.  User applications

2.  System applications

3.  Storage device and system overhead

User Applications

If your storage system is running on, say, a RAID storage target, you can use iostat or sar to monitor the amount of data written to the target. Most warranties for SSDs, and for that matter disk drives, are five years. Seems pretty simple, but things change over time. Take the following example. Let’s say you have a 3 TB SSD that supports 1 DWPD, and your application load from iostat and sar, tracked over a one-month period, shows that you are writing only 1 TB per day. You can buy SSDs with far more than 1 DWPD, but I am just using this as an example for understanding the math.
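If you want to put a rough number on your current write rate before committing to a purchase, the same counters that iostat and sar report can be sampled directly. The sketch below is a minimal example, assuming a Linux host and a hypothetical device name (nvme0n1); it reads the sectors-written counter from /proc/diskstats and scales a one-hour sample up to a daily rate.

```python
# Minimal sketch: estimate TB written per day for one block device by
# sampling the sectors-written counter in /proc/diskstats (the 10th
# whitespace-separated field, in 512-byte units), which is the same
# counter iostat and sar report from.
import time

DEVICE = "nvme0n1"          # hypothetical device name; change for your system
SAMPLE_SECONDS = 3600       # sample for an hour, then scale to a day

def sectors_written(device: str) -> int:
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[9])   # sectors written since boot
    raise ValueError(f"device {device} not found")

start = sectors_written(DEVICE)
time.sleep(SAMPLE_SECONDS)
end = sectors_written(DEVICE)

bytes_written = (end - start) * 512
tb_per_day = bytes_written / 1e12 * (86400 / SAMPLE_SECONDS)
print(f"~{tb_per_day:.2f} TB/day written to {DEVICE}")
```

A one-hour sample is only a starting point; as the example below shows, you really want to track this over weeks or months to see the trend.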

So for a simple case where nothing changes, you would have this:

| Month | TB Written in Month | Total TB of Writes Left on the Drive |
|-------|---------------------|--------------------------------------|
| 1     | 30                  | 5448                                 |
| 2     | 30                  | 5418                                 |
| 3     | 30                  | 5388                                 |
| 60    | 30                  | 3678                                 |

So after 60 months, you still have plenty of rated writes left on the device, as your workload did not change.
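For reference, the arithmetic behind this table is just the five-year write budget minus the monthly writes; a minimal sketch, assuming a 1,826-day (five-year) warranty window:

```python
# Arithmetic behind the table above: a 3 TB drive rated for 1 DWPD over a
# five-year warranty gives roughly 3 TB/day * 1,826 days = 5,478 TB of writes.
CAPACITY_TB = 3
DWPD = 1
WARRANTY_DAYS = 1826                       # five years, including a leap day
budget_tb = CAPACITY_TB * DWPD * WARRANTY_DAYS   # 5,478 TB

tb_per_month = 30                          # 1 TB/day workload
for month in (1, 2, 3, 60):
    remaining = budget_tb - month * tb_per_month
    print(month, remaining)                # 5448, 5418, 5388, 3678
```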

Now on the other hand, what if your write workload increased four percent a month every month for the next five years?

Even though your architecture was designed to be significantly below the required write threshold, with only 1 TB a day being written and the device designed to support 3 TB a day, you’d be surprised what compounding that four percent will do over time. You will see from the table below that it has a dramatic impact on the total number of TBs that are available to be written, and over a five-year period you will write more than the SSD can support.

| Month | TB Written in Month | Total TB of Writes Left on the Drive |
|-------|---------------------|--------------------------------------|
| 1     | 30.00               | 5448                                 |
| 10    | 42.70               | 5118                                 |
| 20    | 63.21               | 4585                                 |
| 30    | 93.56               | 3795                                 |
| 40    | 138.49              | 2627                                 |
| 50    | 205.00              | 898                                  |
| 51    | 213.20              | 685                                  |
| 52    | 221.73              | 463                                  |
| 53    | 230.60              | 232                                  |
| 54    | 239.82              | -7                                   |
| 55    | 249.41              | -257                                 |
| 56    | 259.39              | -516                                 |
| 57    | 269.77              | -786                                 |
| 58    | 280.56              | -1066                                |
| 59    | 291.78              | -1358                                |
| 60    | 303.45              | -1662                                |

After 54 months, you have written more than the drive is rated to support. If your percentage increase goes from four percent a month to, say, six percent a month, you exhaust the drive’s rated writes at month 43, which is only about three and a half years; at that point the SSD will likely fail and you are out of warranty. Remember, your starting point was only a third of the full drive writes per day. (A short sketch of the math behind both tables follows the table below.)

| Month | TB Written in Month | Total TB of Writes Left on the Drive |
|-------|---------------------|--------------------------------------|
| 1     | 30.00               | 5448                                 |
| 10    | 50.68               | 5083                                 |
| 20    | 90.77               | 4374                                 |
| 21    | 96.21               | 4278                                 |
| 22    | 101.99              | 4176                                 |
| 23    | 108.11              | 4068                                 |
| 24    | 114.59              | 3954                                 |
| 25    | 121.47              | 3832                                 |
| 26    | 128.76              | 3703                                 |
| 40    | 291.11              | 835                                  |
| 41    | 308.57              | 527                                  |
| 42    | 327.09              | 199                                  |
| 43    | 346.71              | -147                                 |
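Here is a quick sketch of the compounding math behind both tables. It assumes, as the tables do, that the workload starts at 30 TB in month 1, grows by a fixed percentage every month, and is drawn against the roughly 5,478 TB five-year budget of a 3 TB, 1 DWPD drive.

```python
# Reconstruction of the growth model behind the two tables above: monthly
# writes start at 30 TB and compound by a fixed rate, drawn against the
# ~5,478 TB five-year write budget of a 3 TB, 1 DWPD drive.
def months_until_exhausted(budget_tb: float, first_month_tb: float, growth: float) -> int:
    remaining = budget_tb
    monthly = first_month_tb
    month = 0
    while remaining > 0:
        month += 1
        remaining -= monthly
        monthly *= 1 + growth
    return month

BUDGET = 3 * 1 * 1826        # capacity * DWPD * warranty days = 5,478 TB
print(months_until_exhausted(BUDGET, 30, 0.04))   # 54 -> budget gone in month 54
print(months_until_exhausted(BUDGET, 30, 0.06))   # 43 -> budget gone in month 43
```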

System Applications

The system applications, such as the operating system and logs, can also have an impact. You might monitor your system and not see much going on, but then you load a new operating system, or have an increase in logging requirements, or some other operation that increases the amount of data that will be written.

For system applications, data very likely needs to be monitored over a significantly longer timeframe, as the write activity from system applications and logs is often highly dependent on what is happening in the system. In an SELinux environment, where everything is logged, it becomes very important to understand the number of users and their activities.

Storage Device and System Overhead

Looking at the iostat and sar data can give you an idea of how much data is moving from the servers to the storage target, but in a RAID environment that is not the whole story for some RAID levels. With RAID-1 (mirrors), you are not going to have read-modify-write issues, which are common in RAID-5/6 implementations when writes are not aligned with the internal RAID stripe.

So the amount of data written to the storage target is likely the amount actually written to the devices if the writes are aligned, but you might be writing more data, depending on the RAID system and allocation, if they are not. In addition, if there is an SSD failure you are going to have to rebuild, which will most likely take one full drive write away from the new drive.
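As a rough illustration of this overhead, the sketch below estimates total device-level writes from host-level writes for a couple of RAID layouts. The amplification factors are simplifying assumptions, not vendor numbers: mirrors write everything twice across the pair, full-stripe parity writes add the parity share, unaligned parity writes rewrite the data chunk plus each parity chunk, and reads, metadata and rebuild traffic are ignored.

```python
# Rough sketch (simplified assumptions, not a vendor formula): estimate total
# device-level writes from host-level writes for common RAID layouts.
# aligned_fraction is the share of writes that land as full stripes; the rest
# incur read-modify-write against the parity.
def device_writes_tb(host_tb: float, raid: str, drives: int,
                     aligned_fraction: float = 1.0) -> float:
    if raid == "raid1":
        return host_tb * 2                          # every write hits both mirrors
    if raid in ("raid5", "raid6"):
        parity = 1 if raid == "raid5" else 2
        data_drives = drives - parity
        full_stripe = host_tb * aligned_fraction * drives / data_drives
        # unaligned writes rewrite the data chunk plus each parity chunk
        partial = host_tb * (1 - aligned_fraction) * (1 + parity)
        return full_stripe + partial
    raise ValueError("unknown RAID level")

# Example: 30 TB/month of host writes to an 8-drive RAID-5, 60% stripe-aligned
print(round(device_writes_tb(30, "raid5", 8, 0.6), 1))   # ~44.6 TB at the devices
```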

Final Thoughts

Developing an architectural plan for SSDs that is going to last five years, which is generally the warranty period for enterprise SSDs, is significantly more complicated now that endurance is specified in full device writes. It should be noted that, after looking around on the web for a bit, most disk drive manufacturers are also specifying hard drive endurance in TB per year, which can easily be translated into full device writes.
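That translation is a one-line calculation; the figures below are a hypothetical example, not any specific vendor's rating.

```python
# Converting a TB-per-year endurance rating into full device writes per day.
# Hypothetical example: a 16 TB hard drive rated for 550 TB of writes per year.
capacity_tb = 16
rated_tb_per_year = 550
dwpd = rated_tb_per_year / 365 / capacity_tb
print(f"{dwpd:.3f} DWPD")     # ~0.094 full device writes per day
```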

So the world is changing, and for the most part, the information available and the capacity management tools are not up to the task. This is not a surprise, given that we have been solving problems by throwing hardware at them since the mid-1990s rather than spending the time and effort to develop architectural plans based on the manufacturer’s specifications.

People have been saying for almost 20 years that it is cheaper to buy hardware than it is to monitor systems and plan for the future. That might be the case, but avoiding data loss might depend on monitoring your systems, because in a RAID world the devices will likely fail at nearly the same time. I have always been a proponent of monitoring systems, capacity planning and the like, and it might be time to reconsider the buy-more-hardware mindset, because your capacity planning tools are going to be needed in an ever more complex system architecture world.

Also see:

What is the Best Enterprise SSD Format?

Top 10 Enterprise SSD Tips and Trends

 
