Enterprise data storage management is easier said than done. The problem is, storage managers have a lot going on. Managing systems, dealing with IT and end-users, and everyday firefighting take up days and weeks, leaving little time to do proactive tasks like optimizing the storage environment.
However, if you take consistent time to plan and optimize your storage management, you’ll improve your storage environment and get back the time you’re losing.
Start with the 7 major storage domains that you are responsible for, then read the 7 tips below to prioritize your efforts, plan your work, and get it done.
1. Data Tiering: Match data value to storage costs with automated tiering.
For Tier 0, go with an all-flash array or a mixed media array with solid-state disk at the top tier. SSDs are dropping in price and increasing in capacity, making them the best high-performance tier for high-IOPS workloads. Tier 1 can stay on the same all-flash array; if you have a mixed array, a combination of SSDs and enterprise HDDs will deliver fast Tier 1 performance at a lower cost.
Your nearline, active archive, and cold data tiers can be disk, tape, and/or the cloud. For example, automate data tiering from Tiers 0 and 1 to Tier 2 nearline disk, Tier 3 active archives to on-premises tape or cloud cool data tiers, and Tier 4 data to off-site tape or cloud cold data tiers.
This is a lot of tiering, so use automated tiering tools to free up your time, release aging workloads that slow down production environments, and save money on storage purchases.
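The tiering scheme above boils down to a policy that maps data age (and, in practice, access frequency and business value) to a tier. A minimal sketch of an age-based classifier follows; the tier names and age thresholds are illustrative assumptions, not any vendor's API or defaults.

```python
from datetime import datetime, timedelta

# Illustrative age thresholds per tier -- real policies would also weigh
# access frequency, IOPS requirements, and the business value of the data.
TIER_POLICY = [
    ("tier0_flash", timedelta(days=30)),                # hot: all-flash / SSD
    ("tier1_ssd_hdd", timedelta(days=90)),              # warm: SSD + enterprise HDD
    ("tier2_nearline", timedelta(days=365)),            # nearline disk
    ("tier3_active_archive", timedelta(days=365 * 3)),  # on-prem tape / cloud cool
]
COLD_TIER = "tier4_cold"                                # off-site tape / cloud cold

def assign_tier(last_accessed: datetime, now: datetime) -> str:
    """Pick the first tier whose age threshold the data still falls under."""
    age = now - last_accessed
    for tier, max_age in TIER_POLICY:
        if age <= max_age:
            return tier
    return COLD_TIER

now = datetime(2018, 6, 1)
print(assign_tier(datetime(2018, 5, 20), now))  # 12 days old -> tier0_flash
print(assign_tier(datetime(2014, 1, 1), now))   # over 4 years old -> tier4_cold
```

An automated tiering tool runs a policy like this continuously and moves data between tiers on your behalf, which is where the time and cost savings come from.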
2. Availability and Reliability: Upgrade old hardware and consider DRaaS for cloud failover and failback.
You can stretch out the life of legacy hardware, but eventually it's going to fail. At minimum, monitor performance and troubleshoot proactively so a failure won't become a disaster. Better yet, replace aging hardware with modern storage systems. Look for systems that give you central management consoles, such as integrated systems from the same vendor or software-defined storage.
Also consider investing in failover services with a Disaster Recovery as a Service provider. DRaaS isn’t the cheapest service in the world but losing critical application availability for hours and days is going to cost a lot more in money, time, and reputation – yours and your company’s.
3. Content Repositories: Manage distributed data as a virtual content repository.
Smaller companies can create a single content repository by storing data on a single array, but this won't work for the enterprise. What enterprise storage managers can do is use software tools to discover data on different devices and manage it as a virtual content repository. Search, eDiscovery, management, and governance tools then operate on the virtual repository.
To do this, you will need to A) search for data on the network and edge devices, and B) defensibly delete outdated files, and move the rest to the repository. Use enterprise search tools to locate both visible and dark data, define it by metadata and content, and apply bulk actions such as delete or move.
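Step B above amounts to classifying discovered files into bulk actions by age. A minimal sketch, assuming a 7-year retention limit and a 90-day active window (both hypothetical policy numbers), is:

```python
from datetime import datetime, timedelta

RETENTION_LIMIT = timedelta(days=365 * 7)  # assumed: defensibly delete past 7 years
ACTIVE_WINDOW = timedelta(days=90)         # assumed: leave recent files in place

def plan_bulk_actions(files, now):
    """files: iterable of (path, last_modified) pairs found by enterprise search.
    Returns (action, path) pairs: 'delete', 'move' (to repository), or 'keep'."""
    plan = []
    for path, mtime in files:
        age = now - mtime
        if age > RETENTION_LIMIT:
            plan.append(("delete", path))  # defensible deletion candidate
        elif age > ACTIVE_WINDOW:
            plan.append(("move", path))    # move into the virtual repository
        else:
            plan.append(("keep", path))    # still active, leave in place
    return plan

now = datetime(2018, 6, 1)
inventory = [
    ("/share/q1_report.xlsx", datetime(2018, 5, 1)),
    ("/share/old_contract.pdf", datetime(2009, 3, 15)),
    ("/laptop42/draft.docx", datetime(2017, 8, 1)),
]
for action, path in plan_bulk_actions(inventory, now):
    print(action, path)
```

A real deployment would drive the plan from metadata and content classification, not age alone, and would log every deletion to keep it defensible.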
Additional use cases are big data analytics and compliance. Invest in big data analytics tools that identify and analyze structured and unstructured data. If you need to investigate company data for compliance, use pattern recognition toolsets that flag suspicious communications in email, messages, transcribed phone conversations, and social media.
4. Cost Control: Tier aging data to the cloud and virtualize to save money.
You can save money with smart purchase negotiations, but don't stop there. We already talked about data tiering, which not only protects performance but also saves money by matching the value of data to appropriate storage tiers.
Storing aging data in the cloud can save significant money. Savings aren't automatic – you need to watch your restore costs – but for aging data, using the cloud for cool and cold storage tiers cuts long-term storage spend. Look for storage services that index data for searchability, which adds compliance value.
Virtualization is another popular way to save money and management time in storage environments. It's by no means a pure cost play – virtualized environments still require hardware and software purchases, and training and optimization take time – but an efficient virtualized environment saves money through less management time, fewer hardware purchases, and reduced power costs.
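The "watch your restore costs" caveat is worth making concrete. A back-of-the-envelope comparison of primary disk versus a cloud cold tier follows; every rate here is a hypothetical placeholder, so substitute your vendor's actual prices and expected restore volume.

```python
# Hypothetical rates -- replace with your vendor's real pricing.
# Cloud cold tiers charge little for storage but bill restores separately.

def monthly_cost(tb_stored, storage_rate, restore_rate=0.0, tb_restored=0.0):
    """Monthly storage cost plus any restore (retrieval/egress) charges."""
    return tb_stored * storage_rate + tb_restored * restore_rate

primary = monthly_cost(100, storage_rate=25.0)   # assumed $25/TB-month on-prem disk
cold = monthly_cost(100, storage_rate=4.0,       # assumed $4/TB-month cloud cold tier
                    restore_rate=90.0,           # assumed $90/TB restored
                    tb_restored=1.0)             # assumed 1 TB restored per month
print(f"primary disk: ${primary:,.0f}/mo, cloud cold tier: ${cold:,.0f}/mo")
```

Even with restore charges, rarely-touched data is far cheaper in a cold tier – but the math inverts quickly if you restore large volumes often, which is why restore patterns belong in the planning, not as an afterthought.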
5. Cloud Security: Remember that securing cloud data is a shared responsibility.
Many end-users, including some storage administrators, assume that their cloud provider takes full responsibility for securing their data. However, most cloud providers are clear that they use a shared responsibility model for data security.
Cloud providers are responsible for securing their infrastructure with physical and cyber security measures. If there is a security incident, then the provider will inform affected customers. However, the data owner – which is your business, not your provider – shares responsibility for securing data stored on the cloud.
Configure your cloud storage for security, such as encrypting data in transit and at rest. Practice strong authentication, such as customizing Active Directory by user and role and using multi-factor authentication. Enforce industry and corporate governance policies on the cloud.
Work with your cloud provider: some of these security measures may be covered in your agreement, and you can add additional security measures to your SLA.
6. Software-Defined Storage: Separate storage management from the hardware layer.
SDS decouples storage management from the underlying physical assets. Storage devices still matter – you need reliable devices that can interface with your SDS management layer – but given that, SDS can handle file, block, or object data, and all types of applications and workloads.
SDS is for production environments, so you will need to work with server admins to plan for workloads and application storage needs. If you already own VMware or Hyper-V, you will probably have an easier time of it by staying with VMware or Microsoft. Strongly consider SDS solutions that offer familiar storage APIs for applications instead of requiring developers to modify the application.
7. Data Protection: Be sure that you are backing up your cloud data because your cloud provider isn’t.
One of the biggest threats to data out there is simple backup – or rather, the lack thereof.
If you are like the 86% of companies that have moved data to the cloud, you have a problem: your cloud provider probably isn't backing up your data past a few days. For example, popular SaaS applications like Office 365 have very limited backup functionality. Microsoft has different backup policies for different Office applications, but its longest backup agreement is 30 days, for SharePoint Online. The need is not restricted to SaaS; many PaaS and IaaS users also entrust long-term data backup to their cloud provider – only to find out that was a really bad idea.
Your best bet is to partner with cloud backup providers who back up directly from your cloud data stores, ideally using the same cloud user account. The best services do more than simple long-term backup: they also add indexing and searchability for backups and archives.
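Whatever backup service you choose, verify it is actually running. A simple sketch of a recency check – flagging any cloud data store whose last successful backup is older than a policy window – looks like this; the store names and the daily SLA are illustrative assumptions.

```python
from datetime import datetime, timedelta

BACKUP_SLA = timedelta(days=1)  # assumed policy: every store backed up daily

def stale_backups(last_backup_times, now):
    """last_backup_times: dict of store name -> last successful backup time.
    Returns the stores whose most recent backup is older than the SLA."""
    return sorted(name for name, ts in last_backup_times.items()
                  if now - ts > BACKUP_SLA)

now = datetime(2018, 6, 1, 12, 0)
status = {
    "office365_mail": datetime(2018, 6, 1, 2, 0),      # backed up overnight
    "sharepoint_online": datetime(2018, 5, 28, 2, 0),  # missed several runs
    "s3_archive": datetime(2018, 5, 31, 23, 0),
}
print(stale_backups(status, now))  # -> ['sharepoint_online']
```

Feeding a check like this from your backup provider's reporting, and alerting on a non-empty result, turns "we thought it was backed up" into something you can prove.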
To be sure, no one optimizes their storage environment overnight, but consistent effort with the right priorities and plans will get it done in a reasonable timeframe. It’s worth it: optimizing the environment will benefit the whole data center, end-users, the business – and you.