Storage Security Basics


Given the emphasis administrators and corporate managers place on IT security, it’s hard to imagine an environment in which security implementations are not a primary concern. As such, many of today’s network IT administrators carefully consider all aspects of security when deploying and managing their networks.

However, despite all the well-documented threats and media attention, there is no shortage of networks still operating with minimal or poorly implemented security measures. This can stem from a lack of knowledge about the real risks to data security, from vulnerabilities that simply go unaddressed, or from a false sense of security created by reliance on inadequate security strategies.

Storage networking technology has enjoyed strong growth in recent years, but security concerns and threats facing networked data have grown equally fast. Today, there are many potential threats that are targeted at storage networks, including data modification, destruction and theft, DoS attacks, malware, hardware theft and unauthorized access, among others. In order for a SAN to be secure, each of these threats must be individually addressed. Fortunately, many of the security practices and protocols used to address traditional network vulnerabilities also help ensure the availability of storage networks by reducing common security threats.

Developing a strong security strategy is an integral part of SAN administration. In this and the next Storage Basics articles, we will look at general security concepts and principles, the protocols behind these concepts, and how they fit into an overall SAN security strategy.

At the ground floor of any security strategy are some basic security concepts, including authentication, authorization, encryption (confidentiality), integrity, accountability and access control. We’ll start with access control.

Access Control

Access control is a cornerstone concept when designing a secure network environment. Access control is all about controlling who can and cannot access a network, a resource, a folder or file.

In order to effectively secure such resources, you must carefully consider and control the level of access granted to each network user and then deploy strategies to ensure that only required users actually have resource access. It is a fundamental concept, and the foundation for a strong and secure network environment.

There are several types of access control strategies, including mandatory access control (MAC), discretionary access control (DAC), and role-based access control (RBAC).

MAC represents the tightest form of access control. In this strategy, security policies prevent the creator of a piece of information from controlling who can access or modify it. Instead, administrators or managers maintain control over who can access and modify data, systems and resources. Mandatory access control systems are commonly used in highly secure environments such as military installations and financial or medical institutions.

MAC secures information and resources by assigning sensitivity labels to objects and comparing them to the level of sensitivity assigned to each user. The label is a kind of confidentiality stamp: when a label is placed on a file, it describes the level of security required to access that file, and access is permitted only to users and processes whose clearance is equal to or higher than that label.

MAC assigns a security level to all information and a security clearance to each network user, ensuring that users have access only to data for which they hold the appropriate clearance. For example, users may be assigned a security label such as top secret or confidential, and data and resources are classified accordingly. MAC restricts access to objects by comparing the user-assigned level against the object-assigned level.
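To make that comparison concrete, here is a minimal sketch in Python of how a MAC check might compare a user's clearance against an object's sensitivity label. The level names and ordering are illustrative assumptions, not drawn from any particular product.

```python
# Ordered sensitivity levels: a higher number means more sensitive.
# These level names are illustrative only.
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def mac_allows_read(user_clearance: str, object_label: str) -> bool:
    """A user may read an object only if their clearance is at least
    as high as the object's sensitivity label."""
    return LEVELS[user_clearance] >= LEVELS[object_label]

print(mac_allows_read("secret", "confidential"))   # True
print(mac_allows_read("confidential", "secret"))   # False
```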

Discretionary access control (DAC) is not enforced by the administrator or by operating system policy; instead, access is controlled by an object's owner. In a DAC model, if a user creates a folder, that user decides who will have access to that folder.

DAC is associated with an access control list (ACL). The ACL maintains information on the rights a user has to a particular system object, such as a file, directory or network resource. Each object has a security attribute that identifies its access control list and the list has an entry for each system user with associated access privileges. The most common privileges include the ability to read a file (or all the files in a directory), to write to the file or files, and to execute the file (if it is an executable file or program).

Microsoft Windows 2000/2003/XP, Linux, UNIX and Mac OS X are among the operating systems that use access control lists, although each operating system implements the list differently. In Windows NT/2000/2003, an ACL is associated with each system object. Each ACL has one or more access control entries (ACEs), each consisting of the name of a user or group of users. The user can also be a role name, such as “secretary” or “research.” For each of these users, groups, or roles, the access privileges are stated in a string of bits called an access mask. The system administrator or the object owner typically creates the access control list for an object.
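As a rough illustration of the idea (not the actual Windows API), the sketch below models an ACL whose entries pair a user, group or role name with an access mask of read/write/execute bits. The object and trustee names are hypothetical.

```python
# Permission bits for a hypothetical access mask (read/write/execute).
READ, WRITE, EXECUTE = 0b100, 0b010, 0b001

# Each object carries an ACL: a list of (trustee, access_mask) entries.
# The trustee can be a user, a group, or a role name such as "research".
acl_for_report = [
    ("alice", READ | WRITE),   # the owner granted herself read and write
    ("research", READ),        # everyone in the research group may read
]

def is_allowed(trustee_names, acl, wanted):
    """Grant access if any ACL entry matching the user (or one of their
    groups) includes all of the requested permission bits."""
    return any(
        name in trustee_names and (mask & wanted) == wanted
        for name, mask in acl
    )

# Bob is only a member of the research group: he may read but not write.
print(is_allowed({"bob", "research"}, acl_for_report, READ))    # True
print(is_allowed({"bob", "research"}, acl_for_report, WRITE))   # False
```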

In a role-based access control (RBAC) configuration, access decisions are determined by the roles that individual users have within an organization. In any organization, network users are assigned specific roles such as marketers, salespeople, managers, secretaries and so on. Users with similar roles are grouped together, and access control is determined by the role those users have on the network. Role-based access requires a thorough understanding of how a particular organization operates, the number of users, and their exact functions in that organization.

Access rights are grouped by role name, and the use of resources is restricted to individuals authorized to assume the associated role. For example, within a school system, the role of teacher can include access to certain data, including test banks, research material, memos and related material. School administrators may have access to employee records, financial data, planning projects and more.

When a user is associated with a role, the user should be assigned only those privileges necessary to do their job. This is a general security principle known as “least privilege,” and it applies to all access control methods. In a role-based scenario, when someone is hired by an organization, their role is clearly defined: teacher, secretary, sales, marketing, manager, etc. A new account is created for the user and then placed in a group with others who hold the same role within the organization. Individual permissions do not need to be set; rather, the level of access is inherited from the group in which the account is placed. As an example, if a new teacher is hired by a school, the user account is placed in the Teachers Group. Once in the group, the new employee inherits the same level of access as those already in the Teachers Group.
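A minimal sketch of the role-based approach, using hypothetical role and permission names: permissions are attached to roles rather than to individuals, so a new user simply inherits whatever the role grants.

```python
# Permissions are attached to roles, never to individual users.
ROLE_PERMISSIONS = {
    "teacher": {"read_test_banks", "read_research", "read_memos"},
    "administrator": {"read_employee_records", "read_financials", "edit_plans"},
}

# Users are mapped to a role when their accounts are created.
USER_ROLES = {"new_teacher": "teacher"}

def permissions_for(user: str) -> set:
    """A user's effective permissions are inherited from their role."""
    return ROLE_PERMISSIONS.get(USER_ROLES.get(user), set())

print("read_test_banks" in permissions_for("new_teacher"))        # True
print("read_employee_records" in permissions_for("new_teacher"))  # False
```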

Role-based access control is actually a form of MAC, since access is dictated by an administrator and the criteria for object access are not in the hands of the owner.

Authentication, Authorization and Accountability

Poor user authentication and authorization are among the most common weaknesses in networks, and storage area networks are no exception.

Authentication and authorization are core concepts in network security. Authentication refers to the process by which you verify that someone is who they claim to be. This traditionally involves a username and a password, but can include any other method of demonstrating identity, such as a smart card, biometrics, voice recognition, fingerprints, and so on. Authentication is a significant consideration for network and system security and an important part of maintaining secure access control. Authentication security is controlled through policies and protocols. In an IP LAN/WAN environment, CHAP, EAP and MS-CHAP are examples of authentication protocols. There are also authentication protocols unique to a SAN environment, including a shared secret design with DH-CHAP and public key authentication with FCAP (Fibre Channel Authentication Protocol). Specific authentication protocols will be discussed in the next Storage Basics article.
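The sketch below illustrates the challenge-response idea behind CHAP-style protocols: the verifier sends a random challenge, and the claimant proves knowledge of the shared secret by returning a hash of the secret combined with the challenge, so the secret itself never crosses the wire. This is a simplified illustration of the concept, not a conforming implementation of CHAP, MS-CHAP or DH-CHAP.

```python
import hashlib
import os

SHARED_SECRET = b"example-secret"  # known to both ends, never transmitted

def make_response(identifier: bytes, secret: bytes, challenge: bytes) -> bytes:
    # CHAP-style response: hash the identifier, secret and challenge together.
    return hashlib.md5(identifier + secret + challenge).digest()

# Verifier side: issue a fresh random challenge for each attempt.
challenge = os.urandom(16)

# Claimant side: compute the response using the shared secret.
response = make_response(b"\x01", SHARED_SECRET, challenge)

# Verifier side: recompute the expected response and compare.
expected = make_response(b"\x01", SHARED_SECRET, challenge)
print("authenticated" if response == expected else "rejected")
```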

Authorization refers to the process of determining if a user, once identified and authenticated, is allowed to have access to a particular resource. This is usually determined by finding out if that person is a part of a particular group that provides the correct permissions, rights or required level of security clearance to access a resource.

Accountability refers to the tracking mechanisms used to keep a record of events on a system. One tool often used for this purpose is auditing. Auditing is the process of monitoring occurrences and keeping a log of what has happened on a system. It is largely up to the administrator to decide which types of events should be tracked and which should not. By tracking events on a system, attempts to access the network or otherwise compromise data can be recorded, detected and acted upon.
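As a small illustration of the idea, using Python's standard logging module (the file name and event fields here are hypothetical), an audit trail simply records who did what, to which resource, and with what outcome:

```python
import logging

# Write audit events to a dedicated log file with timestamps.
logging.basicConfig(
    filename="audit.log",
    level=logging.INFO,
    format="%(asctime)s %(message)s",
)

def audit(user: str, action: str, resource: str, allowed: bool) -> None:
    """Record every access decision so it can be reviewed later."""
    outcome = "ALLOWED" if allowed else "DENIED"
    logging.info("user=%s action=%s resource=%s result=%s",
                 user, action, resource, outcome)

audit("bob", "read", "/finance/reports", allowed=False)
```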

Confidentiality and Integrity

In any security strategy, protocols are needed to prevent data from being read by intruders (confidentiality) and to determine whether data has been tampered with in transit (integrity).

To prevent data from being read, encryption is used. Encryption takes raw data and scrambles it in such a way that it is unreadable without the key. If the correct key is not available, intercepted data maintains its confidentiality. As an example, within IPSec, the Encapsulating Security Payload (ESP) protocol can encrypt storage traffic carried over IP links, such as Fibre Channel traffic that has been extended over IP. Regular Ethernet communications can also use IPSec encryption or other protocols such as the Secure Sockets Layer (SSL) protocol. All encryption protocols are designed to make intercepted data unreadable and so ensure confidentiality.
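As a generic illustration of symmetric encryption for confidentiality (this sketch uses the third-party Python cryptography package rather than IPSec or SSL themselves), data encrypted under a key is unreadable to anyone who intercepts it without that key:

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the secret key; keep it away from attackers
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"quarterly sales figures")
print(ciphertext)             # unreadable without the key

plaintext = cipher.decrypt(ciphertext)
print(plaintext)              # b'quarterly sales figures'
```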

Integrity refers to checking data to ensure that it has not been tampered with or modified in any way. As an example, during the IPSec key exchange process, initial negotiations use one of two integrity verification methods, Message Digest 5 (MD5) or the Secure Hash Algorithm (SHA), to ensure that data has not been tampered with during the exchange.
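A minimal sketch of hash-based integrity checking, using a keyed hash (HMAC) from Python's standard library with SHA-256 (rather than the MD5 or SHA-1 options IPSec negotiates): the receiver recomputes the code and rejects the message if it no longer matches.

```python
import hashlib
import hmac

key = b"shared-integrity-key"
message = b"WRITE block 4096 to LUN 7"

# Sender: attach a keyed hash (HMAC) of the message.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Receiver: recompute the HMAC and compare; any tampering changes the tag.
def verify(message: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

print(verify(message, tag))                        # True
print(verify(b"WRITE block 4096 to LUN 8", tag))   # False (modified in transit)
```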

Hopefully this article has given you some understanding of fundamental security concepts. In the next Storage Basics article, we will identify the protocols, processes and procedures used to implement these concepts in a SAN environment.
