10 Cloud Deployments Worth a Look


Compiling this list was a little like trying to name the top 10 classic rock songs of all time. Nobody is going to agree with your selection, as there are so many good ones to choose from, and it largely depends on personal preference. But here are the cloud deployments I believe are the most interesting out there:

1. Internal Cloud to Boost Sales

EMC isn’t just hyping the cloud. It is actively pursuing it within its internal IT organization in the form of cloud-based services for internal users. It has built an infrastructure based on EMC hardware and software, Dell servers and VMware, and it offers users a virtual desktop via its cloud.

Its overall intention, however, is not to provide all services and applications via the cloud. Its model is to offer support from the bare metal up to the OS. That gives users a platform on which to build or have hosted whatever applications they desire, said Chris Asing, senior manager of Cloud Services at EMC IT. He calls this Infrastructure as a Service (IaaS).

“EMC IT’s first self-service IaaS offering for all of EMC is called Cloud9 Sandbox,” said Asing. “Any person at EMC can rapidly create 10 VMs and share content in the sandbox with friends, their department or all of EMC. This also enables our sales engineers to configure software, such as Greenplum, to build demonstration environments as proofs of concept in a few clicks without having to engage IT.”
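
To make the self-service idea concrete, here is a hypothetical sketch of what requesting a batch of sandbox VMs through a simple internal REST API might look like. The endpoint, payload fields and token below are invented for illustration; EMC’s actual Cloud9 Sandbox interface is not public.

```python
# Hypothetical sketch of self-service VM provisioning against an internal
# IaaS REST API. The endpoint, payload fields, and token are invented for
# illustration only; they are not EMC's actual Cloud9 Sandbox interface.
import requests

API = "https://cloud9.example.internal/api/v1"   # hypothetical endpoint
TOKEN = "replace-with-real-token"                # hypothetical auth token

def create_sandbox_vms(owner, count=10, template="linux-base"):
    """Request a batch of sandbox VMs from the self-service portal."""
    resp = requests.post(
        f"{API}/sandboxes",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"owner": owner, "vm_count": count, "template": template,
              "share_with": "department"},   # e.g. 'friends', 'department', 'all'
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()   # expected to contain the new VM identifiers

if __name__ == "__main__":
    print(create_sandbox_vms("sales-engineer-01"))
```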

2. Building a Smarter Cloud

IBM may be guilty of over-hype with its “building a smarter planet” ad campaign. But it is certainly making headway with a smarter cloud. While most clouds started as low-end commodity services, IBM’s intent is to provide an enterprise-class, secure cloud.

The company is clearly seeking to seduce more large-scale customers with an appealing vision of high availability, excellent performance, top-level security, isolation of data/apps and other features that can typically be provided only via an expensive internal data center.

“This level of choice and control translates into capabilities customized to businesses’ needs and priorities and can enable organizations to get what they and their partners and customers need, as they need it – from advanced analytics and business applications to IT infrastructure like virtual servers and storage or access to tools for testing software code,” said Charles King, an analyst with Pund-IT. “All services will be securely deployed from IBM’s global network of cloud data centers.”

Several other services are part of this SmartCloud rollout, including IBM Workload Deployer, Social Business in the Cloud, IBM SAP Managed Application Services and Lotus Domino Utility Server for LotusLive.

3. Clouding the Desktop

Manufacturing giant Applied Materials used the cloud to eliminate high-end workstations from the desktop. The philosophy of Deputy CIO Jay Kerley was simple: why have big-ticket hardware and software real estate in every cubicle? Instead, the company virtualized the hardware and key applications. Users need just a simple network connection and a screen locally, plus a tiny desktop blade backed by a graphics processing unit (GPU) for those who must view complex graphical images.

“Our users can now share computer aided design (CAD) files and applications across a global network of manufacturing sites,” said Kerley.

4. Cloud DR

There are plenty of cloud-based disaster recovery (DR) solutions out there, but Jeff Rountree, global network manager of the Pump Solutions Group, implemented an interesting one. The company uses the cloud for backup and DR purposes. Riverbed Whitewater accelerators act as a backup target for his Symantec Backup Exec software, with the Whitewater box also encrypting, deduplicating and compressing data as it is transmitted to AT&T, which acts as the actual cloud service provider.

This setup allows Rountree to keep cloud costs down, as he doesn’t store multiple copies of documents in the cloud, where storage is typically charged on a per-GB basis. Backup windows have been cut in half, and he can recover data much faster in the event of an emergency. As one copy of everything is retained onsite in the Riverbed box, he can also replicate both locally and remotely with the cloud provider. Cloud storage management is done on the Whitewater appliance.
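
A back-of-envelope calculation shows why reducing data before it leaves the appliance matters when the provider bills per GB. The ratios and price below are illustrative assumptions, not figures from Pump Solutions Group, Riverbed or AT&T.

```python
# Back-of-envelope sketch of how deduplication and compression shrink a
# per-GB cloud bill. The data set size, ratios and price are assumptions
# for illustration only.

def monthly_cloud_cost(logical_gb, dedup_ratio, compression_ratio, price_per_gb):
    """Cost after reducing the logical backup set before it leaves the appliance."""
    stored_gb = logical_gb / (dedup_ratio * compression_ratio)
    return stored_gb * price_per_gb

logical_backup_gb = 20_000          # 20 TB of backup data (assumed)
price = 0.10                        # $/GB-month, assumed cloud rate

raw = monthly_cloud_cost(logical_backup_gb, 1.0, 1.0, price)
reduced = monthly_cloud_cost(logical_backup_gb, 6.0, 1.5, price)   # assumed 6:1 dedup, 1.5:1 compression

print(f"Without reduction: ${raw:,.0f}/month")
print(f"With dedup + compression: ${reduced:,.0f}/month")
```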

“The benefits of the cloud are cost reduction, no more tape restores, more flexible DR and saving up to two hours per day in administrative overhead,” said Rountree.

Other providers are coming out with strong cloud DR technologies. Tom Trainer, director of product marketing at open source storage software provider Gluster, points to the recent outage at Amazon Web Services (AWS) to highlight the need for DR in the cloud.

“As cloud computing becomes more ubiquitous, so must cloud user DR practices become more frequently employed and well implemented,” he said. For an AWS-based cloud, for example, Gluster N-Way Replication supports synchronous data replication between AWS Availability Zones. He also recommends asynchronous replication to provide protection across larger distances.
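
The trade-off Trainer describes comes down to when a write is acknowledged. The sketch below is a generic illustration of synchronous replication (acknowledge only after every replica has the data, suited to nearby availability zones) versus asynchronous replication (acknowledge locally, ship changes later, suited to long distances). It is not Gluster’s implementation.

```python
# Generic sketch of synchronous vs. asynchronous replication semantics.
from queue import Queue

class Replica:
    def __init__(self, name):
        self.name, self.data = name, {}
    def write(self, key, value):
        self.data[key] = value

def write_sync(replicas, key, value):
    for r in replicas:                  # caller waits for every copy
        r.write(key, value)
    return "ack"                        # acknowledged only when all replicas are current

def write_async(primary, shipping_queue, key, value):
    primary.write(key, value)           # acknowledged after the local write
    shipping_queue.put((key, value))    # replayed to the remote site later
    return "ack"

def drain(shipping_queue, remote):
    while not shipping_queue.empty():
        remote.write(*shipping_queue.get())

local_a, local_b, remote = Replica("az-1"), Replica("az-2"), Replica("far-site")
q = Queue()
write_sync([local_a, local_b], "img-001", b"...")
write_async(local_a, q, "img-002", b"...")
drain(q, remote)   # the remote site catches up once the WAN link ships the queue
```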

5. Cloud Health

You might think hospitals would be reluctant to embrace the cloud, yet they are among the first wave of innovators. Banner Health, for example, has set up a simulation hospital for training medical personnel and built a digital image system for rapid sharing of diagnostic information captured by x-rays and MRIs.

The Banner Simulation Medical Center in Mesa, Ariz., is a 55,000-square-foot facility. Its “patients” are 71 computerized mannequins used to train more than 1,800 nurses annually. The health care provider has also added computer-based simulations, accessed through a gaming console, to evaluate surgical skills, train doctors on new techniques and help improve manual dexterity.

On the digital imaging side, its Picture Archiving and Communication System (PACS) captures, transmits, displays and stores digital images from x-ray, MRI, CT/CAT scan, nuclear medicine and ultrasound equipment. Images that used to take hours or days to access are now available within minutes. These images can be massive – 10 MB up to 5 GB – and the volume is high as well: one Banner facility in Phoenix generates 2 million images per month.

This whole setup is underpinned by the cloud. Banner uses NetApp’s (formerly Bycast’s) StorageGRID object storage software to manage the PACS images. Following a six-month trial of StorageGRID, Banner put in a 300 TB cloud-based storage grid at its main data center in Arizona (with plans to scale to 1.2 PB) and a 70 TB grid at its secondary data center in Greeley, Colo. Both data centers use HP Medical Archive Solutions (MAS) with HP StorageWorks Modular Smart Arrays integrated with HP ProLiant DL380 servers. Data migrates between tiers depending on policies set by the different departments. For example, images for a patient who is currently undergoing treatment stay in Tier 1 storage but later migrate to a lower tier.
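
The tiering logic is essentially a policy lookup. Here is a minimal sketch of the kind of rule described above, where images for patients in active treatment stay on the top tier and older studies migrate down. The thresholds and tier names are assumptions, not Banner’s actual StorageGRID policies.

```python
# Sketch of policy-driven tiering: active patients stay on Tier 1,
# older studies migrate to lower tiers. Thresholds are assumed.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ImageRecord:
    study_id: str
    patient_active: bool
    last_accessed: date

def assign_tier(image, today=None, archive_after_days=180):
    today = today or date.today()
    if image.patient_active:
        return "tier-1"                           # fast primary storage
    if today - image.last_accessed > timedelta(days=archive_after_days):
        return "tier-3"                           # low-cost archive grid
    return "tier-2"

images = [
    ImageRecord("mri-8812", patient_active=True,  last_accessed=date(2011, 5, 1)),
    ImageRecord("xr-1034",  patient_active=False, last_accessed=date(2010, 9, 12)),
]
for img in images:
    print(img.study_id, "->", assign_tier(img, today=date(2011, 6, 1)))
```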

On the network side, Banner uses Cisco’s Wide Area Application Services (WAAS), with Cisco 281 Integrated Services Routers (ISR) and Cisco WAE 512 appliances at remote sites, and a Cisco 7200 Series Router and WAE 7326 appliances in Greeley, to accelerate application and file performance and minimize WAN bandwidth usage. To monitor and troubleshoot potential cloud issues, Banner uses Plixer’s Scrutinizer NetFlow and sFlow Analyzer to pull data from the Cisco boxes. Scrutinizer allows IT to view the top talkers, applications and protocols, and to make sure the right traffic has priority within the cloud.
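
A “top talkers” report boils down to aggregating flow records by source and ranking the heaviest senders. The sketch below illustrates that idea with made-up sample flows; it is a generic illustration, not Plixer Scrutinizer’s implementation.

```python
# Generic sketch of a "top talkers" report: aggregate bytes per source
# address from flow records and rank the heaviest senders. Sample data only.
from collections import Counter

# Each flow record: (source_ip, dest_ip, protocol/port, bytes)
flows = [
    ("10.1.1.5",  "10.2.0.9", "tcp/445", 9_200_000),
    ("10.1.1.5",  "10.3.0.2", "tcp/104", 52_000_000),
    ("10.1.2.7",  "10.2.0.9", "tcp/80",  1_100_000),
    ("10.1.3.11", "10.3.0.2", "tcp/104", 48_500_000),
]

def top_talkers(flow_records, n=3):
    totals = Counter()
    for src, _dst, _proto, nbytes in flow_records:
        totals[src] += nbytes
    return totals.most_common(n)

for ip, total in top_talkers(flows):
    print(f"{ip}: {total / 1e6:.1f} MB")
```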

6. Cloud Convergence

With so many areas of IT having succumbed to convergence, it makes sense that the coming together of storage and networking would be accelerated by the cloud.

“To support cloud computing,” says Tom Nolle, president of CIMI in Voorhees, N.J., “you would have to look for more robust connectivity within data centers and higher capacity between them, and the networks have to be more available and reliable.”

Accordingly, Brocade’s Virtual Cluster Switching (VCS) Layer 2 Ethernet technology has three components: Ethernet Fabric, Distributed Intelligence and Logical Chassis. Any switch in the system can communicate with any other switch, as the Ethernet Fabric acts as a single logical switch connecting all servers and devices. Rather than having to individually manage each switch at the top of a rack or in a blade chassis, each physical switch is managed as if it were a port module in a chassis.

More than 1,000 ports can be self-aggregated into a single logical chassis, without manual configuration or separate aggregation switches. The fabric can have as few as 48 ports or scale up into the thousands of ports, and still look to the rest of the network like just another Layer 2 switch. This enables a flatter, faster network. VCS technology is incorporated into Brocade’s VDX family of data center switches primarily to facilitate cloud computing.
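
The management model is easier to see as a data structure: many physical switches presented as port modules of one logical chassis, so a configuration change is applied once rather than per top-of-rack switch. The toy sketch below is purely illustrative and is not Brocade’s VCS software.

```python
# Toy sketch of the "many switches, one logical chassis" management model.
class PhysicalSwitch:
    def __init__(self, switch_id, port_count=48):
        self.switch_id = switch_id
        self.ports = {p: {} for p in range(port_count)}   # per-port config

class LogicalChassis:
    def __init__(self, switches):
        self.switches = switches
    def port_count(self):
        return sum(len(s.ports) for s in self.switches)
    def configure_vlan(self, vlan_id):
        # One operation fans out to every member switch, as in a single chassis.
        for s in self.switches:
            for cfg in s.ports.values():
                cfg["vlan"] = vlan_id

fabric = LogicalChassis([PhysicalSwitch(f"rack-{i}") for i in range(24)])
fabric.configure_vlan(110)
print(fabric.port_count(), "ports managed as one logical switch")   # 1152
```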

7. Cloud Backup

Zmanda has upgraded its cloud backup solution. Known as ZCB 4.0, it can store backups directly in the cloud, on local disks, or as a hybrid backup that combines the two. Improved application support allows users to back up selected data stores in Microsoft Exchange and Microsoft SQL Server and to perform differential backups of Microsoft SharePoint. Zmanda’s ZCB Global Dashboard is a web-based interface that monitors backup activity and cloud storage used across multiple ZCB installations, said Chander Kant, Zmanda’s CEO.
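
The hybrid approach simply keeps one copy on local disk for fast restores and pushes a second copy off-site. The sketch below illustrates that pattern with Amazon S3 (via boto3) standing in as the cloud target; it is not ZCB’s code, and the bucket name is an assumption.

```python
# Sketch of a hybrid (local + cloud) backup: copy to local disk for fast
# restores, upload a second copy to cloud storage. S3 is a stand-in target.
import shutil
from pathlib import Path

import boto3

def hybrid_backup(source: Path, local_dir: Path, bucket: str) -> Path:
    local_dir.mkdir(parents=True, exist_ok=True)
    local_copy = local_dir / source.name
    shutil.copy2(source, local_copy)                                   # local copy
    s3 = boto3.client("s3")
    s3.upload_file(str(source), bucket, f"backups/{source.name}")      # off-site copy
    return local_copy

if __name__ == "__main__":
    hybrid_backup(Path("exchange_store.bak"), Path("/backups/local"),
                  bucket="example-zcb-offsite")    # hypothetical bucket name
```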

8. Multi-Site, Low-Cost Replication

Panzura Alto Cloud Controllers provide options to replicate data across multiple locations within a single cloud provider, multiple cloud providers, or a combination of public and private cloud storage repositories.

“Combining the off-site and professionally managed infrastructure of a reputable cloud provider with the ease-of-use of a high-performance cloud controller makes DR simpler and more cost-effective, while at the same time ensuring that critical data is available when and where needed, even under the worst of circumstances,” said Howard Dratler, CEO of Panzura.

9. Faster Restores

Jeff Bell, director of marketing at Zetta, said that traditional offsite storage choices are too slow to meet enterprise recovery requirements.

“Tape is very slow and unreliable,” he said. “Online backup services require lengthy data retransmission before restoration is complete.”

Zetta, he said, maintains a mountable copy of data ready for instant access directly from the Zetta data center or for rapid replication back to the customer’s location. No tapes or data reformatting is required. Data is always online and accessible for validation. Continuous scrubbing detects and corrects any storage errors to eliminate failed restores.
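
Continuous scrubbing of this kind amounts to periodically recomputing checksums and repairing any copy that no longer matches a known-good replica, before a restore is ever attempted. The sketch below is a generic illustration of that idea, not Zetta’s service.

```python
# Generic sketch of scrubbing: recompute each object's checksum and repair
# any copy whose hash no longer matches a known-good replica.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def scrub(store: dict, replica: dict, manifest: dict) -> list:
    """Compare stored objects against recorded checksums; repair from replica."""
    repaired = []
    for key, expected in manifest.items():
        if sha256(store[key]) != expected:
            store[key] = replica[key]    # pull a clean copy before it is ever needed
            repaired.append(key)
    return repaired

primary  = {"vm-image-01": b"good bytes", "db-dump-02": b"bit rot here"}
replica  = {"vm-image-01": b"good bytes", "db-dump-02": b"good bytes"}
manifest = {k: sha256(v) for k, v in replica.items()}   # checksums taken at write time

print("repaired:", scrub(primary, replica, manifest))   # ['db-dump-02']
```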

10. Cloud Development

Application development is also moving to the cloud. eXo has developed a platform as a service (PaaS) offering called eXo Cloud IDE that makes it easy to deploy Java applications in the cloud. The code lives in the cloud, accessible from any computer with Internet access. As a result, moving an app from development to production is faster.

This approach is available for VMware’s Cloud Foundry PaaS. It lets developers create Java, Spring, Ruby and other types of applications and deploy them to Cloud Foundry in minutes, all in the cloud.

Drew Robb is a freelance writer specializing in technology and engineering. Currently living in California, he is originally from Scotland, where he received a degree in geology and geography from the University of Strathclyde. He is the author of Server Disk Management in a Windows Environment (CRC Press).
