6. Locate Your DR Site Strategically – But Be Aware of Latency
As noted earlier in this article, a failover site does you no good if it gets hit by the same disaster that strikes your primary site. "If you are on the East Coast and you are by the water, you should not use a facility that's thirty miles away," explained Wynn. "You should be looking at a couple of hundred miles or maybe another state to your west that's centrally located. If you have the capability and the dataset to do so, maybe look at something on the other side of the United States."
The difficulty with a failover site located far away is that latency can become an issue. Wynn advised IT departments to carefully consider their SLAs and think about how quickly IT operations need to be back up and running after a traumatic event. When evaluating a distant failover site or a cloud-based solution, they will need to balance geographic safety against acceptable latency.
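To get a feel for that trade-off, the round-trip time a replication link adds can be roughly estimated from distance alone. The sketch below is illustrative only and is not from the article; it assumes light travels through fiber at roughly 200,000 km/s and that real fiber routes run about 1.5x the straight-line distance.

```python
# Rough round-trip latency estimate for a DR replication link.
# Assumptions (illustrative, not from the article): signals travel
# through fiber at ~200,000 km/s, and real routes are ~1.5x the
# straight-line distance due to routing and equipment paths.

FIBER_SPEED_KM_PER_MS = 200.0   # ~200,000 km/s expressed in km per millisecond
ROUTE_FACTOR = 1.5              # fiber paths are rarely straight lines

def round_trip_latency_ms(distance_km: float) -> float:
    """Best-case RTT in milliseconds for a link spanning distance_km."""
    one_way_ms = (distance_km * ROUTE_FACTOR) / FIBER_SPEED_KM_PER_MS
    return 2 * one_way_ms

# A site 30 miles (~48 km) away adds well under 1 ms of RTT;
# a cross-country site (~4,000 km) adds tens of milliseconds,
# which can rule out synchronous replication.
for km in (48, 320, 4000):
    print(f"{km:>5} km -> ~{round_trip_latency_ms(km):.1f} ms RTT")
```

Numbers like these explain Wynn's advice: a few hundred miles adds only single-digit milliseconds, while a cross-country site may force a switch from synchronous to asynchronous replication.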
7. Expect Some Surprises
One of Hull's clients, Prometheus Global Media, has a data center in New York. "They had power generators on the roof, as it turns out, so they were prepared," he recalled.
However, despite their foresight, Prometheus did run into a few unforeseen snags. In order to run, their generators needed fuel, of course. But when the power went down, the elevators weren't working, making it a lot tougher to get the fuel to the roof where it was needed.
Hull, who advises data centers on disaster preparedness, among other topics, said that he had never considered the fact that so many electrical lines run through New York's subway tunnels and what that would mean in the case of a flood. He was surprised when Con Edison turned off the power to his home, which doubles as his workspace, in advance of the storm. The torrent of saltwater also caused an unexpected explosion at a Con Edison substation near his apartment on 14th St.
IT managers at companies of all sizes will need to be flexible and react to unexpected events like these as they occur.
8. Who’s In Charge? Coordinate and Communicate
The IT pros who dealt with Sandy also noted that during the initial response period, it was critical that employees knew who was in charge of a company's response and that people with recovery know-how were able to stay in touch with each other.
"Recovering from major outages like that is about coordination and communication – keeping the lines of communication open so that all the people with all the knowledge are able to take action, they don't have their hands tied," Hull said.
9. Don't Expect to Recover Right Away
Having been on the scene at numerous disasters, Hillis has observed that people often don't understand that things will be vastly different after a major event. As a result, businesses won't be able to recover as quickly as they might think.
For example, he said that business owners imagine that if their building is destroyed, they'll be able to rebuild within a few months – after all, it only takes three months to put up a structure. But after a disaster, local governments often stop issuing building permits for 60 to 90 days, and even when permit issuance does resume, there is often so much work that contractors are overwhelmed. It can be impossible to even find a construction crew. He said companies often need longer-term plans for dealing with the disaster.
In the aftermath of Sandy, FalconStor had employees off work or working remotely for a couple of weeks, even though its building sustained no notable damage. Although the company headquarters was operational, many employees had difficulty getting to the office because of power outages and gas shortages near their homes. When employees did make it in, FalconStor ended the workday early so they could get home before dark.
10. Getting By With Help from Your Friends – And Strangers
"The small businesses think that they built their businesses by themselves, so they can recover by themselves," Hillis said. "But that's just not the reality."
All up and down the East Coast, individuals and small businesses found themselves relying on the generosity of friends, neighbors and strangers in order to make it through the experience. Fortunately, most people were more than willing to assist those in need.
As a Texan, Hillis came to the East Coast with some preconceived notions about "tough" New Yorkers, but he was amazed to see how people within the tech industry reached out to help each other.
"I've seen guys take public transportation three hours each way to go help somebody," recounted Hillis. "These are the IT guys who are helping us up here. They would take a train, they would take a bus, and then they would walk a mile and a half, whatever, just to come down and work with us for six or eight hours and then turn around and leave at 7:00 and redo the whole thing backwards. I can't say enough good things about the people up here. They're fantastic."
The ITDRC had about 50 volunteers working in New York and New Jersey, and Hillis said they were working with "hundreds" of other IT professionals from NY Tech Meetup to get individuals and small businesses the technical assistance they needed. They set up PCs, WiFi access points and other communications equipment at disaster recovery centers so that residents and small businesses could have access to the technology and connectivity they needed. They also provided equipment and installation assistance directly to small business owners.
Hillis concluded his comments on lessons from the hurricane with a plea: "We would love to have people involved. We would love to have their support financially and in kind as well. It's going to take quite some time to recover." Those in the IT industry who would like to assist with disaster recovery can visit the ITDRC's website for more information.