MemeStreams Discussion


This page contains all of the posts and discussion on MemeStreams referencing the following web page: August 7, 2005 - MemeStreams Outage.

August 7, 2005 - MemeStreams Outage
by Rattle at 10:46 pm EDT, Aug 7, 2005

MemeStreams experienced an outage today for eight hours due to a transformer explosion at our primary data center located next to the landmark Bell-South "bat-building" in Nashville, Tennessee. Pictures taken by Industrial Memetics staff of NES workers responding to the situation can be seen here and here.


 
RE: August 7, 2005 - MemeStreams Outage
by Shannon at 12:21 am EDT, Aug 8, 2005

Rattle wrote:
MemeStreams experienced an outage today for eight hours due to a transformer explosion at our primary data center located next to the landmark Bell-South "bat-building" in Nashville, Tennessee. Pictures taken by Industrial Memetics staff of NES workers responding to the situation can be seen here and here.

In charge one week... And he breaks it.


 
RE: August 7, 2005 - MemeStreams Outage
by Elonka at 11:45 am EDT, Aug 8, 2005

Pictures taken by Industrial Memetics staff of NES workers responding to the situation can be seen here and here.

(sigh) It's always a bummer when pics I want to see are buried behind a password. :/


  
RE: August 7, 2005 - MemeStreams Outage
by Catonic at 12:06 pm EDT, Aug 8, 2005

Elonka wrote:

Pictures taken by Industrial Memetics staff of NES workers responding to the situation can be seen here and here.

http://www.nrep.net

Probably out of the price range, though.


  
RE: August 7, 2005 - MemeStreams Outage
by Rattle at 1:39 pm EDT, Aug 8, 2005

Elonka wrote:

(sigh) It's always a bummer when pics I want to see are buried behind a password. :/

I wasn't aware that Flickr blocked viewing photos if you didn't have an account...


 
RE: August 7, 2005 - MemeStreams Outage
by flynn23 at 4:08 pm EDT, Aug 8, 2005

Rattle wrote:
MemeStreams experienced an outage today for eight hours due to a transformer explosion at our primary data center located next to the landmark Bell-South "bat-building" in Nashville, Tennessee. Pictures taken by Industrial Memetics staff of NES workers responding to the situation can be seen here and here.

The fact that the inFlow DC was taken down because an *external* transformer to the building caught fire is worrying enough, but then I think to myself that it could've been worse: the sprinklers(!) inside the DC could've gone off too!


  
What do Fire Chiefs need to know about Data Centers?
by Rattle at 7:15 pm EDT, Aug 9, 2005

flynn23 wrote:

The fact that the inFlow DC was taken down because an *external* transformer to the building caught fire is worrying enough, but then I think to myself that it could've been worse: the sprinklers(!) inside the DC could've gone off too!

I would love to get the full story behind the handling of this outage. From a tradecraft perspective, I'm highly interested. From what I was able to figure out by talking to people, the IDC's backup systems all functioned after the transformer exploded, but the Nashville Fire Chief told them to pull the plug on the facility. The one fellow I spoke to at Inflow didn't want to give me any details, and I didn't really press him on it. After I got the information I needed, I just wished him luck and told him I hoped the rest of their day would go better.

I'm guessing that Inflow does not have much equipment per square foot. In some cases, pulling the plug on a machine room full of equipment is about the best way to create a fire hazard. Computer, storage, and networking equipment, when tightly packed, have an amazing capacity for holding energy in the form of heat. Once the air stops flowing, regardless of whether the equipment is powered down, the temperature can rise rapidly. From what I've been told, and from my past DC experience, exceeding 120F isn't unheard of. This has caused DCs to burn down before. I wonder whether the Fire Chief was aware of this.
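The heat problem above can be put in rough numbers. This is a back-of-envelope Python sketch with made-up figures (room size, residual heat load), not measurements from the Inflow facility:

```python
# Rough back-of-envelope: how fast a sealed machine room heats up
# once cooling stops. All numbers below are illustrative assumptions,
# not measurements from the Inflow facility.

ROOM_VOLUME_M3 = 500.0        # assumed room size
AIR_DENSITY = 1.2             # kg/m^3
AIR_HEAT_CAPACITY = 1005.0    # J/(kg*K), specific heat of air

def temp_rise_per_minute(heat_load_watts: float) -> float:
    """Kelvin per minute of temperature rise in still air,
    ignoring walls and equipment absorbing heat (worst case)."""
    air_mass_kg = ROOM_VOLUME_M3 * AIR_DENSITY
    joules_per_minute = heat_load_watts * 60.0
    return joules_per_minute / (air_mass_kg * AIR_HEAT_CAPACITY)

# Even powered-down racks keep radiating stored heat for a while;
# say the room still dissipates 50 kW right after shutdown.
rate = temp_rise_per_minute(50_000)
print(f"{rate:.1f} K per minute")  # ~5 K/min under these assumptions
```

At roughly 5 K per minute, a room starting near 70F would pass 120F in well under ten minutes, which is the scale of the hazard being described.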

Do you know if their fire suppression system is water or gas? That matters when determining how, or whether, to pull the plug. The equipment can sit and cool down if the environment is (or can be) flooded with gas. If it's water, the best approach is to kill the equipment and keep the cooling running, unless of course there is an actual fire within the facility.
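The water-vs-gas reasoning reduces to a small decision rule. Here it is as an illustrative Python sketch; the rules are a simplification of the paragraph above, not anyone's actual emergency procedure:

```python
# A minimal sketch of the water-vs-gas shutdown decision described
# above. The cases and responses are illustrative simplifications.

from enum import Enum

class Suppression(Enum):
    GAS = "gas"      # e.g. inert-gas flooding
    WATER = "water"  # sprinklers

def shutdown_plan(suppression: Suppression, active_fire: bool) -> str:
    if active_fire:
        # An actual fire inside the facility overrides everything.
        return "kill power, suppress fire"
    if suppression is Suppression.GAS:
        # Gas flooding lets equipment sit and cool without water risk.
        return "flood with gas, let equipment cool in place"
    # Water system, no fire: kill the load but keep the cooling
    # running so trapped heat can't start a fire of its own.
    return "kill equipment, keep cooling running"
```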

I'm guessing Inflow's Disaster Plan does not connect with the city the way it should. A transformer blowing up should not lead to a DC being shut down unless it's pouring smoke into the facility or otherwise poses a direct life-safety threat.

The Fire Chief's job is to be paranoid, so I can understand the call he made. But it raises a larger question: what do Fire Chiefs need to know about data centers when making these decisions? If every external transformer blowing up requires the facility to be shut down, it would be impossible to build a carrier-class facility capable of getting past four nines, except by luck rather than design.
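For a sense of scale on "four nines": the allowed annual downtime can be computed directly, and a single eight-hour outage exceeds even the three-nines budget.

```python
# "N nines" of availability translated into allowed downtime per year.
# An 8-hour outage blows the budget for three nines, let alone four.

HOURS_PER_YEAR = 365 * 24  # 8760

def allowed_downtime_hours(nines: int) -> float:
    availability = 1 - 10 ** -nines
    return HOURS_PER_YEAR * (1 - availability)

for n in (2, 3, 4):
    print(f"{n} nines: {allowed_downtime_hours(n):.2f} h/year")
# 2 nines: 87.60 h/year
# 3 nines: 8.76 h/year
# 4 nines: 0.88 h/year  (about 53 minutes)
```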

While pondering that question, you could watch this music video of the Nashville FDP-EMS doing a training exercise.


   
RE: What do Fire Chiefs need to know about Data Centers?
by Catonic at 9:04 pm EDT, Aug 9, 2005

Rattle wrote:
I'm guessing Inflow's Disaster Plan does not connect with the city the way it should. A transformer blowing up should not lead to a DC being shut down unless it's pouring smoke into the facility or otherwise poses a direct life-safety threat.

I want to see this Disaster Plan. Does it conform to NIST 800-34?


   
RE: What do Fire Chiefs need to know about Data Centers?
by flynn23 at 12:23 pm EDT, Aug 10, 2005

Rattle wrote:

flynn23 wrote:

The fact that the inFlow DC was taken down because an *external* transformer to the building caught fire is worrying enough, but then I think to myself that it could've been worse: the sprinklers(!) inside the DC could've gone off too!

I would love to get the full story behind the handling of this outage. From a tradecraft perspective, I'm highly interested. From what I was able to figure out by talking to people, the IDC's backup systems all functioned after the transformer exploded, but the Nashville Fire Chief told them to pull the plug on the facility. The one fellow I spoke to at Inflow didn't want to give me any details, and I didn't really press him on it. After I got the information I needed, I just wished him luck and told him I hoped the rest of their day would go better.

I'm guessing that Inflow does not have much equipment per square foot. In some cases, pulling the plug on a machine room full of equipment is about the best way to create a fire hazard. Computer, storage, and networking equipment, when tightly packed, have an amazing capacity for holding energy in the form of heat. Once the air stops flowing, regardless of whether the equipment is powered down, the temperature can rise rapidly. From what I've been told, and from my past DC experience, exceeding 120F isn't unheard of. This has caused DCs to burn down before. I wonder whether the Fire Chief was aware of this.

Do you know if their fire suppression system is water or gas? That matters when determining how, or whether, to pull the plug. The equipment can sit and cool down if the environment is (or can be) flooded with gas. If it's water, the best approach is to kill the equipment and keep the cooling running, unless of course there is an actual fire within the facility.

I'm guessing Inflow's Disaster Plan does not connect with the city the way it should. A transformer blowing up should not lead to a DC being shut down unless it's pouring smoke into the facility or otherwise poses a direct life-safety threat.

The Fire Chief's job is to be paranoid, so I can understand the call he made. But it raises a larger question: what do Fire Chiefs need to know about data centers when making these decisions? If every external transformer blowing up requires the facility to be shut down, it would be impossible to build a carrier-class facility capable of getting past four nines, except by luck rather than design.

While pondering that question, you could watch this music video of the Nashville FDP-EMS doing a training exercise.

Those are all good points. They certainly need to do a post-mortem to coordinate better with NFD on future events. Transformers fail all the time, so I can see this being a recurring event they need to plan around. Your point about killing power and the heat that builds up is probably the most salient. That DC is near capacity, and I'm sure the cooling is just barely keeping it operational. Simply killing it would raise internal machine temps to pretty high levels very quickly. And given that it's in a multi-tenant building, you can't just open the doors for some fresh air.

The facility has water fire suppression, which could also be dangerous. If the power was not killed before the system was activated (and in the right way, since switching power supplies still have some juice after main feeds are gone), then dumping water on the equipment could cause another fire.

Who did you talk to over there?

