Touring the PhoenixNAP Data Center


PhoenixNAP Cooling Infrastructure

On the other side of the wall from the data center floor is a hallway with large 80-ton Stulz Computer Room Air Handler (CRAH) units. These circulate air through the facility and keep the racks cool. The heat exchange happens in these hallways, but the real cooling happens outside.

PhoenixNAP STULZ CRAH Unit

Outside the facility, there are chillers that take the water heated by the equipment exhaust and chill it for re-circulation. This facility has a yard enclosed by high, steel-reinforced walls that houses the power and cooling equipment. First, we are going into the main chiller plant, but as the second floor is built out and servers rise in power, there is a concrete pad already poured for an additional chiller plant, visible in the photo below.

PhoenixNAP Large Concrete Pad In Front Of Chiller Plant

Inside the chiller plant, there are two 700-ton chillers and two 720-ton fully-magnetic chillers.

PhoenixNAP Chillers 1

These chillers take the 55F water coming back from the data center (after a few steps we will get to later) and return it to the data center at 44F.

PhoenixNAP Chillers 2
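Those two temperatures, together with the chiller capacities above, are enough for a rough sense of how much water this loop moves. Here is a quick back-of-the-envelope sketch in Python using the common rules of thumb (1 ton = 12,000 BTU/hr and GPM ≈ BTU/hr ÷ (500 × ΔT)); we did not measure the facility's live load or actual flow rates, so treat the result as a nameplate upper bound, not an operating figure.

```python
# Back-of-the-envelope chilled-water flow estimate from the figures in the
# article: 55F return / 44F supply and two 700-ton plus two 720-ton chillers.
# Uses the standard rules of thumb 1 ton = 12,000 BTU/hr and
# BTU/hr ~= GPM * 500 * delta_T_F (500 ~= 8.34 lb/gal * 60 min/hr * 1 BTU/lb-F).
# Nameplate capacity, not live load, so this is an upper bound.

def chilled_water_gpm(tons: float, supply_f: float, return_f: float) -> float:
    """Approximate water flow (GPM) needed to move `tons` of cooling
    across the given supply/return temperature split."""
    delta_t = return_f - supply_f          # 55F - 44F = 11F here
    btu_per_hr = tons * 12_000             # refrigeration tons -> BTU/hr
    return btu_per_hr / (500 * delta_t)    # rule-of-thumb water-side formula

total_tons = 2 * 700 + 2 * 720             # the four chillers in the plant
print(f"~{chilled_water_gpm(total_tons, 44, 55):,.0f} GPM at full nameplate load")
# -> roughly 6,200 GPM if every chiller were running flat out
```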

Outside of the main chiller plant hut, there are stages for media fill cooling towers (a method to increase water-to-air heat exchange).

PhoenixNAP Outdoor Cooling Towers And Tanks

Each chiller has an 8,500-gallon tank that will keep the water chilled for approximately 20 to 25 minutes in the event of an outage, so the cool water supply is not lost.

PhoenixNAP Outdoor Above 1
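If you are curious how a figure like 20 to 25 minutes falls out of a tank size, here is a minimal sketch of the usual sensible-heat estimate. We did not get figures for the live load each tank backs or the water already sitting in the loop, so the per-tank load number in the example is purely an illustrative assumption.

```python
# Rough sketch of how chilled-water ride-through time is usually estimated:
# sensible heat stored in the tank (gallons * 8.34 lb/gal * usable delta-T in F,
# giving BTU) divided by the heat load it has to absorb. The 150-ton load share
# below is an assumption for illustration only; the article quotes roughly
# 20-25 minutes, which depends on live load and loop volume we do not know.

def ride_through_minutes(gallons: float, delta_t_f: float, load_tons: float) -> float:
    stored_btu = gallons * 8.34 * delta_t_f      # sensible heat in the stored water
    load_btu_per_min = load_tons * 12_000 / 60   # tons -> BTU/hr -> BTU/min
    return stored_btu / load_btu_per_min

# Hypothetical example: an 8,500-gallon tank, the loop's 11F split, and an
# assumed 150-ton share of the load on that tank.
print(f"{ride_through_minutes(8_500, 11, 150):.0f} minutes")   # ~26 minutes
```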

Feeding the facility are two water mains coming in from different directions, ensuring there is a water supply even if one feed is shut off.

PhoenixNAP Water Pipe And Training Fixture

As a quick aside, if you are wondering about the fixture on the right side of the photo above, where cables go to nowhere, the answer may surprise you. PhoenixNAP uses its own maintenance staff instead of relying on outside contractors. This area is actually a small portion of the training facility next to that team's offices and lockers. The various fixtures are there to show, train, plan, and test before doing something on the production floor.

There are also two 75,000-gallon water tanks on the campus for backup capacity. One can see those just behind the generators in the distance below.

PhoenixNAP Outdoor Generators And Water Tanks

Phoenix is not known for having the best water. Joe, who filmed the video, is from Alaska, where I personally think the best US water comes from since it is usually glacial melt. Phoenix water is the kind where you can taste the minerals, which is why many households filter it before drinking. In the data center context, the same minerals and pH imbalances can cause buildup, corrosion, and other challenges in water lines. As a result, the facility has its own water conditioning stage on-site to ensure that the water meets specifications for the cooling loops.

PhoenixNAP Water Conditioner
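To make "meets specifications" a bit more concrete, here is a tiny illustrative check of incoming-water readings against the kinds of limits a chilled-water loop spec typically sets (pH window, hardness, conductivity). The threshold values are hypothetical placeholders, not PhoenixNAP's actual numbers, which we did not get.

```python
# Illustrative only: check a water sample against an assumed loop spec.
# The limits below are placeholders, not PhoenixNAP's real specifications.

SPEC = {
    "ph":              (7.0, 9.0),   # acceptable pH window (assumed)
    "hardness_ppm":    (0, 200),     # CaCO3 hardness ceiling (assumed)
    "conductivity_us": (0, 1500),    # microsiemens/cm ceiling (assumed)
}

def out_of_spec(sample: dict) -> list[str]:
    """Return the parameters in `sample` that fall outside the assumed spec."""
    failures = []
    for key, (lo, hi) in SPEC.items():
        value = sample.get(key)
        if value is not None and not (lo <= value <= hi):
            failures.append(f"{key}={value} (allowed {lo}-{hi})")
    return failures

# Example reading typical of hard municipal water before conditioning.
print(out_of_spec({"ph": 8.1, "hardness_ppm": 260, "conductivity_us": 900}))
# -> ['hardness_ppm=260 (allowed 0-200)']
```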

Next up is the last major focus of our tour: the power infrastructure.

10 COMMENTS

  1. STH “hey we’re going to do something new”
    Me reading: “holy moly they’ve done one of the best tours I’ve seen in their first tour.”

  2. Fantastic article and especially video. Plenty of good practices to learn from. I’m surprised they signed off on this; if I was a customer I wouldn’t be too happy. But a fascinating look into a world that is usually off-limits.

  3. We manage ~3000 servers at many facilities. I’ll say from a customer perspective I don’t mind this level of tour. I don’t want someone inventorying our boxes, but this isn’t a big deal. They’re showing racks, connectivity, hallways, cooling and power. The only other one I’d really be sensitive about is if they took video inside the office an on-site tech uses, because there may be whiteboards or papers showing confidential bits. We use Supermicro and Dell servers. No big secret.

  4. I just semi-retired from a job of 9 years supporting 100+ quants, ~5K servers in a small/medium-sized financial firm’s primary data center, in the suburbs of a major US city.

    I gave tours every summer for the city-based staff…They had no idea of the amount of physical/analog gear required to deliver their nice digital world.

    …The funniest part was that each quant had their own server, which they used to run jobs locally and send jobs into the main cluster…They always wanted to see THEIR server!

  5. Always neat to see another data center; it reminds me of the time I worked in one a decade ago.

    I am surprised by the lack of solar panels on the roof or other infrastructure areas outdoors. It is a bit of an investment to build out but helps in efficiency a bit. In addition it can assist when the facility is on UPS power and needs to transition to generator during the day. Generally this is seen as a win-win in most modern data centers. I wonder if there was some other reason for not retrofitting the facility with them?

    The generators were mentioned briefly, but I fathom an important point needs to be made: they are likely in an N+2 redundant configuration. Servicing them can easily take weeks, which is far too long to go without any sort of redundancy. Remember that redundancy is only good for when you have it, and any loss needs to be corrected immediately before the next failure occurs. Most things can be resolved relatively quickly, but generators are one that can take extended periods of time. This leads me to wonder why there are only two UPS battery banks. I would have thought there would be three or four, with a similar level of redundancy as a whole and an increased number of transfer switches (which themselves can be a point of failure) to ensure uptime. Perhaps their facility UPS is more ‘modular’ in that ‘cells’ can be replaced individually in the functional UPS without taking down the whole UPS?

    While not explicitly stated as something data centers should have in their designs, keeping a perpetual rotation of build-out rooms has emerged in practice. This is all in the pursuit of increasing efficiency, which pays off every few generations. My former employer a decade ago had moved to enclosed hot/cold aisles to increase efficiency in the new build-out room. As client systems were updated, they went into the new room, and eventually we’d have a void where there were no clients and the facilities teams would then tear the room down for the next wave of upgrades.

    The security aspect is fun. Having someone, be it a guard or a tech, watch over third parties working on racks is commonplace. The one thing to note is how difficult it is to get an external contractor in there (think Dell, HPE techs as part of a 4-hour SLA contract) vs. normal client staff to work on bare metal. Most colocations can do it, but you have to work with them ahead of time to ensure it can happen in the time frame necessary to adhere to those contracts. (Dell/HPE etc. may not count the time it takes to get through security layers toward those SLA figures.)

    There is a trend to upgrade CRAC systems to greater cooling capacity as compute density increases. 43 kW per rack is a good upper bound for most systems today, but the extreme scenarios have 100 kW racks on the horizon. Again, circling back to the build-out room idea, perhaps it is time to migrate to water cooling at rack scale?

    I probably couldn’t tell in the pictures or the video, but one of the wiser ideas I’ve seen done in a data center is to color code everything. Power has four dedicated colors for feeds plus a fifth color exclusively used for quick spares in emergencies, for example. All out-of-band management cables had their own dedicated colors vs. primary purposes. If a cable had an industry standard color (fiber), the tags on them were color coded on a per-tenant basis for identification. Little things that go a long way in service and maintainability.

  6. Thank God STH did this. I was like oh that’s a security nightmare. You did a really good job showing a lot while not showing anything confidential.

    What I’d like to know and see is how this compares to EU and Singapore. Are those big facilities or like a small cage in another DC? How is the network between them?

    Equinix metal is getting a lot of wins.

  7. STH is the perfect site for datacentre reviews. You’ve obviously been to others and you’re not old fogey boring like traditional datacentre sites.

  8. Patrick,

    If it’s possible, I suggest touring a SuperNap (Switch) facility if they’d allow for it.

    That is one of the most amazing DCs I’ve ever been in. Their site does have some pics. It’s amazing.

    Philip
