Touring the PhoenixNAP Data Center

PhoenixNAP Power Infrastructure

Although many are familiar with in-rack PDUs, there are several stages of breakers and distribution gear on the data center floors before power reaches a rack. The main feed into the on-floor power distribution units (the Eaton units pictured below) is 480V. As with the cooling tour, the power story will once again take us outside.

PhoenixNAP Floor 2 480V Power Distribution

As with many facilities, there are A+B power feeds that enter on different sides and take different paths into the facility. This data center is a medium-voltage facility fed at 12,470V. One can get a sense of just how large the power infrastructure is here with Joe filming some B-roll in the distance below.
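
As a rough back-of-the-envelope illustration of why facilities take a medium-voltage feed and only step down to 480V close to the load, here is a minimal sketch. The 12,470V and 480V figures are from the tour; the 5MW load and 0.95 power factor are hypothetical numbers we picked for the example. For the same power, current scales inversely with voltage, so the medium-voltage feed carries far less current than the on-floor distribution.

```python
# Rough sketch: current needed to deliver the same power at the
# 12,470V medium-voltage feed vs. the 480V on-floor distribution.
# The 5MW load and 0.95 power factor are hypothetical, not PhoenixNAP figures.
import math

load_watts = 5_000_000          # hypothetical 5MW of critical load
power_factor = 0.95             # assumed power factor
voltages = [12_470, 480]        # three-phase line-to-line voltages (V)

for v in voltages:
    # Three-phase power: P = sqrt(3) * V_LL * I * PF  =>  I = P / (sqrt(3) * V_LL * PF)
    amps = load_watts / (math.sqrt(3) * v * power_factor)
    print(f"{v:>6} V feed -> ~{amps:,.0f} A per phase")
```

Under those assumptions, the 12,470V feed needs on the order of 240A per phase while the 480V side needs over 6,000A, which is why the medium-voltage gear sits outside and the 480V Eaton units sit near the racks.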

PhoenixNAP Outdoor Power Infrastructure UPS 12470V

Outside the facility, there is a fairly standard power conditioning and UPS stage. One thing that is a bit different here is that the structures housing the long rows of batteries have ample air conditioning. During the tour, it was ~100F / 38C outside, and it gets hotter in the summer. We discussed how Phoenix does not get hurricanes, but it also usually has clear skies, as can be seen below. That means all of this equipment has to withstand many hours of direct sunlight with no cloud cover, along with high temperatures.

PhoenixNAP Outdoor UPS

The A+B battery banks are designed to carry the entire facility for 60 seconds in the event of a power failure.

PhoenixNAP Outdoor Generators And Water Tanks

If utility power were cut, six on-site 2MW diesel generators start within 3 seconds to provide power. A megawatt is commonly cited as enough power for 400-900 homes. The generators sit in larger structures that almost look like shipping containers with vents and exhaust stacks. Each of these structures houses two generators, and there is on-site diesel storage. As with the water, there are multiple supply contracts to ensure continuity. For context, the 2021 Texas freeze that brought down the internal network of one of the country's largest insurers came down to facilities in Texas that lacked proper infrastructure and supply contracts.
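
To tie the numbers in the last two paragraphs together, here is a quick arithmetic sketch. Only the 60-second ride-through, the six 2MW generators, the 3-second start figure, and the 400-900 homes-per-megawatt rule of thumb come from the article; the 10MW critical load is a hypothetical value for illustration, and the math ignores inverter and conversion losses.

```python
# Back-of-the-envelope math tying the UPS and generator figures together.
# Figures from the article: 60s ride-through, 6 x 2MW generators, 3s start,
# 400-900 homes per MW. The 10MW critical load is an assumed example value.

generators = 6
gen_capacity_mw = 2.0
ups_ride_through_s = 60
gen_start_s = 3                 # stated start time; picking up load can take longer

total_gen_mw = generators * gen_capacity_mw
print(f"Total generator capacity: {total_gen_mw:.0f} MW")
print(f"Rough home equivalent: {total_gen_mw * 400:,.0f} - {total_gen_mw * 900:,.0f} homes")

# Energy the batteries must hold to carry a hypothetical 10MW critical load
# for the full 60-second ride-through window (losses ignored).
critical_load_mw = 10.0
ride_through_mwh = critical_load_mw * ups_ride_through_s / 3600
print(f"Battery energy for ride-through: ~{ride_through_mwh * 1000:.0f} kWh "
      f"({ups_ride_through_s}s at {critical_load_mw:.0f} MW)")
print(f"Margin between ride-through and generator start: {ups_ride_through_s - gen_start_s}s")
```

The point of the sketch is that the batteries only need to bridge the short window until the generators are carrying the load, which is why a 60-second ride-through is enough for an entire facility.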

PhoenixNAP Outdoor Above 1

Although not related to the actual power delivery, one may have seen the feature below in some of our photos.

PhoenixNAP Lightning Protection

The facility is in the desert, but occasional “monsoon” rainstorms do come through. Those storms bring lightning, so the walls around the entire facility have air terminals designed to intercept lightning strikes and protect the facility. Although it may have made for interesting photos and video, there was no rain in the forecast for the two weeks after our visit. Admittedly, this was a fun detail we wanted to share, and this was the closest section in which to do so.

Final Words

We often discuss “data centers,” but we usually cannot film and tour them. From the outside, most data centers look like either office buildings or industrial warehouses/ factories. There are many different types of facilities out there. While we often discuss differences in terms of rack power capacity and total facility megawatts, there is a lot more that goes into data centers. A great example of this is how facilities have to be matched to their environments. As we move into an era where 5G pushes more equipment from traditional cloud data centers back towards edge data centers, this becomes a bigger topic. Hopefully, this article and video give our readers some idea of just how much goes into a data center.

Thank you to PhoenixNAP for letting us tour the data center and helping with travel, and to the PhoenixNAP team for showing us around. Most data centers we visit are very different, but it is great to be able to show one on camera. Some of our readers deal mostly with hardware or software and do not get to see behind the scenes at facilities to learn what the key components are.

Joe Filming At PhoenixNAP

Also, I just wanted to say a quick thank you to Joe for coming down to Phoenix from Seattle to film and edit the video portion of this piece. We did this just as COVID-era restrictions eased and after we were both vaccinated. Still, it took some extra effort to get this done.

As always, please let me know if this is something you would like to see, and we can work on doing more data center tours in the future.

10 COMMENTS

  1. STH “hey we’re going to do something new”
    Me reading: “holy moly they’ve done one of the best tours I’ve seen in their first tour.”

  2. Fantastic article and especially video. Plenty of good practices to learn from. I’m surprised they signed off on this; if I were a customer I wouldn’t be too happy. But a fascinating look into a world that is usually off-limits.

  3. We manage ~3000 servers across many facilities. I’ll say from a customer perspective I don’t mind this level of tour. I don’t want someone inventorying our boxes but this isn’t a big deal. They’re showing racks, connectivity, hallways, cooling and power. The only other one I’d really be sensitive about is if they took video inside the office an on-site tech uses, because there may be whiteboards or papers showing confidential bits. We use Supermicro and Dell servers. No big secret.

  4. I just semi-retired from a job of 9 years supporting 100+ quants, ~5K servers in a small/medium-sized financial firm’s primary data center, in the suburbs of a major US city.

    I gave tours every summer for the city-based staff…They had no idea of the amount of physical/analog gear required to deliver their nice digital world.

    …The funniest part was that each quant had their own server, which they used to run jobs locally and send jobs into the main cluster…They always wanted to see THEIR server!

  5. Always neat to see another data center; it reminds me of the time I worked in one a decade ago.

    I am surprised by the lack of solar panels on the roof or other infrastructure areas outdoors. It is a bit of an investment to build out, but it helps efficiency a bit. In addition, it can assist when the facility is on UPS power and needs to transition to generators during the day. Generally this is seen as a win-win in most modern data centers. I wonder if there was some other reason for not retrofitting the facility with them?

    The generators were mentioned briefly but I fathom an important point needs to be made: they are likely in an N+2 redundant configuration. Servicing them can easily take weeks, which is far too long to go without any sort of redundancy. Remember that redundancy is only good for when you have it, and any loss needs to be corrected immediately before the next failure occurs. Most things can be resolved relatively quickly, but generators are one item that can take extended periods of time. This leads me to wonder why there are only two UPS battery banks. I would have thought there would be three or four, with a similar level of redundancy overall and more transfer switches (which themselves can be a point of failure) to ensure uptime. Perhaps their facility UPS is more ‘modular’ in that ‘cells’ can be replaced individually in a functional UPS without taking down the whole UPS?

    While not explicitly stated as something data centers should have in their designs, keeping a perpetual rotation of buildout rooms has emerged in practice. This is all in pursuit of increasing efficiency, which pays off every few generations. My former employer a decade ago had moved to enclosed hot/cold aisles in the new buildout room to increase efficiency. As client systems were updated, they went into the new room, and eventually we’d have a void where there were no clients and the facilities team would then tear the room down for the next wave of upgrades.

    The security aspect is fun. Having someone, be it a guard or a tech, watch over third parties working on racks is commonplace. The one thing to note is how difficult it is to get an external contractor in there (think Dell or HPE techs as part of a 4-hour SLA contract) vs. normal client staff to work on bare metal. Most colocations can do it, but you have to work with them ahead of time to ensure it can happen in the time frame necessary to adhere to those contracts. (Dell/HPE etc. may not count the time it takes to get through security layers toward those SLA figures.)

    There is a trend to upgrade CRAC systems for greater cooling capacity as compute density increases. 43 kW per rack is a good upper bound for most systems today, but the extreme scenarios have 100 kW racks on the horizon. Again, circling back to the buildout room idea, perhaps it is time to migrate to water cooling at rack scale?

    I probably couldn’t tell from the pictures or the video, but one of the wiser ideas I’ve seen done in a data center is to color code everything. Power has four dedicated colors for feeds, plus a fifth color used exclusively as a quick spare in emergencies, for example. All out-of-band management cables had their own dedicated colors vs. those used for primary purposes. If a cable had an industry-standard color (fiber), the tags on it were color coded on a per-tenant basis for identification. Little things like that go a long way in service and maintainability.

  6. Thank God STH did this. I was like oh that’s a security nightmare. You did a really good job showing a lot while not showing anything confidential.

    What I’d like to know and see is how this compares to EU and Singapore. Are those big facilities or like a small cage in another DC? How is the network between them?

    Equinix Metal is getting a lot of wins.

  7. STH is the perfect site for datacentre reviews. You’ve obviously been to others and you’re not old fogey boring like traditional datacentre sites.

  8. Patrick,

    If it’s possible, I suggest touring a SuperNap (Switch) facility if they’d allow for it.

    That is one of the most amazing DCs I’ve ever been in. Their site does have some pics. It’s amazing.

    Philip
