Touring the PhoenixNAP Data Center

Inside the PhoenixNAP Data Center Data Halls

Getting into the data halls with racks of servers requires passing another set of security doors. Once on the floor, each customer's gear has at least one more layer of security, whether that is a cage or locks on the cabinets. Networking and power are dropped into place from above. In this section, we are going to focus on the data halls themselves.

PhoenixNAP Floor 1 Corridor

As one can see, the facility is large. The facility we are touring in Phoenix is around 200,000 sq ft, and the company has a parcel of land next door where it is building a facility more than twice that size. Most of these photos are from the first floor, but there is a second floor in the facility as well.

PhoenixNAP Floor 1 Corridor 2

In terms of cages, we saw a fairly wide mix. Many customers, as well as the areas for the company's bare metal cloud and other hosting offerings, had their own cages. These cages often had additional security measures such as keys, RFID badges, and PIN keypads, and some were simply bolted to the floor. In the video, we show an example of a cage used for payment processing racks that is capped on both the top and bottom so one cannot simply remove floor tiles to get inside.

PhoenixNAP Secure Cage

Here is an aisle for smaller colocation customers. One can see full cabinets (left) along with quarter and half cabinets (right). This type of aisle holds a special connection for me personally. Years ago, when we moved STH off of AWS, we started in a quarter cabinet. We then moved to a half cabinet briefly before going to a full cabinet. We now have full cabinets in multiple facilities, both for hosting and for testing servers.

PhoenixNAP Colocation Cold Aisle 1

As one can see, we have perforated tiles down the cold aisles and air vents on the hot aisle sides. There are two things our readers will likely notice at this point if they have been to many facilities. First, the racks here do not use any hard hot/cold aisle containment; we showed that off a bit when we toured Intel's Santa Clara fab-turned-data-center. Second, the ceilings are lower than in some other facilities, especially single-floor data centers. We were told the facility can handle 48U racks, but most of the racks we saw appeared to be 42U. We also asked about power: although 208V 30A (~6kW) racks are very popular here (and are probably the most popular in North America), the facility can handle 44kW racks.
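For readers who want to sanity-check those power figures, here is a minimal sketch of the arithmetic behind the ~6kW number for a 208V 30A feed. The 80% continuous-load derating in the second line is a common North American practice and is our assumption, not something PhoenixNAP stated.

```python
# Back-of-the-envelope rack power math. The 208V/30A feed comes from the tour;
# the 80% continuous-load derating is a common practice and is an assumption
# here, not a PhoenixNAP specification.

def rack_power_kw(voltage_v: float, current_a: float, derate: float = 1.0) -> float:
    """Single-phase rack feed power in kW, optionally derated."""
    return voltage_v * current_a * derate / 1000.0

print(f"208V/30A circuit rating: {rack_power_kw(208, 30):.2f} kW")      # ~6.2 kW, the '~6kW' rack
print(f"208V/30A at 80% load:    {rack_power_kw(208, 30, 0.8):.2f} kW") # ~5.0 kW usable continuous
```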

PhoenixNAP Floor 1 Ceiling Hot Air And Power

The facility in Phoenix is kept at 72F +/- 5F and 45% RH +/- 15%. For those wondering: since Phoenix is in the desert, humidity is a challenge. The dry ambient air is conducive to static buildup, and that is not what one wants in a data center. As a result, there are humidifiers strategically spaced throughout the facility.
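As a rough illustration of what those setpoints mean, the short sketch below checks sensor readings against the 72F +/- 5F and 45% +/- 15% RH bands quoted above. The helper function and the readings are made up for this example and are not part of PhoenixNAP's monitoring stack.

```python
# Hypothetical environmental check using the bands quoted in the article
# (72F +/- 5F, 45% RH +/- 15%); not PhoenixNAP's actual monitoring code.

TEMP_SETPOINT_F, TEMP_TOL_F = 72.0, 5.0
RH_SETPOINT_PCT, RH_TOL_PCT = 45.0, 15.0

def in_band(reading: float, setpoint: float, tolerance: float) -> bool:
    """True if a reading falls within the +/- tolerance band around the setpoint."""
    return abs(reading - setpoint) <= tolerance

# Made-up readings from a data hall sensor.
temp_f, rh_pct = 75.2, 32.5
print("Temperature in band:", in_band(temp_f, TEMP_SETPOINT_F, TEMP_TOL_F))  # True (67F-77F)
print("Humidity in band:   ", in_band(rh_pct, RH_SETPOINT_PCT, RH_TOL_PCT))  # True (30%-60%)
```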

PhoenixNAP Humidifier 1

These humidifiers expel fine puffs of moisture that help raise the humidity in the data halls. This is not the only water circulating in the facility; we will cover the cooling in more detail later in this article.

PhoenixNAP started by building out the first floor but is now filling the data room on the second floor.

PhoenixNAP 2nd Floor Build Out

Here is a portion of the data hall. I managed to capture Joe, who was filming the video, for some scale.

PhoenixNAP 2nd Floor Build Out Space

We had the opportunity to see many different types of gear, including standard servers, GPU servers, blade servers, high-capacity and high-performance storage arrays, and much more. We did not take close-ups of customer infrastructure. Personally, I always find this a bit funny: some of our own racks have little STH logos because I want folks who walk by to stop and look. Still, we wanted to respect the privacy of others.

After starting with connectivity and moving to the server halls, there are still two more components we want to focus on. Next, we are going to look at cooling before getting to the power infrastructure.

10 COMMENTS

  1. STH “hey we’re going to do something new”
    Me reading: “holy moly they’ve done one of the best tours I’ve seen in their first tour.”

  2. Fantastic article and especially video. Plenty of good practices to learn from. I'm surprised they signed off on this; if I were a customer I wouldn't be too happy. But a fascinating look into a world that is usually off-limits.

  3. We manage ~3,000 servers at many facilities. I'll say from a customer perspective I don't mind this level of tour. I don't want someone inventorying our boxes, but this isn't a big deal. They're showing racks, connectivity, hallways, cooling, and power. The only other thing I'd really be sensitive about is if they took video inside the office an on-site tech uses, because there may be whiteboards or papers showing confidential bits. We use Supermicro and Dell servers. No big secret.

  4. I just semi-retired from a job of 9 years supporting 100+ quants, ~5K servers in a small/medium-sized financial firm’s primary data center, in the suburbs of a major US city.

    I gave tours every summer for the city-based staff…They had no idea of the amount of physical/analog gear required to deliver their nice digital world.

    …The funniest part was that each quant had their own server, which they used to run jobs locally and send jobs into the main cluster…They always wanted to see THEIR server!

  5. Always neat to see another data center, and it reminds me of the time I worked in one a decade ago.

    I am surprised by the lack of solar panels on the roof or other outdoor infrastructure areas. It is a bit of an investment to build out, but it helps efficiency a bit. In addition, it can assist when the facility is on UPS power and needs to transition to generator during the day. Generally this is seen as a win-win in most modern data centers. I wonder if there was some other reason for not retrofitting the facility with them?

    The generators were mentioned briefly, but I fathom an important point needs to be made: they are likely in an N+2 redundant configuration. Servicing them can easily take weeks, which is far too long to go without any sort of redundancy. Remember that redundancy is only good while you have it, and any loss needs to be corrected before the next failure occurs. Most things can be resolved relatively quickly, but generators are one item that can take extended periods of time. This leads me to wonder why there are only two UPS battery banks. I would have thought there would be three or four, with a similar level of redundancy as a whole, and more transfer switches (which themselves can be a point of failure) to ensure uptime. Perhaps their facility UPS is more 'modular' in that 'cells' can be replaced individually without taking down the whole UPS?

    While not explicitly stated as something data centers should have in their designs, keeping a perpetual rotation of build-out rooms has emerged in practice. This is all in pursuit of increasing efficiency, which pays off every few generations. My former employer a decade ago had moved to enclosed hot/cold aisles in the new build-out room to increase efficiency. As client systems were updated, they went into the new room, and eventually we'd have a void where there were no clients; the facilities team would then tear the room down for the next wave of upgrades.

    The security aspect is fun. Having someone, be it a guard or a tech, watch over third parties working on racks is commonplace. The one thing to note is how difficult it is to get an external contractor in there (think Dell or HPE techs as part of a 4-hour SLA contract) versus normal client staff to work on bare metal. Most colocation providers can do it, but you have to work with them ahead of time to ensure it can happen in the time frame necessary to adhere to those contracts. (Dell/HPE etc. may not count the time it takes to get through the security layers toward those SLA figures.)

    There is a trend of upgrading CRAC systems for greater cooling capacity as compute density increases. 43 kW per rack is a good upper bound for most systems today, but the extreme scenarios have 100 kW racks on the horizon. Again, circling back to the build-out room idea, perhaps it is time to migrate to water cooling at rack scale?

    I couldn’t tell from the pictures or the video, but one of the wiser ideas I’ve seen done in a data center is to color code everything. Power, for example, has four dedicated colors for feeds plus a fifth color used exclusively for quick spares in emergencies. All out-of-band management cables had their own dedicated colors versus primary-purpose cables. If a cable had an industry-standard color (fiber), the tags on it were color coded on a per-tenant basis for identification. Little things that go a long way in service and maintainability.

  6. Thank God STH did this. I was like oh that’s a security nightmare. You did a really good job showing a lot while not showing anything confidential.

    What I’d like to know and see is how this compares to the EU and Singapore. Are those big facilities or more like a small cage in another DC? How is the network between them?

    Equinix Metal is getting a lot of wins.

  7. STH is the perfect site for datacentre reviews. You’ve obviously been to others and you’re not old fogey boring like traditional datacentre sites.

  8. Patrick,

    If it’s possible, I suggest touring a SuperNap (Switch) facility if they’d allow for it.

    That is one of the most amazing DCs I’ve ever been in. Their site does have some pics. It’s amazing.

    Philip
