Touring the PhoenixNAP Data Center

PhoenixNAP Connectivity

The first stop in this hallway is the meet-me room (MMR). As a somewhat cool touch, the names of the various network providers that come into the facility are etched into the glass in front of this network hub.

PhoenixNAP Connectivity Partners

In the facility, one will also see the AWS and Google Cloud connectivity options. These are high-speed direct connections into the public cloud networks. AWS, for example, has a cage with an additional set of security protocols and checks, such as requiring security to be present whenever the cage is worked on. We could not take photos of that cage, but it is an important feature. AWS and GCP only extend their networks to significant data centers since it requires a large buildout. For customers, this is important since it allows them to colocate specific machines or base infrastructure and still have a direct path to leverage cloud resources.
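To make that colocation-to-cloud path a bit more concrete, here is a minimal sketch, in Python with boto3, of how a customer might request an AWS Direct Connect connection once their gear sits in a facility with a Direct Connect presence. This is not PhoenixNAP's or AWS's documented workflow for this particular site; the region, location code, and connection name below are hypothetical placeholders, and the real process also involves the facility running a physical cross-connect to the provider's cage.

```python
# Minimal sketch: requesting an AWS Direct Connect connection from colocated gear.
# The region, location code, and connection name are hypothetical placeholders.
import boto3

dx = boto3.client("directconnect", region_name="us-west-2")

# List the Direct Connect locations the region knows about so the customer can
# find the code for the facility they are colocated in.
for loc in dx.describe_locations()["locations"]:
    print(loc["locationCode"], "-", loc["locationName"])

# Request a dedicated 1 Gbps connection at the chosen location (placeholder values).
conn = dx.create_connection(
    location="EXAMPLE1",                   # hypothetical location code
    bandwidth="1Gbps",
    connectionName="colo-to-aws-example",  # hypothetical connection name
)
print(conn["connectionId"], conn["connectionState"])
```

Once the provider accepts a request like this, the facility provisions the physical cross-connect from the customer's cage to the cloud provider's cage, which is exactly the kind of handoff that happens in the meet-me room cages pictured in this section.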

PhoenixNAP MMR Cages 1

For many years, Phoenix has attracted financial and other institutions that chose the area for its lower costs and lack of natural disasters. That has helped the city build up a lot of connectivity options, and it seems most of those providers have a presence here.

PhoenixNAP MMR Cages 2

As a fun note, getting into this room requires additional security checks, including a vascular (blood vessel) scan of one’s wrist.

One item that is important to many of our readers is roof access. Some data centers allow customers to mount fixed wireless, radio, satellite, free-space optics, and other gear on the roof. Others do not. We did not get to go to the roof, but we could see gear installed, and the PhoenixNAP team confirmed this was an offering for its customers.

PhoenixNAP Roof Mounting Access

In the video and the photos, you are likely to see a large number of conduits above the racks. From the meet-me room, the various network providers have connectivity to the racks as data moves throughout the facility. A fun detail: in addition to standard shielding on copper or fiber cables, there are Kevlar "armored" cables running through the facility for customers who want a bit more protection.

PhoenixNAP Kevlar Shielded Cables

The sheer amount of wiring is something that, if you have never been to a data center before, can seem borderline crazy. The wiring is extremely important, as connectivity is a primary function of the data center. One can see more of this as we get further into the facility and in the video.

PhoenixNAP Cameras And Cables

As a quick note here, we are not stopping to take photos of every camera. There are tons of them throughout the facility, and they are monitored. We just wanted to note that there is indeed video surveillance in data centers.

Next, let us get to the data center halls.

10 COMMENTS

  1. STH “hey we’re going to do something new”
    Me reading: “holy moly they’ve done one of the best tours I’ve seen in their first tour.”

  2. Fantastic article and especially video. Plenty of good practices to learn from. I’m surprised they signed off on this; if I were a customer, I wouldn’t be too happy. But a fascinating look into a world that is usually off-limits.

  3. We manage ~3000 servers across many facilities. I’ll say from a customer perspective I don’t mind this level of tour. I don’t want someone inventorying our boxes, but this isn’t a big deal. They’re showing racks, connectivity, hallways, cooling, and power. The only other thing I’d really be sensitive about is if they took video inside the office an on-site tech uses, because there may be whiteboards or papers showing confidential bits. We use Supermicro and Dell servers. No big secret.

  4. I just semi-retired from a job of 9 years supporting 100+ quants, ~5K servers in a small/medium-sized financial firm’s primary data center, in the suburbs of a major US city.

    I gave tours every summer for the city-based staff…They had no idea of the amount of physical/analog gear required to deliver their nice digital world.

    …The funniest part was that each quant had their own server, which they used to run jobs locally and send jobs into the main cluster…They always wanted to see THEIR server!

  5. Always neat to see another data center; it reminds me of the time I worked in one a decade ago.

    I am surprised by the lack of solar panels on the roof or other infrastructure areas outdoors. It is a bit of an investment to build out but helps efficiency. In addition, it can assist when the facility is on UPS power and needs to transition to generator during the day. Generally this is seen as a win-win in most modern data centers. I wonder if there was some other reason for not retrofitting the facility with them?

    The generators were mentioned briefly, but I fathom an important point needs to be made: they are likely in an N+2 redundant configuration. Servicing them can easily take weeks, which is far too long to go without any sort of redundancy. Remember that redundancy is only good when you have it, and any loss needs to be corrected immediately before the next failure occurs. Most things can be resolved relatively quickly, but generators are one that can take extended periods of time. This leads me to wonder why there are only two UPS battery banks. I would have thought there would be three or four with a similar level of redundancy as a whole, and an increased number of transfer switches (which themselves can be a point of failure) to ensure uptime. Perhaps their facility UPS is more ‘modular’ in that ‘cells’ can be replaced individually in the functioning UPS without taking down the whole unit?

    While not explicitly stated as something data centers should have in their designs, keeping a perpetual rotation of buildout rooms has emerged in practice. This is all in the pursuit of increasing efficiency, which pays off every few generations. My former employer, a decade ago, had moved to enclosed hot/cold aisles to increase efficiency in the new buildout room. As client systems were updated, they went into the new room, and eventually we’d have a void where there were no clients, and the facilities team would then tear the room down for the next wave of upgrades.

    The security aspect is fun. Having someone, be it a guard or a tech, watch over third parties working on racks is commonplace. The one thing to note is how difficult it is to get an external contractor in there (think Dell or HPE techs as part of a 4-hour SLA contract) vs. normal client staff to work on bare metal. Most colocations can do it, but you have to work with them ahead of time to ensure it can happen in the time frame necessary to adhere to those contracts. (Dell/HPE etc. may not count the time it takes to get through security layers toward those SLA figures.)

    There is a trend to upgrade CRAC systems for greater cooling as compute density increases. 43 kW per rack is a good upper bound for most systems today, but the extreme scenarios have 100 kW rack possibilities on the horizon. Again, circling back to the buildout room idea, perhaps it is time to migrate to water cooling at rack scale?

    I probably couldn’t tell in the pictures or the video, but one of the wiser ideas I’ve seen done in a data center is to color code everything. Power has four dedicated colors for feeds, plus a fifth color used exclusively as a quick spare in emergencies, for example. All out-of-band management cables had their own dedicated colors versus the primary-purpose cabling. If a cable had an industry-standard color (fiber), the tags on it were color coded on a per-tenant basis for identification. Little things that go a long way in serviceability and maintainability.

  6. Thank God STH did this. I was like oh that’s a security nightmare. You did a really good job showing a lot while not showing anything confidential.

    What I’d like to know and see is how this compares to the EU and Singapore sites. Are those big facilities or like a small cage in another DC? How is the network between them?

    Equinix Metal is getting a lot of wins.

  7. STH is the perfect site for datacentre reviews. You’ve obviously been to others and you’re not old fogey boring like traditional datacentre sites.

  8. Patrick,

    If it’s possible, I suggest touring a SuperNap (Switch) facility if they’d allow for it.

    That is one of the most amazing DCs I’ve ever been in. Their site does have some pics. It’s amazing.

    Philip
