Inside the PhoenixNAP Data Center Data Halls
Getting into the data center halls with racks of servers requires another set of security doors. Once on the floor, each customer's gear has at least one more level of security, whether that is a cage or locks on cabinets. Networking and power are dropped into the proper places from above. We are going to focus on the data center halls in this section.
As one can see, the facility is large. The current facility we are touring in Phoenix is around 200,000 sq ft, and the company has a parcel of land next door where it is building a facility more than twice that size. Most of these photos are from the first floor, but there is a second floor in the facility as well.
In terms of cages, we saw a fairly wide mix. Many customers, as well as the areas for the company's bare metal cloud and other hosting offerings, had their own cages. These cages often had additional security measures such as keys, RFID badges, and PIN keypads. Some of the units were simply bolted to the floor. In the video, we show an example of a type of cage used for payments processing racks where the cage is capped on both the top and bottom so one cannot simply remove floor tiles to get into the cage.
Here is an aisle for smaller colocation customers. One can see full cabinets (left) along with quarter and half cabinets (right). This type of aisle holds a special place for me personally. Years ago, when we moved STH off of AWS, we started in a quarter cabinet. We then moved to a half cabinet briefly before going to a full cabinet. We now have full cabinets in multiple facilities, both for hosting and for testing servers.
As one can see, we have perforated tiles down the cold aisles and air vents on the hot aisle sides. There are two things that readers who have been to many facilities will likely notice at this point. First, the racks here are not using any hard hot/cold aisle containment. We showed that off a bit when we toured Intel's Santa Clara fab-turned data center. Second, the ceilings are lower than in some other facilities, especially single-floor data centers. We were told they can handle 48U racks, but most of the racks we saw appeared to be 42U racks. We asked about power, and although 208V 30A (~6kW) racks are very popular (these are probably also the most popular in North America), the facility can handle 44kW racks.
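For those curious where the "~6kW" figure comes from, here is a minimal sketch of the arithmetic behind a 208V 30A rack feed. The 80% continuous-load derating is our assumption for illustration (a common North American convention), not a PhoenixNAP specification.

```python
# Rough rack power arithmetic for a 208V, 30A single-phase feed.
# The 80% derating is an illustrative assumption, not a facility spec.
voltage_v = 208   # line-to-line voltage
breaker_a = 30    # breaker rating
derate = 0.8      # common continuous-load derating factor

nameplate_kw = voltage_v * breaker_a / 1000   # ~6.2 kW, hence "~6kW" racks
usable_kw = nameplate_kw * derate             # ~5.0 kW of continuous draw

print(f"Nameplate: {nameplate_kw:.1f} kW, usable at 80%: {usable_kw:.1f} kW")
```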
The facility in Phoenix is kept at 72F +/- 5F and 45% RH +/- 15%. For those wondering, since Phoenix is in the desert, humidity is a challenge. The dry ambient air is conducive to static buildup, and that is not what one wants in a data center. As a result, there are humidifiers strategically spaced throughout the facility.
These humidifiers expel fine puffs of moisture that help introduce humidity back into the data center air. This is not the only water circulating in the facility, but we will cover more on cooling later in this article.
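As a simple illustration of what those setpoints mean in practice, here is a hypothetical check of a sensor reading against the stated 72F +/- 5F and 45% +/- 15% RH envelope. The function and sample values are purely illustrative and are not PhoenixNAP's monitoring code.

```python
# Hypothetical check against the stated envelope (72F +/- 5F, 45% RH +/- 15%).
def in_envelope(temp_f: float, rh_pct: float) -> bool:
    temp_ok = 67.0 <= temp_f <= 77.0   # 72F +/- 5F
    rh_ok = 30.0 <= rh_pct <= 60.0     # 45% RH +/- 15%
    return temp_ok and rh_ok

print(in_envelope(74.2, 38.0))  # True: inside the envelope
print(in_envelope(74.2, 22.0))  # False: too dry, static risk
```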
PhoenixNAP started by building out the first floor but is now filling the data room on the second floor.
Here is a portion of the data hall. I managed to capture Joe, who was filming the video, for some scale.
We had the opportunity to see lots of different types of gear, including standard servers, GPU servers, blade servers, high-capacity and high-performance storage arrays, and much more. We did not take close-ups of customer infrastructure. Personally, I always think this is a bit funny. Some of our racks in facilities have little STH logos because I want folks who walk by to stop and look. Still, for other customers, we wanted to respect their privacy.
After starting with connectivity and moving to the data halls, there are still two more components we want to focus on. Next, we are going to look at cooling before getting to the power infrastructure.