Microsoft is tackling a number of engineering challenges simultaneously with Microsoft Azure Project Natick. Project Natick is the company’s foray into submerged, environmentally friendly data centers that aims to lower operational costs while also providing lower latency. For those wondering, “Natick” is a codename taken from the town in Massachusetts. The company has progressed to larger and larger installations and is now showing off a third-generation submerged data center roughly twelve times larger than the previous attempts.
Microsoft Project Natick and Natick 2
Natick began as an early concept in which the company would submerge a rack of servers, housed in a steel shell, into the ocean. The idea was to use servers fairly similar to what would be deployed on land, modified for marine use. These can be deployed in the ocean within 200km of users, which is roughly equivalent to a 1ms one-way hop over the Internet. Effectively, this gives undersea data centers a 2ms or lower round-trip latency to half of the world’s population. You can learn more about the first version in Microsoft’s video here:
This first proof of concept was built and deployed off the coast of California within about a year, and it operated for 105 days.
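As a quick sanity check on that 200km / 1ms figure: light in optical fiber travels at roughly two-thirds the speed of light in vacuum, or about 200,000 km/s. Here is a minimal sketch using that rule-of-thumb figure (our numbers, not Microsoft's):

```python
# Propagation delay over optical fiber, ignoring routing, switching,
# and serialization overhead.
SPEED_IN_FIBER_KM_PER_S = 200_000  # ~0.66c, a common rule-of-thumb value

def one_way_latency_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds for a fiber run."""
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

print(one_way_latency_ms(200))      # -> 1.0 ms one-way
print(one_way_latency_ms(200) * 2)  # -> 2.0 ms round trip
```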
About two years later, Microsoft Research took the learnings from the original Natick and submerged 14 racks of servers, instead of just one, in a single pod off the coast of Scotland. That region is significant not for its population density, but because it is where a lot of renewable energy sources such as solar, offshore wind, and tidal/wave power are deployed. Here is the video for the second-generation undersea data center:
After well over a year of operating this second-generation Natick data center, the company has found a number of interesting attributes.
In terms of power usage effectiveness, the ratio of total facility power to the power delivered to IT equipment, the PUE of Natick is 1.07, which is very low for a current in-production data center. A lower PUE can mean lower operational costs and also makes a data center more environmentally friendly.
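To put that figure in perspective, here is a minimal sketch of what a PUE implies about overhead power. The 1MW IT load and the 1.60 comparison PUE are our illustrative assumptions, not Microsoft figures:

```python
# PUE = total facility power / IT equipment power, so the non-IT
# overhead (cooling, power conversion, etc.) = IT load * (PUE - 1).

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Non-IT power (kW) implied by a given PUE for a given IT load."""
    return it_load_kw * (pue - 1.0)

# Hypothetical 1MW IT load, purely for illustration:
print(overhead_kw(1000, 1.07))  # Natick 2's PUE -> ~70 kW of overhead
print(overhead_kw(1000, 1.60))  # a less efficient facility -> 600 kW
```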
Water usage of Natick 2 is “zero.” That compares to a traditional land-based data center, which Microsoft says uses 4.8L of water per kWh. Of course, the ocean is made of water and is being used here, but in a different way. Water usage has become a major topic in conservation discussions as data centers grow in power consumption and, therefore, water usage. There are drought-prone regions, even near the shoreline, that cannot necessarily handle the water needs of next-generation data centers, so this is a big deal.
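To get a sense of scale for that 4.8L/kWh figure, here is a rough sketch. The 5MW constant load is borrowed from the Natick gen 3 capacity discussed later in this article, purely as an illustration:

```python
# Annual water use implied by Microsoft's 4.8 L/kWh figure for a
# land-based data center running at a constant load.
WATER_L_PER_KWH = 4.8

def annual_water_liters(load_mw: float) -> float:
    """Approximate yearly water consumption for a constant load."""
    kwh_per_year = load_mw * 1000 * 24 * 365
    return kwh_per_year * WATER_L_PER_KWH

# A 5MW land-based facility implies on the order of 210 million liters
# per year; the equivalent Natick figure is zero.
print(f"{annual_water_liters(5.0):,.0f} L/year")  # -> 210,240,000 L/year
```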
The cylinders are hermetically sealed, which means the servers and IT equipment do not have to deal with humidity and dust. Also, given how well water conducts heat, and how large the oceans are, once an installation goes deep enough (say 200m), constant, cool ambient temperatures can be maintained without the fluctuations we see in land-based data centers. If you saw our Inside the Intel Data Center at its Santa Clara Headquarters piece from years ago, Intel leverages the temperate climate of California’s Bay Area to get “free” cooling; however, there are still some days that require air conditioning to cool its servers.
Microsoft took servers from the same batches and deployed them in traditional land-based data centers as a control group. These environmental factors have led to 7x better reliability for the undersea versions over the first 16 months or so. Natick 2 is designed to operate for five years without maintenance.
That is a big deal since the undersea Project Natick data centers cannot be serviced. Submerged data centers are truly lights-out facilities, so these reliability factors are extremely important.
One other interesting fact Microsoft offers is that the Natick concept is very fast to build and deploy. From the go-ahead decision through assembly, deployment, and going live running Azure services takes only 90 days. That contrasts with 18-24 months for traditional data centers. Microsoft did not cite this, but while Natick is currently deployed in coastal Scottish waters, it can, in theory, be deployed in international waters as well, likely reducing regulatory hurdles even further.
Microsoft Project Natick Gen 3
With some time operating Natick 2, Microsoft is starting to discuss Natick gen 3. Here, the concept is to tie multiple cylinders together, with shared networking and power, on a large steel lattice structure. The total structure is under 300 feet long.
Each side of the structure has ballast tanks for transporting and deploying the setup. That can make the solution easier to deploy, potentially requiring smaller support vessels.
These twelve cylinders have around 5 megawatts of data center capacity. Microsoft says this is enough for a mini Azure region. Microsoft also says these 5MW designs for Natick can be grouped together to form larger undersea availability zones.
Spurred by Microsoft’s initial findings, there are now a number of startups looking to submerge data centers and also to put data centers on barges. As an example, Nautilus Data Technologies closed a $100M credit facility earlier this year to build a 6MW barge data center in Stockton, California. It claims a 1.07 PUE (rumors are that the target is much better than that). One of the advantages of these undersea (with ballast tanks) and barge data centers is that you could, in theory, even redeploy the data center to a different location during periods of high demand.
One thing is for sure: the data center industry is looking at many options to deal with next-generation components that are hotter and more power-hungry, parts that will make today’s servers look like low-power devices in comparison. As servers move to a new generation of these components, cooling approaches like Natick will only become more relevant.
Since this is a weekend article we usually reserve for fun, one must, of course, ask whether there will be 21st-century pirates plundering the future data centers of the high seas. Companies like Microsoft are investing in technologies such as silicon root of trust to ensure hardware supply chains are not compromised. Perhaps this is part of the overall strategy.