While the bulk of NVIDIA’s focus at their most recent GTC 2026 trade show has been on their forthcoming Vera Rubin platform for obvious reasons, the company is still in the middle of delivering its Grace Blackwell family of offerings. This includes the GB300 Blackwell Ultra accelerator itself, as well as systems and racks built around NVIDIA’s second iteration of their flagship server processor.
Among the Blackwell products that NVIDIA is finally getting ready to deliver is the DGX Station, NVIDIA’s workstation-sized Grace Blackwell box. First announced during GTC 2025 alongside the DGX Spark, the DGX Station is an OEM system specification that partners can follow to build complete workstations around NVIDIA’s server-grade processor. The fundamental idea is that the DGX Station is the next step up from both the tiny DGX Spark and traditional x86 workstations that pair a CPU with a discrete video card over PCIe. In short, the DGX Station is meant to be as close to a Grace Blackwell server as an individual workstation can get.
Now, a full year later, the DGX Station is finally coming to market, but with a big change. With the start of this week’s show, NVIDIA’s partners have begun taking orders for their respective systems, just in time for buyers to get their claws on one.
NVIDIA DGX Station Shipping Specifications
As with devices based on NVIDIA’s GB10 processor for SFF systems, NVIDIA’s DGX Station program has NVIDIA strongly defining the specifications of the systems that partners will be allowed to sell. At the heart of the system are NVIDIA’s Blackwell Ultra GPU and 72-core Grace CPU. Both are soldered to the motherboard, as NVIDIA is not using SXM modules here. They are paired with 252GB of HBM3e and 496GB of LPDDR5X memory, respectively.
Of particular note, the working HBM3e memory capacity of the shipping DGX Station is a downgrade from NVIDIA’s original 2025 specifications, a late change made just before NVIDIA GTC 2026. In 2025, the specifications called for the workstation boxes to ship with a fully-enabled B300 complete with 288GB of HBM3e and 8TB/s of memory bandwidth. Instead, it would appear that NVIDIA has shifted to putting salvaged B300 chips into the DGX Station, as the revised specs are consistent with only seven of the eight HBM3e stacks enabled. As a result, NVIDIA’s workstation-sized GB300 platform has about 12% less memory (252GB) and memory bandwidth (7.1TB/s) to work with than a full GB300 server part.
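As a back-of-the-envelope check of the salvage theory, the numbers line up if each HBM3e stack contributes an equal share of the announced 288GB/8TB/s fully-enabled configuration (an assumption on our part, sketched below):

```python
# Back-of-the-envelope check of the reduced DGX Station memory specs,
# assuming each of the eight HBM3e stacks contributes an equal share of
# the originally announced 288 GB / 8 TB/s fully-enabled B300 figures.
FULL_STACKS = 8
PER_STACK_GB = 288 / FULL_STACKS   # 36.0 GB per stack
PER_STACK_TBS = 8 / FULL_STACKS    # 1.0 TB/s per stack

stacks_enabled = 7                 # one of eight stacks fused off
capacity_gb = stacks_enabled * PER_STACK_GB        # 252.0 GB
bandwidth_tbs = stacks_enabled * PER_STACK_TBS     # 7.0 TB/s
reduction_pct = (1 - stacks_enabled / FULL_STACKS) * 100  # 12.5%

print(capacity_gb, bandwidth_tbs, reduction_pct)
```

Seven of eight stacks yields exactly the 252GB figure in the shipping specs, with a 12.5% cut to capacity and bandwidth.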
| NVIDIA DGX Station Key Specs | |
|---|---|
| CPU | NVIDIA Grace (72C/72T) |
| GPU | NVIDIA B300 Blackwell Ultra |
| Operating System | NVIDIA DGX OS |
| Memory | CPU: 496GB LPDDR5X SOCAMM<br>GPU: 252GB HBM3e |
| Storage | Optional |
| Video Card | Optional: NVIDIA RTX PRO Blackwell 2000/4000/6000 |
| PSU | 1600W |
| Form Factor | Tower Workstation |
| Networking | ConnectX-8 800Gbps Ethernet<br>10Gb Ethernet (AQC-113C)<br>1Gb Ethernet (BMC) |
| Ports | Front: 2x USB-C 10Gbps, 2x USB-A 10Gbps, 1x Combo Audio<br>Rear: 4x USB-A 10Gbps, 1x USB Micro-B (BMC), 2x 400GbE (QSFP112), 1x 10GbE LAN (RJ45, AQC-113C), 1x 1GbE LAN (RJ45, BMC), 3x Audio, 1x Mini DisplayPort (BMC) |
On the networking side, all DGX Station systems include a ConnectX-8 NIC with dual 400Gbps Ethernet ports (QSFP112) for high-speed networking. This is joined by a 10GbE NIC (RJ45) for general networking and a 1GbE NIC for BMC management.
NVIDIA is rigorously defining the I/O requirements as well. Internally, all DGX Station systems come with four M.2 SSD slots running at PCIe Gen5 speeds, as well as a trio of PCIe Gen5 x16 slots: one running at full x16, and the other two running at x8. Even the front and back panel connectors are defined by NVIDIA, with systems including multiple USB Type-A and Type-C ports on the front, as well as more USB-A ports on the rear, and ports for interfacing with the BMC.

For users who need a DGX Station with graphics capabilities (something Blackwell Ultra lacks), NVIDIA also supports a limited selection of add-in video cards: the Blackwell editions of the RTX Pro 2000, RTX Pro 4000, and RTX Pro 6000.
A complete DGX Station is designed to run at up to 1.6kW, which is about the limit of what a North American 120V outlet can provide. So it is not much of a stretch to say that NVIDIA has packed in about as much compute as a desktop system can hope to power. If anything, it is a bit of a surprise that they did not have to detune the compute performance of the B300 a bit to make it all work.
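For those curious how tight that power envelope is, a quick calculation using the common North American convention that continuous loads should draw no more than 80% of a branch circuit’s rating (our assumption, not something NVIDIA has stated) shows a 1.6kW system pushing past a standard 15A outlet and landing comfortably on a 20A circuit:

```python
# Rough check of how a 1.6 kW system fits North American 120 V circuits,
# using the common 80% continuous-load derating as an assumption.
VOLTS = 120

def usable_watts(amps, derate=0.8):
    """Continuous power available from a circuit of the given amperage."""
    return VOLTS * amps * derate

w15 = usable_watts(15)  # 1440 W: a standard 15 A outlet falls short of 1.6 kW
w20 = usable_watts(20)  # 1920 W: a 20 A circuit covers the 1600 W PSU
print(w15, w20)
```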

As with the DGX Spark, NVIDIA is also providing the software stack for DGX Station systems, which will run the titular DGX OS. This continues to be an NVIDIA customized distribution of Ubuntu, which is currently based on Ubuntu 24.04.
Partner Systems
For the release of the DGX Station, several of NVIDIA’s regular partners have stepped up to provide systems based on the platform. ASUS, Dell, Gigabyte, HP, MSI, and Supermicro are among the vendors confirmed to be offering systems. Over the next few days, we will highlight many of the systems we saw at the show.

Compared to the GB10 systems we have seen so far, the DGX Station systems offered show a bit more variety. Still, all of these are full tower desktop systems that need to be large enough to accommodate NVIDIA’s larger-than-ATX motherboard, as well as NVIDIA’s front panel I/O requirements.

As for how much a DGX Station system will set you back, most of the participating system vendors are not even publishing list prices for the systems. Dell, MSI, Supermicro, and others are not taking direct orders through their websites. Instead, potential customers need to make sales inquiries.

Suffice it to say, the high demand for GB300 already makes it an expensive piece of kit, never mind the current crunch on DRAM and NAND supplies that further adds to those prices.
Supermicro To Offer a GB200 System for HPC Developers
Alongside the GB300-based DGX Station systems, Supermicro, in particular, has a second desktop Grace Blackwell system that warrants a mention. Though it is not sold as a DGX Station, the company will be offering a version of their Station with NVIDIA’s Blackwell B200 accelerator, the predecessor to the B300 Blackwell Ultra used in the official DGX Stations.
The GB200 Developer Kit-based system, as NVIDIA is branding it, is exclusive to Supermicro. And it is largely the same as a DGX Station, with the exception of the Blackwell GPU driving it.

Although unconventional at face value, there is an important purpose in having one of their partners offer a GB200-based system: the HPC market. One of the major architectural changes between the B200 and B300 GPUs is FP64 vector and FP64 tensor performance. The B300 trimmed FP64 performance to 1/32nd that of the B200, making the B200 better suited for HPC work. As a result, the B200 remains NVIDIA’s fastest accelerator for the HPC market at this time, combining high-speed HBM with far stronger FP64 throughput than the B300.
The older Blackwell GPU also comes with some similarly reduced specifications. Most notably, it has 186GB of HBM3e memory, versus the 252GB of the B300 in the shipping DGX Stations. This means that Supermicro’s GB200 system will be serving a very specific niche of the market, which is likely why there is only one system vendor offering it.
Like the DGX Station systems, Supermicro’s GB200 Super HPC Station is slated to be available this quarter.
Final Words from Patrick
I thought I would jump in and offer some final words and market context on this one. Like the NVIDIA GB10 platforms, the DGX Station is a motherboard with onboard compute and memory designed and sold by NVIDIA and packaged by OEMs. We also expect that, should a board-level RMA occur, NVIDIA would be involved in repairs. The idea of this machine is that it is more of a workgroup server, as by default, there are no HDMI or DisplayPort outputs. There were folks looking at the DGX Station in conjunction with the NVIDIA RTX Pro 6000 Blackwell GPUs, but we were told that power sharing would still cap the system: power drawn by the PCIe GPU reduces what is available to the Grace CPU and B300 GPU, keeping the entire board within its power limits.
The higher-end of the pricing scale we heard for these systems was in the $120-125K range. We heard of others in the $100K range, but those prices may be impacted by list versus deal discounts. It is highly unlikely these will sell for under $80-85K. While that is expensive, we have seen with the DGX Spark and NVIDIA GB10 systems that running large local models and keeping data on-prem is awesome and can save a lot of money. We have multiple good-sized (400-500GB with KV cache) models running locally, generating well over 10M tokens per day on slower systems. The idea of having one or more of these for a developer workgroup is highly appealing to folks who want to keep data private while still running large workflows. I actually expect these to sell very well.
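To put that 10M-tokens-per-day figure in perspective, a quick conversion (purely illustrative arithmetic on the number quoted above) translates it into a sustained rate:

```python
# Convert the quoted daily token volume into a sustained per-second rate,
# purely to put the 10M tokens/day figure in perspective.
tokens_per_day = 10_000_000
seconds_per_day = 24 * 60 * 60  # 86,400 seconds in a day
tokens_per_second = tokens_per_day / seconds_per_day
print(round(tokens_per_second, 1))  # ~115.7 tokens/s sustained around the clock
```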
Availability should start soon-ish, but some vendors will take longer to get to market. HP, for example, told me in the booth that their system was targeting August. We will have more on the systems in our series of NVIDIA GTC 2026 booth tours coming next week.