If you are in the market for new servers in 2026, you have probably already noticed that something feels off. Prices are up, lead times are stretched, and the cost dynamics that governed your purchasing decisions just a year or two ago have fundamentally shifted. This is not your imagination. It is the reality of a server market grappling with significant component shortages and pricing pressures that show no signs of abating in the second half of this year.
For folks planning and buying infrastructure, the challenge is clear: how do you provision the computing resources your organization needs when the economics have been turned on their head? The answer lies in strategy, and specifically in rethinking some of the assumptions that may have guided your hardware decisions in previous years. From timing your purchases strategically to reconsidering whether you truly need dual-socket configurations, there are concrete steps you can take to navigate this challenging landscape. Note that we are showing many AMD EPYC-based systems from different vendors in this guide that we did not purchase for review, and we should disclose that this article is sponsored by AMD. One of the big reasons is that we are actually deploying EPYC platforms, not Xeon, at STH right now, as will become clear in this guide. AMD EPYC is also rapidly taking market share, for reasons that will become apparent.
In a bit of a throwback, we wanted to discuss the current state of the server component market and provide actionable guidance on how to maximize your purchasing power in 2026. Whether you are DRAM-light or NAND-heavy, running legacy virtualization stacks or building fresh AI infrastructure, there is a path forward, but it requires thinking differently about your hardware strategy.
The Problem: Component Prices Are Sky High
Let us start with the uncomfortable truth. DRAM pricing has reached levels that would have been unthinkable just a few years ago. Some chief information officers are now reporting they are paying seven to eight times what they paid for equivalent memory modules in previous cycles. That is not a minor increase; it is a fundamental shift in the cost structure of server procurement.

The NAND market is not any better. As flash memory manufacturers continue to navigate supply chain constraints and shifting demand patterns driven by the AI boom, pricing has become increasingly unfavorable for buyers. The era of abundant, cheap solid-state storage appears to be on hiatus, and organizations need to factor this into their hardware planning.

Perhaps most concerning is that there is no end in sight. We keep hearing that pricing pressures will persist through the remainder of 2026, meaning this is not a situation where waiting a few months will yield better results. If you need servers, the math increasingly favors acting now rather than hoping for a market correction that may not arrive until 2027 or later.

This creates a fundamental challenge: organizations still need computing infrastructure to run their operations, pursue new initiatives like AI deployment, and replace aging hardware that has reached the end of its life. The solution is not to simply defer all purchases indefinitely. That strategy eventually catches up with you in the form of failing hardware, security vulnerabilities, and missed opportunities. Instead, it is about being smarter with how you approach your purchasing decisions.
Buy Earlier Rather Than Later
One of the most straightforward strategies in a high-price environment is to reconsider your timing. Most organizations operate on annual or quarterly budget cycles, and there is nothing unusual about that. It is simply how corporate finance works. However, this creates predictable patterns in the server market, with end-of-quarter periods traditionally offering the best opportunities for negotiated discounts.
If your memory and storage requirements are relatively modest, say, DRAM and NAND comprise less than a quarter of your total bill of materials, it may make sense to time your purchases for end-of-quarter discounting windows. Many organizations deploy servers for purposes like Active Directory infrastructure, where the workload does not require extensive memory channels or massive storage footprints. These deployments might only need a few CPU cores and perhaps 16GB RDIMMs, making them prime candidates for strategic timing.
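To make that threshold concrete, here is a minimal sketch in Python of the kind of back-of-the-envelope check we are describing. The line-item prices and the 25% cutoff in the snippet are purely illustrative assumptions, not figures from any real quote:

```python
# Back-of-the-envelope BOM check: is this server "memory/storage light" enough
# to wait for an end-of-quarter discount window? All prices are hypothetical.

def dram_nand_share(bom: dict) -> float:
    """Return the fraction of total BOM cost tied up in DRAM and NAND."""
    total = sum(bom.values())
    return (bom.get("dram", 0) + bom.get("nand", 0)) / total

# A light-duty box (think Active Directory) with a few cores and 16GB RDIMMs
light_bom = {"chassis_board_psu": 1800, "cpu": 900, "dram": 350, "nand": 250, "nic": 200}

# A memory/storage-heavy node where DRAM + NAND dominate the build
heavy_bom = {"chassis_board_psu": 2500, "cpu": 3500, "dram": 14000, "nand": 12000, "nic": 600}

for name, bom in [("light", light_bom), ("heavy", heavy_bom)]:
    share = dram_nand_share(bom)
    plan = "time for end-of-quarter discounts" if share < 0.25 else "lock in pricing now"
    print(f"{name}: DRAM+NAND = {share:.0%} of BOM -> {plan}")
```

The point is simply that the decision flips on the DRAM plus NAND share of the build, not on the absolute dollar figure.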

But here is the crucial distinction: if your workloads are DRAM-heavy or NAND-intensive, waiting for discounts is likely a mistake this year. The economics have flipped. SSDs and RDIMMs are increasing in price, and the idea that you can negotiate meaningful discounts on components that are themselves appreciating in value does not hold up. Vendors have little reason to discount when demand outstrips supply and prices are already moving upward.
Here is a concrete example of how this plays out. Consider organizations deploying servers where 80% or more of the bill of materials cost is tied up in NAND and DRAM. Those components will increase in price both within the quarter and throughout the year. The smart move is to lock in pricing as early as possible rather than hoping for concessions that will not materialize.
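As a rough illustration with made-up numbers, here is how that 80% share amplifies the whole-system impact of component inflation; both the share and the quarterly increase below are assumptions for the sake of the example:

```python
# Rough illustration of why a DRAM/NAND-heavy BOM punishes waiting.
# Assume 80% of the build cost is DRAM + NAND and those parts rise 30%
# over the quarter while everything else stays flat (hypothetical figures).

memory_storage_share = 0.80   # fraction of BOM in DRAM + NAND
component_increase = 0.30     # assumed quarterly price rise on those components

system_increase = memory_storage_share * component_increase
print(f"Whole-system cost increase: {system_increase:.0%}")   # -> 24%

# Any end-of-quarter "discount" would need to exceed that just to break even
# versus buying the same configuration at the start of the quarter.
```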

It is also worth noting that quote durations are shrinking. Even getting a price quote valid for 30 days is becoming more challenging as vendors and distributors adjust to market conditions. We at STH have heard from numerous members who have seen price increases exceeding 30% within just a few weeks. That is not hypothetical; it is happening right now.
Our recommendation at STH is straightforward. If you have budget available and you know what you will need over the next 12 to 18 months, strongly consider purchasing in the first quarter of 2026. Yes, it feels aggressive, but when prices are rising this rapidly, the math favors acting now rather than waiting.
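To put a number on what waiting costs at rates like these, here is a small sketch that compounds an assumed weekly price increase over one quarter. The 3% per week figure is an assumption roughly in line with the 30%-in-a-few-weeks reports above, not a forecast:

```python
# Compound a steady weekly price increase over a quarter to see what
# "waiting for a better deal" actually costs. The weekly rate is an assumption.

weekly_increase = 0.03        # assumed 3% per week on DRAM/NAND
weeks_in_quarter = 13

price_multiplier = (1 + weekly_increase) ** weeks_in_quarter
print(f"Price after one quarter: {price_multiplier:.2f}x today's quote")  # ~1.47x

# Even a rate half that size compounds to roughly 1.2x over a quarter,
# which dwarfs any discount a vendor is likely to offer on rising parts.
```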




One sentence about the sponsored nature of the article, buried in the second paragraph, is an interesting choice.
This hits hard. We’ve been buying from a reseller for almost 10 yrs and they’ve changed quote validity to 14 days. It used to be valid for 90 days.
I like the article, but I think there is one aspect missing. With CPU core counts going up drastically, you will have bigger server consolidation. For small and medium enterprises (emphasis on small), you also have to consider your n-1 for virtualization redundancy (planning for one server failure). So less server consolidation and going towards single-socket CPUs might be the better choice. Extreme case to stress the point: 4 single-socket servers with 64 cores each is a much better setup than 2 dual-socket servers with 64 cores per socket (for comparison). With single-socket servers you also have more DIMM slots per CPU, so you could go with smaller DIMM capacities (e.g., extreme case, 64GB instead of 256GB). In the end you are trading more memory capacity for less memory bandwidth, which might be worthwhile depending on your workload.
How to turn water into wine in 2026!
1. Purchase 1 or more Intel 8380 40-core CPUs ($1K)
2. Purchase the least expensive 4x 16GB DDR4-3200 RDIMMs ($100 each)
3. Purchase 4x 128GB Optane PMem 200 modules (put into Memory Mode) ($69 each!)
4. Purchase 1 or more Mellanox 516-GCAT cards and cross-flash to 516-CDAT (dual-port 100G PCIe Gen4) ($250 each)
5. Find a new-model (2026) PC gaming case that has good airflow ($150 for a good one!)
6. 2TB M.2 SSD (hunt for a good deal at $150)
7. Motherboard: find someone at STH that will make your day!
This system takes care of the 95th percentile and still avoids the AI tax that most other components now carry.
We’ve been seeing 3% per week price increases this year. It’s crazy out there.
The comment about not treating PCIe lanes as just a count versus available speed is unfortunately not reflected in reality (admittedly, mainly for small businesses). A dual 10Gb NIC at PCIe 5.0 x1 is not a thing (nor is dual 25Gbit at PCIe 5.0 x2, and what would be really great is dual 50Gbit with SFP56 at PCIe 5.0 x4). The same goes for an SSD backplane that offers PCIe 5.0 x2 per drive (allowing 8 drives on 16 PCIe lanes at up to a theoretical 64GB/sec, which is still a massive improvement over SAS). Current PCIe lanes are wasted in many instances, so all we can do is go by count. For an SMB, or even an edge deployment, this would allow something much smaller like a 4005 with 16 cores to still offer 8x NVMe SSDs, 100Gb networking, and 96GB ECC DDR5 (sticking with 1DPC; I do wish for 64GB ECC DDR5 UDIMMs) x3 (or x4) for an affordable N+1 virtualization cluster (assuming software-defined storage). But I guess the solution is either older hardware (like jpmomo suggests) or the cloud.