Supermicro SSG-6047R-E1R72L 72x 3.5″ Drive 4U Storage Server Released

Supermicro SSG-6047R-E1R72L rear

Supermicro recently released a new 4U, 72-drive storage server. The SSG-6047R-E1R72L aims to combine storage density with compute power: many disk shelves can handle around 70 drives in a 4U or 5U form factor, but the SSG-6047R-E1R72L fits its drives while still leaving room for a dual-socket server platform. The system is outfitted with 3x LSI SAS2308 controllers (in IT mode) providing up to 24x 6.0Gb/s SAS lanes (144Gb/s total). The storage server also includes 4x PCIe x8 slots for additional networking capacity.
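To put those figures in perspective, here is a quick back-of-envelope sketch. The 144Gb/s total comes straight from the configuration above (24 lanes × 6.0Gb/s); the per-drive sequential throughput of ~180MB/s is an assumed figure for 3.5″ 7200 RPM drives of this generation, not a Supermicro specification.

```python
# Back-of-envelope bandwidth check for the SSG-6047R-E1R72L.
# 24 lanes x 6.0 Gb/s comes from the spec sheet; the per-drive
# throughput below is an ASSUMPTION, not a vendor figure.

SAS_LANES = 24        # 3x LSI SAS2308 controllers, 8 lanes each
LANE_GBPS = 6.0       # SAS2 line rate per lane (8b/10b encoding means
                      # usable throughput is ~80% of this)
DRIVES = 72
DRIVE_MBPS = 180      # assumed sustained sequential rate per 3.5" HDD

lane_gbps_total = SAS_LANES * LANE_GBPS            # 144 Gb/s line rate
drive_gbps_total = DRIVES * DRIVE_MBPS * 8 / 1000  # MB/s -> Gb/s

print(f"SAS lanes:           {lane_gbps_total:.0f} Gb/s line rate")
print(f"72 drives streaming: {drive_gbps_total:.1f} Gb/s")
```

Even after 8b/10b encoding overhead (roughly 115Gb/s usable), the SAS fabric keeps pace with all 72 spindles streaming sequentially under these assumptions, which is the point of putting three HBAs in one box.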

Supermicro SSG-6047R-E1R72L front

The chassis design fits multiple drives on each slide-out rail, a setup several vendors are using to increase storage density. Another trick Supermicro employs is double-sided storage: the rear of the chassis also has 3.5″ bays holding more than one drive per slide-out rail, so virtually every available space in the chassis is used for storage. One other interesting feature is that, even with 72x 3.5″ drives, Supermicro has added 2x 2.5″ fixed internal (not hot-swap) drive bays, plus an option for an additional 2x 2.5″ hot-swap drive bays on the rear. With many storage tiering applications today using 2.5″ SSDs to accelerate performance, the option to add 2.5″ storage is a sound one.

Supermicro SSG-6047R-E1R72L rear

Supermicro SSG-6047R-E1R72L Specifications

  •     4U High-Capacity Double-Sided Storage Chassis
  •     72x 3.5″ SAS2 or SATA3 6Gb/s HDDs in 36x hot-swap drive bays (two drives per bay)
  •     Dual Intel® Xeon® E5-2600 processor support (up to 135W TDP)
  •     Up to 512GB DDR3 1600MHz ECC Registered memory in 16x DIMM sockets
  •     3x LSI SAS2308 IT-mode controllers (24x lanes of 6Gb/s SAS)
  •     2x 2.5″ internal fixed HDDs/SSDs for OS/applications
  •     Optional 2x 2.5″ rear hot-swappable HDDs/SSDs for OS/applications
  •     4x available PCIe 3.0 x8 expansion slots (optional SAS HBA for additional JBOD expansion, 10G or 40G LAN, and 56Gb/s FDR InfiniBand cards)
  •     Quad-port Gigabit LAN (optional 10G and 40G LAN)
  •     IPMI 2.0 (dedicated LAN) with Virtual Media/KVM over LAN
  •     2000W (1+1) redundant, Platinum Level, high-efficiency (95%) digital switching power supplies
  •     Optional Battery Backup Power (BBP) module
  •     Target applications: NAS and SAN, scale-out platforms such as OpenStack Swift, archival storage, compliance storage, and media storage

See here for the official release.

6 COMMENTS

  1. We’ve been working with a vendor to see if these would work for our needs. We were hoping to get all the backplanes chained together in IR mode, but we are having lots of trouble getting them out of IT mode.

    In addition, the vendor says that if you pull a drive to swap it, you pull both drives out (not good for us). I’m going to head down to their shop early next week to confirm some of this.

  2. I was about to scream bloody murder when, for the life of me, I couldn’t figure out where the hell you fit that many drives in half the holes, till I saw the little bit about two drives per tray. How do you swap one without removing the other????

    I would hate to see the price tag though.

  3. Benji: According to my guy, you pull out both at once. So I’ll have to grab one, look it over, and take pics for you. Trying to figure out how I can make use of it if I have to pull two at a time.

  4. This is designed for newer distributed storage software like Ceph. It can survive multiple drive failures since the data is replicated.

    With this, users can run today’s open source, cloud-centric storage software and get high availability and high resilience without the need to buy expensive proprietary products.

  5. I would be very much against pulling a known-good drive along with a suspected failed drive.
    If you are correct, then this is more of a niche-market toy with little gain for the rest of the community that runs more mainstream things like Winblows or ZFS.

  6. Make each disk pair a stripe; if one drive fails, the other will “fail” with it. Then add RAID 6, RAID-Z2, or RAID-Z3 on top of that: reliability + speed + capacity.
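For readers wondering how the redundancy schemes floated in comments 4 and 6 pencil out, here is a minimal capacity sketch. The 4TB drive size is a hypothetical figure chosen for illustration, and the single parity group spanning all 36 trays is an assumption; a real pool would likely be split into several smaller vdevs.

```python
# Capacity and fault-tolerance sketch for the layouts suggested in
# the comments: striped two-drive trays with RAID-Z2/Z3 on top
# (comment 6) and plain 3x replication a la Ceph (comment 4).
# The 4 TB drive size is an ASSUMPTION for illustration only.

DRIVES = 72
DRIVES_PER_TRAY = 2
DRIVE_TB = 4                               # assumed drive capacity
RAW_TB = DRIVES * DRIVE_TB                 # 288 TB raw

trays = DRIVES // DRIVES_PER_TRAY          # 36 striped two-drive units
tray_tb = DRIVES_PER_TRAY * DRIVE_TB       # 8 TB per tray unit

for name, parity in (("RAID-Z2", 2), ("RAID-Z3", 3)):
    usable = (trays - parity) * tray_tb
    print(f"{name} over trays: survives {parity} tray pulls, "
          f"~{usable} TB usable of {RAW_TB} TB raw")

# Replication counts whole drives rather than trays: each object
# survives as long as one of its three replicas remains.
print(f"3x replication: ~{RAW_TB / 3:.0f} TB usable of {RAW_TB} TB raw")
```

The key point: because both drives in a tray come out together, parity has to be counted in trays rather than individual disks, which is exactly the workaround comment 6 proposes for the two-drives-per-tray servicing issue.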
