Today we have the review of a switch affectionately called the HPE Aruba Instant On 1960 24G 2XGT 2SFP+, or the shorter model number, the JL806A. As the name suggests, this is part of the new Instant On 1960 series of products. We previously reviewed the 48-port version in our HPE Aruba Instant On 1960 48G 2XGT 2SFP+ Switch Review JL808A article. Now, we are (finally) getting to the 24-port version.
HPE Aruba Instant On 1960 24G 2XGT 2SFP+ Switch Hardware Overview
With many of our recent reviews, we have been splitting the review into external and internal hardware overviews. We are going to continue that tradition here before moving into management.
We have a video version of this review as well. The video also includes the 48-port version of this switch:
As always, we suggest opening this in its own YouTube tab, browser, or app for the best viewing experience.
With that, let us get to the hardware.
HPE Aruba Instant On 1960 24G 2XGT 2SFP+ Switch External Hardware Overview
The switch itself is a 1U design with rounded corners instead of the boxy right angles we are accustomed to seeing. Since space is not at a premium inside this chassis, this is a nice aesthetic touch.
The main feature of this switch is its 1GbE port count. It has a total of 24x 1GbE ports. As a note, this is the JL806A model. There is also an Aruba JL807A, which is the PoE version of this switch.
There is a part of us that acknowledges that the 1GbE market is huge. Still, for a late 2021/early 2022 switch platform, it would have been nice to see a 2.5GbE offering. We have been seeing many more 2.5GbE corporate desktop PCs, firewalls, and WiFi APs recently, so the ecosystem is transitioning, albeit slowly.
In terms of ports, there are four 10GbE ports. Two are 10Gbase-T ports. The other two are SFP+ ports. While it is nice to have dedicated 10Gbase-T ports, having four SFP+ ports would have been a more flexible solution in some circumstances, given the prevalence of SFP+ to 10Gbase-T adapters these days. Then again, people search for switches by port type, so having 10Gbase-T ports can make sense from a marketing standpoint.
On the rear of the switch, we basically have just the power input. The PSU is internal and not redundant as we will see in the internal overview section.
One of the more fascinating features of the 24-port model is its large vent. As we will see in the internal overview, this is a passively cooled switch. As a result, this vent is used to extract heat from the hotter part of the switch. There is one downside: this switch is not intended to be mounted directly under a desk/table like many of the other 1960 switches, because that would cover the vent.
Aruba has a small service tag. This is the only place where the model number is easily identifiable on the front of the system. We wish that Aruba added some identifiers to the faceplate. There is plenty of room.
One small but important item must be noted. If you look at the Instant On 1960 24-port and 48-port switches, you will immediately notice something: the LEDs, 10GbE ports, and twenty-four of the 1GbE ports are all aligned. Likewise, the power inputs on the rear are lined up as well. For those who obsess over cable management, this consistency is a virtue. It also makes the series feel higher-quality, since we have seen other vendors use the lowest-cost placement, which ends up misaligned across a switch line.
With that, let us get inside the switch to see how it works.
Sigh, I was all ready to learn about how fast these would start forwarding packets after a cold power up, but alas their ‘instant on’ is just marketing. Managed switches and routers have always suffered horrible boot times, measuring into the minutes. We should all be holding these companies’ feet to the fire to make their devices boot faster than the servers connected to them.
Don’t switches stay on 99.9% of the time?
Does the use of the vent in the 24-port model mean that it can only be racked at the top of the stack?
Obviously, 24-port switches aren’t the first choice in situations where you’ll be cramming them in for port density; but given the tendency of demand for ports to creep up over time, and the often-cramped ersatz wiring closets commonly populated by switches deliberately priced lower than the fancy enterprise options, having to remember that a specific switch must either go on top of the stack or have 1U or more of blank space above it will probably bite a few people, if that is in fact required.
Fanless is certainly appreciated for deskside use; but a top vent looks awfully like a violation of the tacit assumption that rackmount gear may make demands about how cold the cold aisle is and how much unimpeded flow-through it will be allowed, but isn’t really supposed to have binding opinions about what is above or below it. It’s a pity that they couldn’t have made it work with some combination of front, rear, and side vents.
@eug – In theory, yes, they stay on all the time; that’s been my experience, until they break. In practice, there should be scheduled downtime (and we all do that, right?) to perform code upgrades in a way that avoids or minimizes impact on users.
I am starting to question the rationality of Rohit’s reviews… or the quality of editorial and grammatical review at STH.
Saying the switch has an IP address and that it uses DHCP in the same sentence shows a lack of understanding of those technologies, or a rush to get a review out the door. And I doubt that anyone installing this switch, after dropping a few C-notes for it, is a complete n00b who would nuke their own network; the threat implied by using an IP address that duplicates many SMB routers.
I suspect this switch actually attempts to use DHCP first. If DHCP fails to obtain a lease, then it defaults to its assigned IP address rather than resorting to APIPA addressing. And that assigned IP address of 192.168.1.1 IS a POOR CHOICE. Patrick, I’ll take a few $$ to rewrite a properly worded review for this product, thank you.
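[Editor’s note: the address-selection behavior hypothesized in the comment above — try DHCP first, then fall back to a fixed factory default rather than an APIPA (169.254.0.0/16) address — can be sketched as follows. This is an illustrative sketch only; the actual firmware behavior and the 192.168.1.1 default are assumptions taken from this discussion, not confirmed by HPE Aruba documentation.]

```python
# Hypothetical sketch of the commenter's theory of the switch's
# management-address selection. FALLBACK_IP is the assumed factory
# default discussed in this thread, not a documented value.
from typing import Optional

FALLBACK_IP = "192.168.1.1"  # assumed factory default (per the comment)

def select_management_ip(dhcp_lease: Optional[str]) -> str:
    """Return the DHCP-assigned address if a lease was obtained;
    otherwise fall back to the fixed default instead of APIPA."""
    if dhcp_lease is not None:
        return dhcp_lease
    return FALLBACK_IP

# With a working DHCP server, the lease wins:
print(select_management_ip("10.0.0.50"))  # -> 10.0.0.50
# With no DHCP server on the segment, the fixed default is used:
print(select_management_ip(None))         # -> 192.168.1.1
```

The commenter’s objection is to the last case: 192.168.1.1 collides with the default gateway address of many SMB routers, so plugging an unconfigured switch into such a network could create a duplicate-IP conflict.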
This review makes me wonder if Rohit ever found the loose screw between the keyboard and his chair. /facepalm/