
Tag: AI

NVIDIA Jetson Orin Nano Developer Kit Launched for $499

We have a quick look at the newly launched $499 NVIDIA Jetson Orin Nano Developer Kit so you can see what is in the box

ChatGPT Hardware a Look at 8x NVIDIA A100 Powering the Tool

If you have heard about OpenAI ChatGPT AI inference running on the NVIDIA A100 and want to know what an NVIDIA A100 is, this is for you

Intel NVIDIA and Arm Team-up on a FP8 Format for AI

Intel, NVIDIA, and Arm team up on a common FP8 format for AI that the companies plan to submit to the IEEE in an open, license-free format

Intel Accelerates Messaging on Acceleration Ahead of Sapphire Rapids Xeon

This week, Intel accelerated its acceleration messaging ahead of its upcoming Sapphire Rapids Xeon server CPU launch

Cerebras Wafer Scale Engine WSE-2 and CS-2 at Hot Chips 34

At Hot Chips 34, the Cerebras Wafer Scale Engine (WSE-2) was detailed, and the company showed how it is scaling out CS-2 deployments

Tesla Dojo Custom AI Supercomputer at HC34

At Hot Chips 34, we got a glimpse of the Tesla Dojo custom AI supercomputer, its V1 Interface NICs with HBM, and how it scales data loading

Tesla Dojo AI Tile Microarchitecture

In the first Hot Chips 34 Tesla talk, the company discussed the Tesla Dojo microarchitecture, the underpinnings of its AI supercomputer chips

Untether.AI Boqueria 1458 RISC-V Core AI Accelerator

Untether.AI Boqueria is a 1458 RISC-V core AI accelerator discussed at Hot Chips 34 that aims to scale low-power AI inference

AMD-Xilinx and AI Updates at AMD Financial Analyst Day 2022

At AMD FAD 2022, the company discussed its embedded and AI strategy, including AI Engine accelerator proliferation and unifying its software

Intel Habana Greco AI Inference PCIe Card at Vision 2022

The Intel Habana Greco is the company's new AI inference accelerator in a low-profile PCIe card with massive generational improvements