Tag: AI

NVIDIA H100 Hopper FP8 Transformer Models Trained

Intel, NVIDIA, and Arm Team Up on an FP8 Format for AI

Intel, NVIDIA, and Arm team up on a common FP8 format for AI that the companies plan to submit to the IEEE in an open, license-free format
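
For context, the published joint proposal defines two 8-bit encodings, E4M3 and E5M2. A minimal Python sketch of an E4M3 decoder (sign bit, 4 exponent bits with bias 7, 3 mantissa bits; the all-ones pattern is NaN rather than infinity per the spec — the helper name here is ours, not from any vendor library):

```python
def decode_e4m3(byte: int) -> float:
    """Decode one FP8 E4M3 byte (S.EEEE.MMM, bias 7) to a Python float."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF   # 4 exponent bits
    mant = byte & 0x7         # 3 mantissa bits
    if exp == 0xF and mant == 0x7:
        # E4M3 reserves only the all-ones pattern for NaN; there is no infinity
        return float("nan")
    if exp == 0:
        # Subnormal: no implicit leading 1, fixed exponent of 1 - bias
        return sign * (mant / 8.0) * 2.0 ** (1 - 7)
    return sign * (1.0 + mant / 8.0) * 2.0 ** (exp - 7)

# 0x38 = 0b0_0111_000 -> 1.0; 0x7E = 0b0_1111_110 -> 448.0 (the max finite value)
```

The absence of infinities is what lets E4M3 reach 448 instead of topping out at 240; E5M2 keeps IEEE-style infinities and trades mantissa bits for range.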

Intel Accelerates Messaging on Acceleration Ahead of Sapphire Rapids Xeon

This week, Intel accelerated its acceleration messaging ahead of its upcoming Sapphire Rapids Xeon server CPU launch

Cerebras Wafer Scale Engine WSE-2 and CS-2 at Hot Chips 34

At Hot Chips 34, the Cerebras Wafer Scale Engine (WSE-2) was detailed, and the company showed how it is scaling out CS-2 deployments

Tesla Dojo Custom AI Supercomputer at HC34

At Hot Chips 34, we got a glimpse of the Tesla Dojo custom AI supercomputer, its V1 Interface NICs with HBM, and how it scales data loading

Tesla Dojo AI Tile Microarchitecture

In the first Hot Chips 34 Tesla talk, the company discussed the Tesla Dojo microarchitecture, the underpinnings of its AI supercomputer chips

Untether.AI Boqueria 1458 RISC-V Core AI Accelerator

Untether.AI Boqueria is a 1458-core RISC-V AI accelerator, detailed at Hot Chips 34, that aims to scale low-power AI inference

AMD-Xilinx and AI Updates at AMD Financial Analyst Day 2022

At AMD FAD 2022, the company discussed its embedded and AI strategy, including AI Engine accelerator proliferation and unifying its software

Intel Habana Greco AI Inference PCIe Card at Vision 2022

The Intel Habana Greco is the company's new AI inference accelerator in a low-profile PCIe card with massive generational improvements

Intel Habana Gaudi2 Launched for Lower-Cost AI Training

Intel Gaudi2 is the new AI training chip from the Habana Labs acquisition, offering a massive jump in compute and memory specs

Inspur MetaEngine for NVIDIA OVX

The Inspur MetaEngine is the company's NVIDIA OVX platform. We had a chance to talk to Inspur about why its customers are deploying it