- New Microsoft Maia AI Accelerator: ahead of AWS's offering, but with less HBM than NVIDIA and AMD parts for large AI model training and inference
- New AMD MI300 instances for Azure: A serious challenger to NVIDIA H100
- New NVIDIA H200 instances coming: more HBM memory
- Maia is built on TSMC 5nm and delivers strong TOPS and FLOPS, but it was designed before the LLM explosion (an ASIC takes roughly 3 years to develop, fab, and test). The chip is massive, with 105B transistors (vs. 80B in the H100), and it cranks out 1,600 TFLOPS of MXInt8 and 3,200 TFLOPS of MXFP4. Its most significant deficit is that it has only 64 GB of HBM, though it compensates with a large amount of on-die SRAM
- Microsoft Cobalt Arm CPU for Azure: 40% faster than the incumbent Ampere Altra
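
The MX formats cited for Maia (MXInt8, MXFP4) are OCP Microscaling block formats: a group of values shares one power-of-two scale, and each element is stored in a narrow integer or floating-point encoding. A minimal sketch of the idea, assuming the spec's 32-element block size and using a uniform grid as a crude stand-in for the real FP4 (E2M1) element grid:

```python
import numpy as np

BLOCK = 32  # MX spec: 32 elements share one scale

def mx_quantize(x, elem_max=6.0):
    """Quantize a 1-D array into MX-style blocks.

    Each block of 32 values shares one power-of-two scale (mimicking
    the E8M0 shared exponent); elements are rounded onto a coarse grid
    and clamped to elem_max (6.0 is FP4 E2M1's largest magnitude).
    The uniform half-step grid here is a simplification of real FP4.
    """
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % BLOCK
    xb = np.pad(x, (0, pad)).reshape(-1, BLOCK)
    # Shared scale: smallest power of two mapping the block max into range
    amax = np.abs(xb).max(axis=1, keepdims=True)
    scale = 2.0 ** np.ceil(np.log2(np.maximum(amax, 1e-30) / elem_max))
    # Element quantization: round to the nearest 0.5 step, then clamp
    q = np.clip(np.round(xb / scale * 2) / 2, -elem_max, elem_max)
    return q, scale

def mx_dequantize(q, scale, n):
    """Rescale quantized blocks and drop padding."""
    return (q * scale).reshape(-1)[:n]
```

The shared per-block scale is why MX formats keep accuracy at very low element widths: outliers only force a coarser scale for their own 32-value block rather than for an entire tensor.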