Bits & Bytes

IBM, Nvidia pair up on AI-optimized converged storage system. 

  • IBM SpectrumAI with Nvidia DGX is a converged system that combines a software-defined file system, all-flash storage, and Nvidia’s DGX-1 GPU system. The storage system supports AI workloads and data tools such as TensorFlow, PyTorch, and Spark.

Google announces Cloud TPU Pods, now available in alpha.

  • Google’s Cloud TPU Pods, now in alpha, are tightly coupled supercomputers built from hundreds of Google’s custom Tensor Processing Unit (TPU) chips and dozens of host machines. Price/performance benchmarking shows a 27x speedup at nearly 40% lower cost when training a ResNet-50 network.

MediaTek announces the Helio P90. 

  • The Helio P90 system-on-chip (SoC) uses the company’s APU 2.0 AI architecture. MediaTek positions APU 2.0 as a fusion AI architecture and claims it delivers AI performance up to 4x that of the earlier Helio P70 and Helio P60 chipsets.

Facebook open sources PyText for faster NLP development.

  • Facebook has open-sourced the PyText modeling framework for NLP experimentation and deployment. The library is built on PyTorch and supports use cases such as document classification, sequence tagging, semantic parsing, and multitask modeling (see the sketch below for a flavor of the document-classification use case).
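
As a rough illustration of the document-classification use case, here is a minimal PyTorch sketch of a bag-of-embeddings classifier, the kind of model a framework like PyText wraps. It is not PyText’s own API, and every class name, size, and batch below is an illustrative assumption.

    # Minimal PyTorch sketch of a bag-of-embeddings document classifier.
    # All names, sizes, and the toy batch are illustrative assumptions,
    # not part of PyText's API.
    import torch
    import torch.nn as nn

    class TinyDocClassifier(nn.Module):
        def __init__(self, vocab_size=10000, embed_dim=64, num_classes=4):
            super().__init__()
            # EmbeddingBag averages the token embeddings of each document.
            self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, mode="mean")
            self.fc = nn.Linear(embed_dim, num_classes)

        def forward(self, token_ids, offsets):
            return self.fc(self.embedding(token_ids, offsets))

    model = TinyDocClassifier()
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Toy batch: two documents packed into one flat tensor of token ids.
    token_ids = torch.tensor([1, 5, 7, 42, 3, 9])  # doc 1 = first 4 ids, doc 2 = last 2
    offsets = torch.tensor([0, 4])                 # start index of each document
    labels = torch.tensor([0, 2])                  # one class label per document

    loss = loss_fn(model(token_ids, offsets), labels)
    loss.backward()
    optimizer.step()
    print(f"toy training loss: {loss.item():.4f}")

PyText’s value over a raw loop like this is the experimentation-to-deployment workflow the announcement describes, rather than the model code itself.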

On scaling AI training.

  • This article from OpenAI proposes that the gradient noise scale, a statistical measure of how noisy gradient estimates are relative to their magnitude, can be used to predict how parallelizable training will be for a wide variety of tasks, and it explores the relationship between gradient noise scale, batch size, and training speed. A sketch of the estimator is included below.
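
To make the metric concrete, below is a small PyTorch sketch of the “simple” noise scale estimator from the accompanying paper (“An Empirical Model of Large-Batch Training”): it compares squared gradient norms measured at a small and a large batch size to estimate the trace of the gradient covariance and the true gradient norm, whose ratio is the noise scale. The two formulas in the comments follow the paper’s appendix; the toy model, loss, and batch sizes are placeholder assumptions.

    # Sketch of the "simple" gradient noise scale, B_simple = tr(Sigma) / |G|^2,
    # estimated by comparing squared gradient norms at two batch sizes.
    # Toy model, loss, and batch sizes below are placeholder assumptions.
    import torch

    def squared_grad_norm(model, loss_fn, inputs, targets):
        """Return |g_B|^2 for the gradient computed on a single batch."""
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        return sum(p.grad.pow(2).sum().item()
                   for p in model.parameters() if p.grad is not None)

    def simple_noise_scale(model, loss_fn, small_batch, big_batch, b_small, b_big):
        g2_small = squared_grad_norm(model, loss_fn, *small_batch)
        g2_big = squared_grad_norm(model, loss_fn, *big_batch)
        # Unbiased estimators (paper appendix):
        #   |G|^2     ~ (B_big*|g_big|^2 - B_small*|g_small|^2) / (B_big - B_small)
        #   tr(Sigma) ~ (|g_small|^2 - |g_big|^2) / (1/B_small - 1/B_big)
        true_grad_sq = (b_big * g2_big - b_small * g2_small) / (b_big - b_small)
        trace_sigma = (g2_small - g2_big) / (1.0 / b_small - 1.0 / b_big)
        return trace_sigma / true_grad_sq

    # Toy usage: linear regression on random data.
    model = torch.nn.Linear(10, 1)
    loss_fn = torch.nn.MSELoss()
    b_small, b_big = 32, 256
    small_batch = (torch.randn(b_small, 10), torch.randn(b_small, 1))
    big_batch = (torch.randn(b_big, 10), torch.randn(b_big, 1))
    print(simple_noise_scale(model, loss_fn, small_batch, big_batch, b_small, b_big))

Single measurements like this are very noisy (and can even come out negative); the paper smooths the two estimates with exponential moving averages over many training steps before taking their ratio.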

Dollars & Sense

Sign up for our newsletter to receive this weekly in your inbox.