AWS offers Nvidia’s Tesla T4 chip and more from TWiML & AI

Bits & Bytes

  • AWS offers Nvidia’s Tesla T4 chip for AI inference. The new EC2 instances, equipped with Tesla T4 GPUs, will give AWS customers a lower-cost option for AI inference. T4 will also be available through Amazon Elastic Container Service for Kubernetes.
  • Qualcomm launches new, AI-enabled smart speaker platform. The new Qualcomm QCS400 SoC series brings together its high-performance, low-power compute capabilities and audio technology to deliver highly optimized, AI-enabled solutions designed for smarter audio and IoT applications.
  • Expert System announces the release of its new AI platform, Cogito. The new version of Cogito (14.4) delivers key updates in the areas of knowledge graphs, ML and RPA to help organizations accelerate and simplify the adoption of AI in their business workflows.
  • New Falkonry product seeks to simplify the creation and deployment of predictive models at the edge. The product, the Falkonry Edge Analyzer, is a portable, self-contained engine that lets customers deploy predictive analysis on edge devices, enabling low-latency applications in disconnected industrial environments or close to data sources.
  • IBM and Intel outline quantum AI advances. In a recent study, IBM researchers shared a quantum supervised learning algorithm with the potential to enable ML on quantum computers in the near future. Meanwhile, Intel and Hebrew University researchers unveiled a proof of deep learning’s superior ability to simulate the computations involved in quantum computing, revealing problems deep learning can excel at and proposing a promising way forward in quantum computing.
  • Neurala launches Brain Builder platform to accelerate the creation of custom vision AI solutions. The SaaS platform, Brain Builder, streamlines the creation of custom computer vision solutions by offering an all-in-one tool for data tagging, training, deployment, and analysis. I saw a demo of the product at GTC, and its ability to quickly create an image classifier from just a few labeled data points looked pretty interesting, though I still have questions about how it works (I was told it isn’t based on transfer learning).
  • Facebook open-sources hardware designs for AI model training and inference. Facebook is donating its three hardware platforms for AI to the Open Compute Project. These include Zion for training workloads, Kings Canyon for AI inference, and Mount Shasta for video transcoding.
  • Google announces an all-neural on-device speech recognizer. Google rolled out an end-to-end, all-neural, on-device speech recognizer to power speech input in Gboard. In a research paper, Google presented a model trained using RNN transducer (RNN-T) technology that is compact enough to reside on a phone [Paper].
Dollars & Sense

To receive Bits & Bytes in your inbox, subscribe to our newsletter.
