The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
AI in the Cloud at re:Invent
Hot off the presses: newsletter fourteen!
Dispatch from the cloud frontier
A couple of newsletters ago, I shared a few thoughts on AI and the cloud, and how the two fit together. In that piece, I noted that the major cloud vendors are investing heavily in machine learning and AI offerings, and described my three-tiered mental model for Cloud AI Services. This week I’m here in Las Vegas at Amazon Web Services’ re:Invent conference, and getting to watch it all play out in practice.
Without a doubt, Amazon’s huge investment in machine learning and AI continues, with the company announcing a ton of new ML features and services this week. (In fact, they had too much to fit into the Wednesday and Thursday keynotes, so they have been making announcements over the past couple of weeks, with more expected into next week.)
Here’s a summary of their recent ML and AI announcements, grouped by the aforementioned tiers:
Machine learning and AI services
Amazon Rekognition. The company’s image analysis service received feature updates and accuracy improvements, and new Video analysis features were added.
Amazon Comprehend. A new NLP-as-a-Service offering that performs tasks like entity extraction and topic modeling.
Amazon Translate. A new Translation-as-a-Service offering.
Amazon Transcribe. A new audio transcription service with noteworthy support for custom vocabularies and multiple speaker identification (both “coming soon”).
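To make this tier concrete, here’s a minimal sketch of what calling one of these services might look like from Python via boto3, the AWS SDK; `detect_entities` is Comprehend’s entity-extraction operation, while the wrapper function name and sample text are my own, and running the call requires configured AWS credentials.

```python
def comprehend_entities(text: str, region: str = "us-east-1"):
    """Return (text, type) pairs for entities Amazon Comprehend finds in `text`."""
    import boto3  # AWS SDK for Python; imported here so the sketch reads standalone

    client = boto3.client("comprehend", region_name=region)
    resp = client.detect_entities(Text=text, LanguageCode="en")
    return [(e["Text"], e["Type"]) for e in resp["Entities"]]

# Usage (not run here; requires AWS credentials):
# comprehend_entities("AWS re:Invent is held in Las Vegas.")
```

The other services in this tier follow the same pattern: a single API call against a fully managed model, with no training or infrastructure on your side.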
Cloud-based data science platforms
Amazon SageMaker. Hosted Jupyter notebooks with hooks into the AWS service ecosystem, optimized models, and one-click training and deployment.
Amazon EMR. The company’s managed Hadoop service now offers support for new GPU instance types and Amazon’s deep learning framework of choice, Apache MXNet.
Machine learning at the edge
Amazon Greengrass ML Inference. Greengrass is the company’s software for programming edge devices, and this adds the ability to publish cloud-trained models to any Greengrass device.
Amazon DeepLens. This was the hot new toy at re:Invent! DeepLens is a $249 developer kit (i.e., for learning, not a production offering), consisting of a video camera and a 100 GFLOPS Intel Atom-based computer in a single compact form factor with inference-at-the-edge capabilities.
I’ve got much more to say about each of these! For a full (and fun!) discussion and analysis of these new features, keep an eye out for the re:Invent Roundup Roundtable show I recorded for the podcast, coming next week!
Sign up for our newsletter to receive this weekly in your inbox.