Accelerator Support

As ML adoption accelerates in organizations of all sizes, growing model complexity is driving up both the time and cost of training and the cost of production inference. To address this, a number of companies are emerging in the space, building hardware acceleration platforms tuned specifically for ML workloads. In concert with this, many end-to-end ML platforms and tools are building out support for these accelerators, making it easy for platform users to speed up training and reduce the cost of both training and production inference.

Some of these underlying accelerator technologies let customers distribute their workloads, cutting training time from weeks or days to minutes while simultaneously lowering costs. We will be tracking this space closely; stay tuned for more information here on the Solution Guide.
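A pattern many of these platforms encourage is writing accelerator-agnostic training code that runs on whatever hardware is available. A minimal sketch in PyTorch, assuming PyTorch is installed (the helper name `pick_device` is ours, not any platform's API):

```python
import torch

def pick_device() -> torch.device:
    # Prefer an accelerator when one is present; otherwise fall back to CPU.
    if torch.cuda.is_available():          # NVIDIA GPUs
        return torch.device("cuda")
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():  # Apple-silicon GPUs
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
# The same script then works unchanged on a laptop or a GPU cluster node.
model = torch.nn.Linear(8, 1).to(device)
batch = torch.randn(4, 8, device=device)
print(model(batch).shape)  # torch.Size([4, 1])
```

Keeping device selection in one place like this is what lets a platform swap the underlying hardware without changes to the training script.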

- SAS Visual Data Mining and Machine Learning: Solve the most complex analytical problems with a single, integrated, collaborative solution
- The MLOps Platform: Modern MLOps focused on speed and simplicity
- Weights & Biases: With a few lines of code, save everything you need to debug, compare and reproduce your models
- Spell: DLOps, creating data science
- Hugging Face: The AI community building the future
- Determined AI: Build models, not infrastructure