As ML adoption accelerates in organizations of all sizes, growing model complexity is driving up both the time and cost of training as well as the cost of production inference. To address this, a number of companies are emerging with hardware acceleration platforms tuned specifically for ML workloads. In concert with this, many End-to-End ML platforms and tools are building out support for these accelerators, making it easy for platform users to leverage them to speed up training and reduce the costs of both training and production inference.
Some of these underlying accelerator technologies enable customers to distribute their workloads, cutting training time from weeks or days to minutes while simultaneously lowering costs. We will be tracking this space closely. Stay tuned for more information here on the Solution Guide.