Deep learning has delivered major performance gains and become the industry standard in computer vision and NLP, yet in the time series domain many companies still prefer simple classical models such as ARIMA, Prophet, and linear regression. In this session we will survey some of the latest research on using LSTMs, other RNNs, and transformer models to forecast multivariate time series data, and examine how they compare to these classical methods (a minimal comparison sketch follows below). We will also look at the additional challenges of applying transfer learning to multivariate time series relative to areas like computer vision and NLP, and explore why time series hasn’t had its own “ImageNet” or “BERT” moment and whether one is even possible.
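To make the deep-versus-classical comparison concrete, here is a minimal, self-contained sketch (not code from the session itself) that fits an ARIMA baseline and a small PyTorch LSTM on a synthetic signal and compares test RMSE. The series is univariate for brevity, and the hyperparameters (window length, hidden size, ARIMA order) are illustrative assumptions; a real multivariate benchmark would be more involved.

```python
# Minimal, illustrative comparison: ARIMA baseline vs. a small LSTM forecaster
# on a synthetic signal. The toy data and hyperparameters are assumptions made
# for brevity, not settings endorsed by the session.
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(400)
series = np.sin(t / 20.0) + 0.1 * rng.standard_normal(t.size)
train, test = series[:300], series[300:]

# Classical baseline: a hand-picked ARIMA(2, 0, 1) fit on the training split.
arima_forecast = ARIMA(train, order=(2, 0, 1)).fit().forecast(steps=test.size)

# Deep baseline: a one-layer LSTM trained on sliding windows of length 24.
window = 24
X = np.stack([train[i:i + window] for i in range(train.size - window)])
X_t = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)   # (N, window, 1)
y_t = torch.tensor(train[window:], dtype=torch.float32)    # next-step targets

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)  # predict the next value

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):  # full-batch training is fine at this toy scale
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X_t), y_t)
    loss.backward()
    opt.step()

# Roll the LSTM forward over the test horizon, feeding back its own predictions.
history = list(train[-window:])
model.eval()
with torch.no_grad():
    for _ in range(test.size):
        x = torch.tensor(history[-window:], dtype=torch.float32).view(1, window, 1)
        history.append(model(x).item())
lstm_forecast = np.array(history[window:])

print("ARIMA RMSE:", np.sqrt(np.mean((arima_forecast - test) ** 2)))
print("LSTM  RMSE:", np.sqrt(np.mean((lstm_forecast - test) ** 2)))
```

On a smooth toy signal like this, the classical baseline is often competitive, which is part of why simple models remain so popular in industry.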
Finally, we will dive into the problems of deploying and monitoring deep time series models in a production setting, including issues surrounding concept drift and anomalies in temporal data (a toy drift-detection sketch follows).
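As one illustration of the monitoring problem, here is a minimal sketch of a common drift check: compare recent forecast residuals against a reference window using a two-sample Kolmogorov-Smirnov test. The window sizes and significance threshold are illustrative assumptions, not recommendations from the session.

```python
# Minimal, illustrative drift check: flag concept drift when recent forecast
# residuals look distributionally different from a reference window, via a
# two-sample Kolmogorov-Smirnov test. Windows and alpha are assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drift_alarm(reference_residuals, recent_residuals, alpha=0.01):
    """Return True when the recent residual distribution has shifted."""
    _, p_value = ks_2samp(reference_residuals, recent_residuals)
    return p_value < alpha

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, size=500)   # residuals logged at deployment
stable = rng.normal(0.0, 1.0, size=200)      # same regime: no alarm expected
drifted = rng.normal(0.8, 1.5, size=200)     # mean/variance shift: alarm expected

print(drift_alarm(reference, stable))   # False (with high probability)
print(drift_alarm(reference, drifted))  # True
```

In practice a check like this would run on a schedule against live residuals and trigger retraining or human review rather than a print statement.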
Altogether, this session aims to present attendees with a comprehensive analysis of both the barriers to and the benefits of employing modern techniques in industry. The session will include practical examples of how these models perform in different industries such as retail and healthcare.