Today we’re joined by Diana Marculescu, Department Chair and Professor of Electrical and Computer Engineering at the University of Texas at Austin.
Subscribe: iTunes / Google Play / Spotify / RSS
We caught up with Diana to discuss her work on hardware-aware machine learning. In particular, we explore her keynote, “Putting the ‘Machine’ Back in Machine Learning: The Case for Hardware-ML Model Co-design,” from the Efficient Deep Learning in Computer Vision workshop at this year’s CVPR conference.
In our conversation, we explore how her research group is making ML models more efficient so they run better on current hardware systems, and what components and techniques they’re using to achieve true co-design. We also discuss her work on neural architecture search, how this fits into the edge vs. cloud conversation, and her thoughts on the longevity of deep learning research.
Connect with Diana!
- Presentation: Putting the ‘Machine’ Back in Machine Learning: The Case for Hardware-ML Model Co-design
- Slides: Putting the ‘Machine’ Back in Machine Learning: The Case for Hardware-ML Model Co-design
- Paper: FLightNNs: Lightweight Quantized Deep Neural Networks for Fast and Accurate Inference
- Paper: Towards Efficient Model Compression via Learned Global Ranking
- Paper: NeuralPower: Predict and Deploy Energy-Efficient Convolutional Neural Networks
- Paper: Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours
- Join the TWIML Community!
- Check out our TWIML Presents: series page!
- Register for the TWIML Newsletter
- Check out the official TWIMLcon:AI Platform video packages here!
- Download our latest eBook, The Definitive Guide to AI Platforms!
“More On That Later” by Lee Rosevere licensed under CC By 4.0