Sam Charrington: Hey, what’s up everyone? This is Sam. A quick reminder that we’ve got a bunch of newly formed or forming study groups, including groups focused on Kaggle competitions and the fast.ai NLP and Deep Learning for Coders part one courses. It’s not too late to join us, which you can do by visiting twimlai.com/community. Also, this week I’m at re:Invent and next week I’ll be at NeurIPS. If you’re at either event, please reach out. I’d love to connect.

All right. This week on the podcast, I’m excited to share a series of shows recorded in Orlando during the Microsoft Ignite conference. Before we jump in, I’d like to thank Microsoft for their support of the show and their sponsorship of this series. Thanks to decades of breakthrough research and technology, Microsoft is making AI real for businesses with Azure AI, a set of services that span vision, speech, language processing, custom machine learning, and more. Millions of developers and data scientists around the world are using Azure AI to build innovative applications and machine learning models for their organizations, including 85% of the Fortune 100. Microsoft customers like Spotify, Lexmark, and Airbus choose Azure AI because of its proven enterprise-grade capabilities and innovations, wide range of developer tools and services, and trusted approach.
Stay tuned to learn how Microsoft is enabling developers, data scientists and MLOps and DevOps professionals across all skill levels to increase productivity, operationalize models at scale and innovate faster and more responsibly with Azure machine learning.

Learn more at aka.ms/azureml.

All right, onto the show!

Erez Barak: [00:02:06] Thank you. Great to be here with you, Sam.

Sam Charrington: [00:02:08] I’m super excited about this conversation. We will be diving into a topic that is generating a lot of excitement in the industry and that is Auto ML and the automation of the data science process. But before we dig into that, I’d love to hear how you got started working in ML and AI.

Erez Barak: [00:02:30] It’s a great question because I’ve been working with data for quite a while. And I think roughly about five to 10 years ago, it became apparent that the next chapter for anyone working with data has to weave itself through the AI world.
The world of opportunity with AI is really only limited by the amount of data you have, the uniqueness of the data you have and the access you have to data.
And once you’re able to connect those two worlds, a lot of things like predictions, new insights, new directions, sort of come out of the woodwork.
So seeing that opportunity, imagining that potential, has naturally led me to work with AI. I was lucky enough to join the Azure AI group, and there’s really three focal areas within that group.
One of them is machine learning. How do we enable data scientists of all skills to operate through the machine learning lifecycle, starting from the data to the training, to registering the models, to putting them in production and managing them, a process we call ML Ops. So just looking at that end to end and understanding how we enable others to really go through that process in a responsible, trusted, and known way has been a super exciting journey so far.

Sam Charrington: [00:03:56] And so do you come at this primarily from a data science perspective, a research perspective, an engineering perspective?
Or none of the above? Or all of the above?

Erez Barak: [00:04:07] I’m actually going to go with all of the above. I think it’d be remiss to think that if you come at it only from a data science perspective, and you’re trying to build a product and really looking to build the right set of products for people to use as they go through their AI journey, you’d probably miss out on an aspect of it. If you just think about the engineering perspective, you’ll probably end up with great infrastructure that doesn’t align with any of the data science.
So you really have to think between the two worlds and how one empowers the other. You really have to figure out where most data scientists of all skills need the help, want the help, are looking for tools and products and services on Azure to help them out, and I think that’s the part I find most compelling. Sort of figuring that out and then really going deep where you landed, right? ‘Cause if we end up building a new SDK, we’re going to spend a whole lot of time with our data science customers, our data science internal teams, and figure out, “Well, what should our SDK look like?”
But if you’re building something like Auto ML that’s targeted not only at the deeper data scientist, but also at the deeper-rooted data professionals, you’re going to spend some time with them and understand not only what they need, but also how that applies to the world of data science.

Sam Charrington: [00:05:27] And what were you working on before Azure AI?

Erez Barak: [00:05:31] So before Azure AI, in Microsoft, I worked for a team called Share Data, which really created a set of data platforms for our internal teams. And prior to joining Microsoft, I worked in the marketing automation space, at a company called Optify. And again, the unique assets we were able to bring to the table as part of Optify in the world of marketing automation were always data based. We were always sort of looking at the data assets the marketers had and said, “What else can we get out of it?”
Machine learning wasn’t as prevalent at the time, but you could track back to a lot of what we did at that time and how machine learning would’ve helped if it was used on such a general basis.

Sam Charrington: [00:06:12] Yeah, one of the first machine learning use cases that I worked with was with folks that were trying to do lead scoring and likelihood-to-buy, propensity-to-buy types of use cases. I mean, that’s been going on for a really long time.

Erez Barak: [00:06:30] So we’re on a podcast so you can’t see me smiling, but we did a lot of work around building lead scoring…and heuristics, and manual heuristics, and general heuristics, and heuristics that the customer could customize. And today, you’ve seen that really evolve to a place where there’s a lot of machine learning behind it. I mean, it’s perfect for machine learning, right? You’ve got all this data. It’s fresh. It’s coming in new. There are insights that are really hard to find. Once you start slicing and dicing it by regions or by size of customers, it gets even more interesting, so it has all the makings for machine learning to really make it shine.

Sam Charrington: [00:07:07] Yeah, you are getting pretty excited, I think.

Erez Barak: [00:07:08] Oh, no, no, no. It’s a sweet spot there. Yes.

Sam Charrington: [00:07:12] Nice. You want to dive into talking about Auto ML? Given the level of excitement, demand, and enthusiasm that folks have for Auto ML, not to mention the amount of confusion there is around the topic, I’ve probably not covered it nearly enough on the podcast. Certainly when I think of Auto ML, there’s a long academic history behind the technical approaches that drive it.
But it was really popularized for many with Google’s Cloud Auto ML in 2018, and before that they had this New York Times PR win, an article talking about how AI was going to create itself, and I think that contributed a lot to the hype, for lack of a better term, in this space. But now we see it all over the place, and there are other approaches more focused on citizen data science.
I’d love to just start with how you define Auto ML and what’s your take on it as a space and its role and importance, that kind of thing.

Erez Barak: [00:08:42] Yeah, I really relate to many of the things you touched on. So maybe I’ll start – and this is true for many things we do in Azure AI but definitely for Auto ML – on your point around academic roots. Microsoft has this division called MSR, Microsoft Research, and it’s really a set of researchers who look into bleeding edge topics and drive the world of research in different areas. And that is when we first got, in our team, introduced to Auto ML.
So a subset of that team has been doing research around the Auto ML area for quite a few years. They’ve been looking at it, they’ve been thinking about it. Yes, I’ve heard the sentence, “AI making AI.”
That’s definitely there. But when you start reading into it, like, what does it mean? To be honest, it means a lot of things to many people.
It’s quite overused. I’ll be quite frank. There’s no one industry standard definition that says, “Hmm, here’s what Auto ML is.” I can tell you what it is for us. I can tell you what it is for our customers. I can tell you where we’re seeing it make a ton of impact. And it comes down to using machine learning capabilities in order to help you, the data scientist, create machine learning capabilities in a more efficient, more accurate, more structured fashion.

Sam Charrington: [00:10:14] My reaction to that is that it’s super high level. And it leaves the door open for all of this broad spectrum of definitions that you just talked about. For example, not to over-index on what Google’s been doing, but Cloud Auto ML Vision, when it first came out, was a way for folks to do vision cognitive services, but use some of their own data to tune it. Right? Which is a lot different. In fact, they caught a lot of flak from the academic Auto ML community because they totally redefined what that community had been working on for many years and started creating the confusion.
Maybe a first question is, do you see it as being a broad spectrum of things, or… how do we even get to a definition that separates the personalized cognitive services trained with my own data from this other set of things?

Erez Barak: [00:11:30] I think you could see it in that more general sense, but I would say probably not. I see it as a much more concrete set of capabilities that adhere to a well-known process.
That actually is agreed upon across the industry. When you build a model, what do you do? You get data, you featurize that data. Once the features are in place, you choose a learner, you choose an algorithm. You train that algorithm with the data, creating a model. At that point, you want to evaluate the model, make sure it’s accurate. You want to get some understanding of what are the underlying features that have most affected the model. And you want to make sure, in addition, that you can explain that the model is not biased, that the model is really fair towards all aspects of what it’s looking at.
That’s a well-known process. I think there’s no argument around that in the machine learning field; that’s sort of the end to end. Auto ML allows automating that process. So at its purest, you feed Auto ML the data and you get the rest for free, if you may.
Okay? That would be sort of where we’re heading, where we want to be. And I think that’s at the heart of Auto ML. So, where does the confusion start? I could claim that what we or others do for custom vision follows that path, and it does. I can also claim that some of what we do for custom vision is automated. And then there’s the short hop to say, “Well, therefore it is Auto ML.” But I think that misses the general point of what we’re trying to do with Auto ML. Custom vision is a great example where Auto ML can be leveraged. But Auto ML can be leveraged wherever that end-to-end process happens in machine learning.
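
To make that end-to-end process concrete, here is a minimal sketch of the manual loop Erez describes, using scikit-learn as a stand-in rather than anything Azure-specific; the dataset, column names, and choice of learner are illustrative assumptions, and the interpretation and fairness steps are left as a comment.

```python
# A minimal sketch of the manual end-to-end loop described above, using
# scikit-learn as a stand-in. The data file, columns, and learner are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("customers.csv")                       # 1. get data (hypothetical file)
X, y = df.drop(columns=["churned"]), df["churned"]

featurize = ColumnTransformer([                         # 2. featurize the data
    ("num", StandardScaler(), ["age", "spend"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["region"]),
])

model = Pipeline([                                      # 3. choose a learner / algorithm
    ("features", featurize),
    ("learner", GradientBoostingClassifier()),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model.fit(X_train, y_train)                             # 4. train, creating a model
print(accuracy_score(y_test, model.predict(X_test)))    # 5. evaluate accuracy
# 6. interpretation and fairness checks would follow before deployment
```
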

Sam Charrington: [00:13:27] Nice. I like it. So maybe we can walk through that end to end process and talk about some of the key areas where automation is applied to contribute to Auto ML.

Erez Barak: [00:13:44] So I’d like to start with featurization. And at the end of the day, we want an accurate model. A lot of that accuracy, a lot of the insights we can get, the predictions we can get, and the output we can get from any model really hinges on how effective your featurization is. So many times you hear that, “Well, data scientists spend 80% of their time on data.”
Sam Charrington: Can I put a pin in that? Do you know where that number comes from?
Erez Barak: Oh, of course. Everyone says that’s the number, everyone repeats it. It’s a self-fulfilling prophecy. I’m going to say 79% of it just to be sure. But I think it’s more of an urban legend at this point. I am seeing customers who do spend that kind of percentage. I am seeing experiments rerun that take that amount of time. But generalizing that number is too far of a stretch to make now.

Sam Charrington: [00:14:42] I was thinking about this recently, and wondering if there’s some institute for data science that’s been tracking this number over time. It would be interesting to see how it changes over time I think is the broader curiosity.

Erez Barak: [00:14:55] It would. I should go figure that out.
[laughs]
So anyone who builds a model can quickly see the effect of featurization on the output. Now, a lot of what’s done, when building features, can be automated. I would even venture to say that a part of it can be easily automated.

Sam Charrington: [00:15:24] What are some examples?

Erez Barak: [00:15:25] Some examples are like, “I want to take two columns and bring them together into one.” “I want to change a date format to better align with the rest of my columns.” And even an easy one, “I’d like to enhance my data with some public holiday data when I do my sales forecasting, because that’s really going to make it more accurate.” So it’s more data enhancement, but you definitely want to build features into your data to do that.
So getting that right is key. Now start thinking of data sets that have many rows, but more importantly have many columns. Okay? And then the problem gets harder and harder. You want to try a lot more options. There are a lot more ways of featurizing the data. Some are more effective than others. Like recently in Auto ML, we have incorporated the BERT model into our auto featurization capability. Now that allows us to take text data we use for classification and quickly featurize it. It helps us featurize it in a way that requires less input data to come in for the model to be accurate. I think that’s a great example of how deep and how far that can go.
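
For illustration, the featurization steps listed above map to very ordinary data-preparation code; the sketch below uses pandas with made-up columns and a made-up holiday table, not Auto ML’s actual featurizers.

```python
# A rough illustration of the kinds of featurization steps mentioned above.
# The sales data, column names, and holiday table are invented for the example.
import pandas as pd

sales = pd.DataFrame({
    "order_date": ["2019-11-29", "2019-12-25"],
    "city": ["Seattle", "Orlando"],
    "state": ["WA", "FL"],
    "units": [120, 80],
})

# "Take two columns and bring them together into one"
sales["location"] = sales["city"] + ", " + sales["state"]

# "Change a date format to better align with the rest of my columns"
sales["order_date"] = pd.to_datetime(sales["order_date"])
sales["day_of_week"] = sales["order_date"].dt.dayofweek

# "Enhance my data with some public holiday data" for sales forecasting
holidays = pd.DataFrame({"date": pd.to_datetime(["2019-12-25"]),
                         "holiday": ["Christmas"]})
sales = sales.merge(holidays, left_on="order_date", right_on="date", how="left")
sales["is_holiday"] = sales["holiday"].notna()
```
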

Sam Charrington: [00:16:40] You mentioned that getting that featurization right is key. To what extent is it an algorithmic or methodological challenge versus a computational challenge? If you can even separate the two. Meaning, there’s this trade off between… Like we’ve got this catalog of recipes, like combining columns and bending things and whatever, that we can just throw at a data set that looks like it might fit. Versus more intelligent or selective application of techniques based on nuances, whether pre-defined or learned, about the data.

Erez Barak: [00:17:28] So it extends on a few dimensions. I would say there are techniques. Some require more compute than others. Some are easier to get done. Some require a deeper integration with existing models like I mentioned BERT before, to be effective. But that’s only one dimension. The other dimension is the fit of the data into a specific learner. So we don’t call it experiments in machine learning for nothing. We experiment, we try. Okay? Nobody really knows exactly which features would affect the model in a proper way, would drive accuracy. So there’s a lot of iteration and experimentation being done. Now think of this place where you have a lot of data, creating a lot of features and you want to try multiple learners, multiple algorithms if you may. And that becomes quickly quite a mundane process that automating can really, really help with.
And then add on top of that, we’re seeing more and more models created with more and more features. The more features you have, the more nuanced you can get about describing your data, the more nuanced the model can get about predicting what’s going to happen next. We’re now seeing models with millions and billions of features coming out. Now, Auto ML is not yet prepared to deal with the billion-feature model, but we see that dimension extend. So extend compute, one; extend the number of iterations you would have; extend the number of features you have. Now you’ve got a problem that’s quickly going to be referred to as mundane. Hard to do. Repetitive. Doesn’t really require a lot of imagination. Automation just sounds perfect for that. So that’s why one of the things we went after in the past, I’d say, six to twelve months is getting featurization to a place where you do a lot of auto featurization.

Sam Charrington: [00:19:22] I’m trying to parse the extent to which, or whether, you agree with this dichotomy that I presented. You’ve got this mundane problem that, if a human data scientist were doing it, would be just extremely iterative, and certainly one way of automating is to just do that iteration a lot quicker because the machine can do that.
Another way of automating is… let’s call it more intelligent approaches to navigating that feature space or that iteration space, and identifying through algorithmic techniques what are likely to be the right combinations of features as opposed to just throwing the kitchen sink at it and putting that in a bunch of loops. And certainly that’s not a dichotomy, right? You do a bit of both. Can you elaborate on that trade off or the relationship between those two approaches? Is that even the right way to think about it or is that the wrong way to think about it?

Erez Barak: [00:20:33] I think it’s definitely a way to think about it. I’m just thinking through that lens for a second. So I think you described the brute force approach to it, on one side. The other side is how nuanced can you get about it?
So what we know is you can get quite nuanced. There are things that are known to work, things that are not known to work. Things that work with a certain type of data set that don’t work with another. Things that work with a certain type of data set combined with a learner that don’t work with others. So as we build Auto ML, I talked about machine learning used to help with machine learning. We train a model to say, “Okay, in this kind of event, you might want to try this kind of combination first.” Because if you’re… I talked about the number of features, brute force is not an option. So we have to get a lot more nuanced about it, so what Auto ML does is, given those conditions if you may, or those features for that model, it helps shape the right set of experiments before others. That’s allowing you to get to a more accurate model faster.
So I think that’s one aspect of it. I think another aspect, which you may have touched on, and I think is really important throughout Auto ML, but definitely in featurization, is why people are excited about it. The next thing you’re going to hear is, “I want to see what you did.” And you have to show what kind of features you used.
And what quickly follows is, “I want to change feature 950 out of the thousand features you gave me. And I want to add two more features at the end because I think they’re important.” That’s where my innovation as a data scientist comes into play.
So you’ve got to, and Auto ML allows you to do that, be able to open up that aspect and say, “Here’s what I’ve come up with. Would you like to customize? Would you like to add? Would you like to remove?” Because that’s where you as a data scientist shine and are able to innovate.

Sam Charrington: [00:22:39] So we started with featurization. Next step is learner/model selection?

Erez Barak: [00:22:45] I think it’s probably the best next step to talk about. Yes. I think there’s a lot of configuration that goes into this, like how many iterations do I want to do, for instance. How accurate do I want to get? What defines accuracy? But those are more manual parameters we ask the user to add to it. But then automation again comes into play with learner selection. So putting Auto ML aside, what’s going to happen? Build a set of features, choose a learner, one that I happen to know is really good for this kind of problem, and try it out. See how accurate I get. If it doesn’t work, but even if it works, you are going to try another. Try another few. Try a few options. At the heart of it, that’s what Auto ML does.
Now, going to what we talked about in featurization, we don’t take a brute force approach. We have a model that’s been trained over millions of experiments, that sort of knows what would be a good first choice given the data, given the type of features, given the type of outcome you want. What do we try first? Because people can’t just run an endless number of iterations. It takes time, it takes cost, and frankly it takes a lot of the ROI out of something you expect from Auto ML.
So you want to get there as fast as possible based on learnings from the past. So what we’ve automated is that selection. Put in the data, set a number of iterations or don’t set them. We have a default number that goes in. And then start using the learners based on the environment we’re seeing out there, choosing them from that other model we’ve trained over time.
By the way, that’s a place where we really leaned on the outputs we got from MSR. That’s a place where they, as they were defining Auto ML, as they were researching it, really went deep into, and really sort of created assets we were then able to leverage. A product sort of evolves over time and the technology evolves over time, but if I have to pick the most, or the deepest rooted area, we’ve looked at from MSR, it’s definitely the ability to choose the right learner for the right job with a minimal amount of compute associated with it if you may.
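
As a rough sketch of how this surfaces to a user, the Azure Machine Learning Python SDK (v1-style azureml.train.automl) lets you hand over the data, the metric that "defines accuracy," and an iteration budget, and the service handles featurization and learner selection; exact parameter names vary by SDK version, and the workspace, dataset, and compute names below are assumptions.

```python
# A hedged sketch of submitting an Auto ML classification run with the Azure ML
# Python SDK (v1-style). Parameter names can differ across SDK versions; the
# workspace config, registered dataset, and compute target names are assumptions.
from azureml.core import Workspace, Experiment, Dataset
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()                        # existing workspace (assumed)
train_ds = Dataset.get_by_name(ws, "churn-train")   # hypothetical registered dataset

automl_config = AutoMLConfig(
    task="classification",            # one of the three core use cases
    training_data=train_ds,
    label_column_name="churned",
    primary_metric="AUC_weighted",    # "what defines accuracy"
    iterations=30,                    # "how many iterations do I want to do"
    featurization="auto",             # the auto featurization discussed earlier
    compute_target="cpu-cluster",     # hypothetical compute name
)

run = Experiment(ws, "automl-demo").submit(automl_config)
run.wait_for_completion(show_output=True)
best_run, fitted_model = run.get_output()   # the learner Auto ML selected
```
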

Sam Charrington: [00:24:59] And what are some of the core contributions of that research if you go to the layer deeper than that?

Erez Barak: [00:25:10] Are you asking in context of choosing a model or in general?

Sam Charrington: [00:25:13] Yeah, in the context of choosing a model. For example, as you described what is essentially a learner learning which model to use, that created a bunch of questions for me, like, “Okay, how do you represent this whole thing? What are the features of that model? And what is the structure of that model?” And I’m curious if that’s something that came out of MSR or whether that was more from the productization, and if there are specific things that came out of that MSR research that come to mind as being pivotal to the way you think about that process.

Erez Barak: [00:25:57] So I recall the first version coming out of MSR wasn’t really the end-to-end product, but at the heart of it was this model that helps you pick learners as it relates to the type and size of data you have and the type of target you have. This is where a lot of the research went. This is where we published papers around, “Well, which features matter when you choose that?” This is where MSR went and collected a lot of historical data around people running experiments and trained that model.
So the basis at the heart of our earliest versions, we really leaned on MSR to get that model in place. We then added the workflow to it, the auto featurization I talked about, some other aspects we’ll talk about in a minute, but at the heart of it, they did all that research to understand… Well, first train that model. Just grabbing the data.

Sam Charrington: [00:26:54] And what does that model look like? Is it a single model? Is it relatively simple? Is it fairly complex? Is it some ensemble?

Erez Barak: [00:27:06] I’ll oversimplify a little bit, but it profiles your data and it profiles your features. It looks at the kind of outcome you want to achieve. Am I doing time series forecasting here? Am I doing classification? Am I doing regression? That really matters. And based on those features it picks the first learner to go after.
Then what it does is uses the result of that first iteration, which included all the features I’m talking about, but also now includes, “Hey, I also tried learner X and I got this result.” And that helps it choose the next one. So what happens is you look at the base data you have, but you constantly have additional features that show you, “Well, what have I tried and what were the results?”
And then the next learner gets picked based on that. And that gets you in a place where the more you iterate, the closer you get to the learner that gives you a more accurate result.
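
The loop described here, profile the data, let a trained recommender propose the next learner, and feed each result back in, can be illustrated with a toy, runnable stand-in; below, a simple "pick the next untried candidate" heuristic plays the role of the real meta-model, purely to show the shape of the iteration.

```python
# A toy stand-in for the guided learner-selection loop described above. The real
# Auto ML recommender is a trained meta-model over past experiments; here a trivial
# "try the next untried learner" rule stands in, just to show the loop structure.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
candidates = [LogisticRegression(max_iter=1000),
              RandomForestClassifier(),
              GradientBoostingClassifier()]

history = []                                     # (learner, score) from past iterations
for _ in range(len(candidates)):
    tried = {type(l).__name__ for l, _ in history}
    # Stand-in "recommender": pick the next untried learner. A real meta-model
    # would rank candidates using the data profile plus the scores seen so far.
    learner = next(l for l in candidates if type(l).__name__ not in tried)
    score = cross_val_score(learner, X, y, cv=3).mean()   # one experiment
    history.append((learner, score))

best_learner, best_score = max(history, key=lambda t: t[1])
print(type(best_learner).__name__, round(best_score, 3))
```
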

Sam Charrington: [00:28:14] So I’m hearing elements of both supervised learning. You have a bunch of experiments and the models that were chosen ultimately, but also elements of something more like simple reinforcement learning, contextual bandits, explore, exploit kind of things as well.

Erez Barak: [00:28:37] It definitely does both. If I could just touch on one point: reinforcement learning, as it’s defined, I wouldn’t say we’re doing reinforcement learning there. That said, we’re definitely… every time we have an iteration going, or every X times we have that, we do fine-tune the training of the model so it learns as it runs more and more.
So I think reinforcement learning is a lot more reactive. But taking that as an analogy, we do sort of continuously collect more training data and then retrain the model that helps us choose better and better over time.

Sam Charrington: [00:29:15] Interesting. So we’ve talked about a couple of these aspects of the process. Feature engineering, model selection, next is once you’ve identified the model, tuning hyper-parameters and optimization. Do you consider that its own step or is that a thing that you’re doing all along? Or both?

Erez Barak: [00:29:38] I consider it part of that uber process I talked about earlier. We’re just delving into starting to use deep learning learners within Auto ML. So that’s where we’re also going to automate the parameter selection, the hyper-parameter selection. A lot of the learners we have today are classic machine learning, if you may, so that’s where hyper-parameter tuning is not as applicable. But saying that, every time we see an opportunity like that, I think I mentioned earlier in our forecasting capabilities, we’re now adding deep learning models. In order to make the forecasting more accurate, that’s where that tuning will also be automated.

Sam Charrington: [00:30:20] Okay, actually, elaborate on that. I think we chatted about it pre-interview, but you mentioned that you’re doing some stuff with TCN and ARIMA around time series forecasting. Can you elaborate on that?

Erez Barak: [00:30:36] Yeah, so I talked about this process of choosing a learner. Now you also have to consider what your possible set of learners to choose from is. And what we’ve added recently are sort of deep learning models, or networks, that are actually used within that process. So TCN and ARIMA are quite useful when doing time series forecasting. They really drive the accuracy based on the data you have. So we’ve now embedded those capabilities within our forecasting capability.
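
As a hedged sketch, a forecasting run that can consider ARIMA-style and TCN learners might be configured roughly as below with the v1 Azure ML SDK; parameter behavior may differ by version, and the dataset, column, horizon, and compute names are assumptions.

```python
# A hedged sketch of an Auto ML forecasting run that can consider ARIMA-style
# and deep (TCN) learners. Names follow the v1 SDK as best understood; the
# dataset, time/label columns, horizon, and compute target are assumptions.
from azureml.core import Workspace, Experiment, Dataset
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.forecasting_parameters import ForecastingParameters

ws = Workspace.from_config()
sales_ds = Dataset.get_by_name(ws, "store-sales")   # hypothetical time series dataset

forecasting_params = ForecastingParameters(
    time_column_name="order_date",
    forecast_horizon=14,             # e.g., predict two weeks ahead
)

config = AutoMLConfig(
    task="forecasting",              # the third core use case
    training_data=sales_ds,
    label_column_name="units",
    primary_metric="normalized_root_mean_squared_error",
    forecasting_parameters=forecasting_params,
    enable_dnn=True,                 # allows deep learners such as the TCN to be tried
    compute_target="cpu-cluster",
)

run = Experiment(ws, "automl-forecasting-demo").submit(config)
```
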

Sam Charrington: [00:31:12] So when you say within forecasting, meaning a forecasting service that you’re offering as opposed to within…

Erez Barak: [00:31:21] No, let me clarify. There are three core use cases we support as part of Auto ML. One for classification, the other for regression, and the third for time series forecasting. So when I refer to that, I was referring more to that use case within Auto ML.

Sam Charrington: [00:31:42] Got it. So in other words, in the context of that forecasting use case, as opposed to building a system that is general and applying it to time series using more generalized models, you’re now using TCN and ARIMA as core to that, which are long-proven models for time series forecasting.

Erez Barak: [00:32:07] Yeah, I would argue they’re also a bit generalized, but in the context of forecasting. But let me tell you how we’re thinking about it. There’s generally applicable models.
Now, we’re seeing different use cases like in forecasting there are generally applicable models for that area, that are really useful in that area. That’s sort of the current state we’re in right now. And we want to add a lot more known generally applicable models to each area. In addition to that, sort of where we’re heading and as I see this moving forward, more and more customers will want to add their own custom model.
“We’ve done forecasting for our manufacturing. We’ve tuned it to a place where it’s just amazing for what we do because we know a lot more about our business than anyone else. We’d like to put that in the mix every time your Auto ML considers the best option.” I think we’re going to see, I’m already seeing, a lot of that, sort of the ‘bring your own model’. It makes sense.

Sam Charrington: [00:33:06] That’s an interesting extension to bring your own data, which was one of the first frontiers here.

Erez Barak: [00:33:11] I mean, you’re coming into a world now where it’s not, “Hey, there’s no data science here.” There’s a lot of data science going on. So, “I’m the data scientist. I’ve worked on this model for the past, you name it, weeks? Months? Years? And now this Auto ML is really going to help me be better?” I don’t think that’s a claim we even want to make. I don’t think that’s a claim that’s fair to make.
The whole idea is find the user where they are. You have a custom model? Sure, let’s plug that in. It’s going to be considered with the rest in a fair and visible way, maybe with the auto featurization it even goes and becomes more accurate. Maybe you’ll find out something else, you want to tune your model. Maybe you have five of those models, and you’re not sure which one is best so you plug in all five. I think that’s very much sort of where we’re heading, plugging into an existing process that’s already deep and rich wherever it lands.

Sam Charrington: [00:34:07] The three areas that we’ve talked about, again featurization, model selection, and parameter tuning or optimization, are, I think, what we tend to think of as the core of Auto ML. Do you also see it playing in the tail end of that process, like deployment, after the model’s deployed? There are certainly opportunities to automate there. A lot of that is very much related to DevOps and that kind of thing, but are there elements of that that are more like what we’re talking about here?

Erez Barak: [00:34:48] I think there are two steps, if you don’t mind, I’ll talk about two steps before that.
I think there’s the evaluation of the model. Well, how accurate is it, right? But again you get into this world of iterations, right? So that’s where automation is really helpful. That’s one.
The other is sort of the interpretation of the model. That’s where automation really helps as well. So now, especially when I did a bunch of automation, I want to make sure, “Well, which features really did affect this thing? Explain them to me.” And work that into your automated processes. Did your process provide a fair set of data for my model to learn from? Does it represent all genders properly? Does it represent all races properly? Does it represent all aspects of my problem, and use them in a fair way? Where do you see imbalance?
So I think automating those pieces comes right before we jump into deployment. I think it’s really mandatory when you do Auto ML to give that full picture. Otherwise, you’re sort of creating the right set of tools, but I feel that without doing that, you’re falling a bit short of providing everyone the right checks and balances to look at the work they’re doing. So when I generalize the Auto ML process, I definitely include that.
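
To illustrate the kind of checks being described, the sketch below uses scikit-learn’s permutation importance and Fairlearn’s MetricFrame as stand-ins for the interpretation and fairness views; the model, test data, and "gender" column are assumed to exist from an earlier training step.

```python
# A sketch of post-training interpretation and fairness checks, using scikit-learn
# and Fairlearn as stand-ins for the views discussed above. `model`, `X_test`,
# `y_test`, and the "gender" column are assumptions from an earlier step.
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame

# "Which features really did affect this thing? Explain them to me."
importances = permutation_importance(model, X_test, y_test, n_repeats=10)
for name, score in zip(X_test.columns, importances.importances_mean):
    print(f"{name}: {score:.3f}")

# "Does it represent all genders properly? Where do you see imbalance?"
frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y_test,
    y_pred=model.predict(X_test),
    sensitive_features=X_test["gender"],
)
print(frame.by_group)        # accuracy broken out per group
print(frame.difference())    # largest gap between groups
```
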
Back to your question on, do I see deployment playing there? To be honest, I’m not sure. I think definitely the way we evaluate success is we look at the models deployed with Auto ML, or via Auto ML, or that were created via Auto ML and are now deployed. We look at their inferences. We look at their scoring, and we provide that view to the customer to assess the real value of their model.
Automation there I think if I have to guess, yes. Automation will stretch there. Do I see it today? Can I call it that today? Not just yet.

Sam Charrington: [00:36:54] Well, there’s a lot of conversation around this idea of deploying a model out into production, and thankfully I think we’ve convinced people that it’s not just deploy once and then you’re not thinking about it anymore. You have to monitor the performance of that model, and there’s a limited lifespan for most of the models that we’re putting into production. And then the next thing that folks get excited about is, “Well, I can just see when my model falls out of tolerance and then auto-retrain…” It’s one of these things where everyone’s talking about it, but few are actually doing it. It sounds like you’re in agreement with that, that we’re not there yet at scale, or no?

Erez Barak: [00:37:42] So I think we often refer to that world as the world of ML Ops, machine learning operations, in a more snappy way. I think there’s a lot of automation there. If you look at automation, you do DevOps for just code, I mean, forget machine learning code; that kind of automation for code, let alone models, is very much something we need.
I do think there’re two separate loops that have clear interface points. Like deployed models, like maybe data about data drift.
But they sort of move in different cycles at different speeds. So we’re learning more about this but I suspect that iteration of training, improving accuracy, getting to a model where the data science team says, “Oh, this one’s great. Let’s use that.”
I suspect that’s one cycle, and frankly that’s where we’ve been hyper-focused on automating with Auto ML. There’s naturally another cycle of that, operations, where we’re sort of looking at automation opportunities with ML Ops. Do they combine into one automation cycle? Hmm, I’m not sure.

Sam Charrington: [00:38:58] But it does strike me that, for example, the decision “Do I retrain from scratch? Do I incrementally retrain? Do I start all the way over?” could maybe be driven by some patterns or characteristics in the nature of the drift, or the performance shift, that a model could be applied to. And then there are aspects of what we’re thinking about and talking about as Auto ML that are applied to that DevOps-y part. Who knows?

Erez Barak: [00:39:37] No, I’d say who knows. Listening to you, I’m thinking to myself that while we sort of have a bit of a fixed mindset on the definition, we’d definitely need to break through some of that and open up and see, “Well, what is it that we’re hearing from the real world that should shape what we automate, how we automate, and under which umbrella we put it?” I think, and you will notice, it’s moving so fast, evolving so fast. I think we’re just at the first step of it.

Sam Charrington: [00:40:10] Yeah. A couple quick points that I wanted to ask about. Another couple areas that are generating excitement under this umbrella are neural architecture search and neuroevolution and techniques like that. Are you doing anything in those domains?

Erez Barak: [00:40:30] Again, we’re incorporating some of those neural architectures into Auto ML today. I talked about our deeper roots with MSR and how they got us that first model. Our MSR team is very much looking deeper into those areas. They’re not things that have formulated just yet, but the feeling is that the same concepts we put into Auto ML, or automated machine learning, can be used there, can be automated there. I’m being a little vague because it is a little vague for us, but the feeling is that there is something there, and we’re lucky enough to have the MSR arm where, when there’s a feeling there’s something there, some research starts to pan out. They’re thinking of different ideas there, but to be frank, I don’t have much to share at this point in terms of more specifics yet.

Sam Charrington: [00:41:24] And my guess is we’ve been focused on this Auto ML as a set of platform capabilities that helps data scientists be more productive. There’s a whole other aspect of Microsoft delivering cognitive services for vision and other things, where they’re using Auto ML internally and where it’s primarily deep learning based, and I can only imagine that they’re throwing things like architecture search and things like that at the problem.

Erez Barak: [00:41:58] Yeah. So they do happen in many cases; I think custom vision is a good example. We don’t see the general patterns just yet, and for the ones we do see, the means of automation haven’t panned out yet. So if I look at where we were with the Auto ML thinking probably a few years back, that’s where that is right now. Meaning, “Oh, it’s interesting. We know there’s something there.” The question is how we further evolve it into something more specific.

Sam Charrington: [00:42:30] Well, Erez, thanks so much for taking the time to chat with us about what you’re up to. Great conversation and learned a ton. Thank you.

Erez Barak: [00:42:38] Same here. Thanks for your time and the questions were great. Had a great time.