I just got home last night from San Francisco, this time returning from the KeyBanc Capital Markets Emerging Technology Summit. As a participant in their MOSAIC Industry Leaders program, I joined one-on-one and group meetings with their institutional investor clients as a subject-matter expert on ML and AI.
Having spent a couple of packed days answering questions from various angles about the state of the market and the various vendors serving it, I thought I’d report here on some of the key themes that arose from their questions, along with a snapshot of my take on each:
AI and the cloud.
- This came up in nearly every discussion. I’ve shared my take on the cloud ML/AI stack in previous newsletter issues. In a nutshell, I think applications and data are quickly moving to the cloud, and thus ML and AI will move to the cloud as well. The big cloud vendors (AWS, Microsoft, Google, and IBM) all have similar offerings at each layer of the stack. Thus, in these early days, most enterprise customers are choosing a cloud first based on broad criteria and making do with that cloud’s data stack. That tends to advantage AWS and, to a lesser extent, Microsoft. Highly technical buyers and enterprises making a decision based primarily on the data stack often go with Google. IBM shops and those looking for complete solutions often turn to that vendor. Google is betting big on TensorFlow, seeding the market with it and trying to build the best cloud on which to run it and the models it produces. (They’re making the same bet with Kubernetes, too.) In both cases, the market recognizes the threat and is responding in ways that impede Google’s ability to translate open source success into cloud dominance. AWS throwing its weight behind ONNX, the Open Neural Network Exchange project, which promises framework interoperability, and launching EKS, its managed Kubernetes service, are two examples.
AI chips and hardware.
- Another topic that came up often. NVIDIA, with its GPUs, is the clear leader in the deep learning acceleration market, and they’ve got everyone and their brother gunning for them. They’ve built a sizable moat with CUDA and its ecosystem, and all the major deep learning frameworks are built to take advantage of it. I theorize, though, that the significance of this moat shrinks in a hyperscale (i.e., cloud-first) world. None of the cloud giants want to be beholden to a single source, and they’re all building their own chips (e.g., Google’s TPU) to reduce their dependency on NVIDIA, lower costs, and increase performance. NVIDIA knows this and has been running like hell to improve its price/performance. Based on the hyperscale assumption, I’m bearish on the prospects of independent chip vendors like Graphcore, Cerebras, and Wave Computing as long-term, sustainable, independent companies. That said, if they can get their products to market at scale quickly enough and carve out their respective niches, plenty of interesting exit opportunities remain. Intel remains a wildcard in this space. They own the broader server and datacenter CPU market, have great enterprise and hyperscale relationships, and generally bring tremendous scale and resources to the fight. They were slow off the starting blocks, are weighed down by their scale, and face the classic innovator’s dilemma at every turn. Yet they recognize the threat to their business and are running hard (and acquiring) to catch up. They’re looking to edge out NVIDIA with projects like nGraph, which aspires to compile deep neural networks written in any framework to run efficiently on any hardware backend, including current and future CPUs, GPUs, and accelerators.
Winners and losers.
- Even when it wasn’t asked directly, underlying all the investor questions was a desire to suss out the winners and losers in the shift to AI. While
<CYA-disclaimer>
I wasn’t dispensing any investment advice</CYA-disclaimer>
, if you buy into an AI-everywhere and cloud-first vision of how this all plays out, there are some clear winners and losers. Cloud vendors that can differentiate themselves and execute well enough to retain scale will win big in ML and AI. For startups in the ML/AI tools space, the path to a long-term, sustainable, independent company is a difficult one, as they’re squeezed by both open source and the cloud (increasingly funded by the same adversaries). But the market is still young, so, again, there’s an opportunity to differentiate, create outsized customer value, and still win. Even bigger winners, IMO, are application providers, such as (but not limited to) SaaS vendors, who are early to AI and build it deeply into their products. This lets them better serve their customers and build superior proprietary data sources, a virtuous cycle.
These were the main themes that surfaced in my couple of days at the event. AI applications, databases, big data platforms, and more came up as well, but I’ll save these for future articles.
Please share any questions you have about the AI market, or any comments on my take.