Microsoft Ignite 2023 was one part a celebration of yearlong technological innovation, one part general availability announcements for previously previewed products, one part vision, one part ecosystem and four parts copilots everywhere. Copilots promise a historic, software-led productivity increase. Perhaps for the first time in industry history, huge demand for software is coinciding with the ability to make software easier to write. Just as AWS turned the data center into an API, copilots are turning software development into natural language, enabling many more people to create. The implications for productivity are massive and, we believe, will kick off a new wave of growth that becomes increasingly noticeable throughout 2024.
In this Breaking Analysis we give you our impressions of Microsoft Ignite 2023. theCUBE Research Analyst George Gilbert and CUBE Collective contributor Sarbjeet Johal both weighed in for this episode, and we'll also share some recent ETR data that shows the progression of some of the major AI players over the past twelve months and the relative impact Gen AI has had on each of their businesses.
Satya’s Keynote Underscored Microsoft’s Current Gen AI Lead
As usual, Satya was a strong presenter. His main focus was on the AI copilot stack that is going to supercharge the next wave of innovation. There were several "we're announcing the general availability of…" types of announcements, including Azure Boost, Microsoft's server offload engine (i.e. similar to AWS Nitro); Fabric, Microsoft's modern data platform; and Copilots for 365 and Studio…in addition to more than 100 new updates.
Jensen Huang was on stage doing his thing, talking about the NVIDIA supercomputer clusters jointly built with Microsoft. Importantly, this is not Azure infrastructure…it's not Azure Boost. Rather, it runs on NVIDIA systems infrastructure with a thin layer of Azure software to help OpenAI train and run its models. Microsoft is currently winning in LLM training and inference, perhaps not so much out of foresight but because it plugged internal holes and outsourced the infrastructure to NVIDIA, which has put it in a really good position.
Where was Sam Altman? Now we Know…
At the time of recording the video for this post (Thursday evening), we noted that while Jensen was on stage with Satya, Sam Altman, CEO of OpenAI, was not. We found that interesting, particularly after watching OpenAI's recent launch event, where we felt Sam Altman somewhat underplayed Satya's presence. Of course, on Friday afternoon we saw the news of Altman's ouster from OpenAI.
Azure Follows the Leader in Infrastructure then Pulls a Judo Move with its AI Stack
Microsoft’s Graph Connects all the Pieces and is a Linchpin of its Competitive Advantage
Just like AWS, Azure now has its equivalents of a Nitro, a Graviton, an Inferentia and a Trainium. The custom chips we knew were coming are finally here. Satya spent most of his time double clicking into the AI stack, which looks like this:
We don’t have the time today to dig in too deep but let’s say a few things here starting at the bottom and moving up.
Satya gave a nice commercial for Azure as the world's computer and reiterated the commitment to have 100% of energy usage be renewable by 2025. He talked about the network and the hollow core fiber Microsoft is manufacturing; the servers and chips, which we'll discuss in a moment, along with alternatives from AMD and Nvidia; and he went all the way up the stack into the data layer, spending a lot of time on the copilots.
The Graph is a Semantic Layer that Connects all the Elements & Enables Copilots to Act
The interesting thing we see evolving is the Microsoft Graph, which connects all apps, services and the infrastructure that supports them. It's essentially a semantic layer that makes all the elements and the data feeding them coherent. The reason this is important is that all these copilots work on the graph, and it allows them to take action. The idea is the copilots know what to do and can be a system of agency that acts with fidelity and confidence because the data is all coherent and trusted.
Satya said:
The way to think about this is copilot will be the new UI that helps us gain access to the world’s knowledge and your organization’s knowledge. But most importantly, it’s your agent that helps you act on that knowledge.
That is enabled by the Microsoft knowledge graph.
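To make the idea of an agent acting on organizational knowledge concrete, here is a minimal sketch of how a copilot-style agent might ground itself in a user's data via the public Microsoft Graph REST API (v1.0). This is illustrative only: token acquisition (e.g. via MSAL) is assumed to have happened elsewhere, the dates and token are placeholders, and no network call is made.

```python
# Hypothetical sketch: grounding a copilot-style agent in organizational data
# via the Microsoft Graph REST API (v1.0). The access token is assumed to have
# been acquired elsewhere (e.g. with MSAL); we only assemble the request here.

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_graph_request(resource: str, token: str, params: dict = None) -> dict:
    """Assemble the URL and headers for a Microsoft Graph GET request."""
    query = ""
    if params:
        query = "?" + "&".join(f"{k}={v}" for k, v in params.items())
    return {
        "url": f"{GRAPH_BASE}/{resource}{query}",
        "headers": {"Authorization": f"Bearer {token}"},
    }

# An agent answering "what meetings do I have today?" might issue a request
# against the calendarView endpoint, scoped to the user's day:
req = build_graph_request(
    "me/calendarView",
    token="<access-token>",
    params={
        "startDateTime": "2023-11-16T00:00:00",
        "endDateTime": "2023-11-17T00:00:00",
    },
)
print(req["url"])
```

Because every app and service hangs off the same graph, the same pattern (swap `me/calendarView` for `me/messages`, `me/drive`, and so on) lets a single agent reason across mail, files and meetings with one coherent data layer underneath.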
The services underneath in Azure support the upper layers of the stack and drive consumption of compute, storage, networking, database and all the platform services that power not only all the productivity software but also the copilots that consume all these resources.
This architecture and its self-supporting model from infrastructure to software powered by autonomous AI is a massive flywheel for the consumption of Azure services.
Chip Wars Heat up
Let’s briefly talk about the custom silicon Microsoft has announced.
Two chips were announced. Maia, the AI chip for inference and training, is manufactured on a five-nanometer TSMC process and has 105 billion transistors. It's being packaged in a data center configuration with closed-loop cooling that can be retrofitted into existing data center infrastructure. That's not unique to Microsoft, by the way; we've seen other vendors take a similar approach…but it's cool – no pun intended.
The Azure Cobalt CPU, on the right in the above picture, is a 128-core chip built on an Arm Neoverse design, intended for general cloud services on Azure.
As we said earlier, we now have the Azure version of Nitro virtualization and offload, AI chips analogous to Inferentia and Trainium, and a Graviton counterpart in Cobalt. A big question is how much of a lead AWS, which announced Graviton in 2018 and its AI chips in 2019 and 2021, actually has. Cobalt is Arm-based, so time to tape out will be compressed, and if Microsoft can line up foundry capacity, which it appears to have done, perhaps it can close the gap on AWS.
Custom silicon is critical because hyperscalers will optimize workloads through integration and develop features that confer unique advantage to their clouds.
High Level Puts and Takes from Ignite 2023
Some of the high level themes at Ignite are shared below.
Microsoft has an offering for professional developers with its GitHub Copilot. It has Copilot Studio, a platform for citizen developers; Copilot for 365 for its users; its search copilot (bye bye Bing); and Azure ops copilots, including a new security copilot…copilots everywhere, with a promise of vertical market copilots to come.
By the way, Microsoft announced a number of security products at Ignite 2023; while some are playing catch-up, they're still essential. Check out this post by SiliconANGLE security journalist David Strom for more info.
As well there was lots of emphasis on ecosystems from infrastructure partners to ISVs.
We talked earlier about the resource graph and its power.
A big takeaway ahead of re:Invent is the dynamics of the LLM market are evolving quickly. Microsoft has a differentiated and leading strategy thanks to its OpenAI investment and the pace Microsoft is moving at is AWS-like. The integration of services and apps via the semantic graph and the juxtaposition relative to AWS’ diverse, choice-oriented ethos is notable.
But the door is still open for AWS the week after Thanksgiving to show its stuff. Amazon will have the last word in AI in 2023 at the show. They have to combat the narrative that AWS is the old guard cloud. Our guess is we’ll see a strong showing from Amazon as usual but the pressure is on and the clock is ticking.
Keeping up with the AI Joneses
To underscore the importance of not falling too far behind in the AI race, let’s bring in some ETR data to show what’s happened since the announcement of ChatGPT.
The graphic above uses a format we've shown many times. It depicts machine learning and AI spending patterns among some of the leading platforms. The vertical axis is Net Score, or spending momentum, and the horizontal axis is an indicator of presence in the data set, determined by the number of mentions (N) in the quarterly survey of more than 1,700 IT decision makers.
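For readers new to the methodology, Net Score is, as we understand it, essentially the percentage of accounts adding or increasing spend on a platform minus the percentage decreasing or replacing it. The sketch below is our own simplified illustration with made-up response data, not ETR's actual calculation or survey results.

```python
# Simplified illustration of a Net Score-style spending momentum metric.
# The response categories and sample data are hypothetical, for intuition only.

def net_score(responses: list) -> float:
    """Percent of accounts adding/increasing spend minus percent decreasing/replacing."""
    positive = sum(r in ("adopting", "increasing") for r in responses)
    negative = sum(r in ("decreasing", "replacing") for r in responses)
    return 100.0 * (positive - negative) / len(responses)

# Hypothetical survey of 100 accounts for some platform:
sample = (
    ["adopting"] * 40
    + ["increasing"] * 30
    + ["flat"] * 20
    + ["decreasing"] * 5
    + ["replacing"] * 5
)
print(net_score(sample))  # 60.0
```

A reading like OpenAI's, well above the norm, implies an unusually large share of accounts in the adopting/increasing buckets relative to those pulling back, which is what makes the chart position so striking.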
In the upper right you can see OpenAI. ETR started tracking OpenAI in July of 2023 and you can see where it is today. This is an astoundingly strong Net Score. Literally off the charts.
Also note the position of Microsoft, just below OpenAI and actually more ubiquitous on the X axis. But look where Microsoft was in October 2022. Compare that with AWS, whose moves, while real, are not nearly as significant as those made by Microsoft and even Google. As we said in our research note last week, there appears to be a correlation between general availability announcements and the adoption of Gen AI offerings. While this is not unusual, what is striking is the speed at which adoption occurs post general availability.
The moves that Google, Microsoft and, of course, OpenAI made were more dramatic than AWS’. We view this data as a proxy for market presence and with the general availability of AWS Bedrock last month and new announcements likely at re:Invent we expect big moves from Amazon coming into 2024.
For context, we plot IBM Watson and Oracle. Last week we published on IBM's big move up with watsonx post GA; IBM was below Oracle in the last survey. We'll be watching all this in the January survey. The point is that this is a really tight race, and it's on in a big way. A lot of folks talk about this being a marathon, and it is…but that doesn't mean there's plenty of time to relax. Getting a head start in this race and keeping close to the lead is going to confer competitive advantage in our view.
We've seen that advantage already go to Microsoft from the standpoint of mind share and initial revenue. But the market is still small, so we'll keep monitoring its pulse.
Final Thoughts
This whole idea of copilots everywhere, where everyone becomes a developer is powerful. If the new interface to technology is words, this is going to give us a massive productivity boost. It’s starting with the laptop/desktop end user interaction. It’s rapidly moving to developers so they can develop software faster and that’s going to go into many different use cases, vertical markets and domain specific LLMs along the Gen AI power law.
As we’ve talked about, this flywheel of productivity is in play. Erik Brynjolfsson at the recent UiPath Forward conference said that he’d be disappointed if productivity doesn’t grow to 3 to 4% annually, up from its tepid 1.2%.
The second point above is that accelerated demand for software is meeting the ease of building software. This is the first time we've ever seen that in the industry, and it's going to create an interesting dynamic. John Furrier brought up the point that there could be unintended consequences for Microsoft. His theory is that with AI, developers can build better productivity software than Microsoft's own. Perhaps this is what AWS customers and partners are banking on: leveraging LLMs to compete with Microsoft by creating better software than Microsoft has.
The third point above is the Microsoft graph, where all apps, services and their supporting infrastructure become connected and coherent in a data layer. This is yet another massive flywheel for Azure. The key point is that it is not only your assistant; it also allows the AI to take action. It becomes a system of agency.
End user productivity is king and, as we've said, 2023 is the year of technology innovation. The year 2024, in our view, must be the year of showing ROI and productivity.
Those companies that can show ROI are going to distance themselves from their competitors.
Keep in Touch
Many thanks to George and Sarbjeet for the help this week. Thanks to Alex Myerson and Ken Shifman on production, podcasts and media workflows for Breaking Analysis. Special thanks to Kristen Martin and Cheryl Knight who help us keep our community informed and get the word out. And to Rob Hof, our EiC at SiliconANGLE.
Remember we publish each week on Wikibon and SiliconANGLE. These episodes are all available as podcasts wherever you listen.
Email david.vellante@siliconangle.com | DM @dvellante on Twitter | Comment on our LinkedIn posts.
Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail.
Watch the full video analysis:
Note: ETR is a separate company from Wikibon and SiliconANGLE. If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at legal@etr.ai.
All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE Media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.
Disclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and/or clients of Wikibon. None of these firms or other companies have any editorial control over or advanced viewing of what’s published in Breaking Analysis.