
Breaking Analysis: Connecting the dots on the emerging Databricks tech stack

With George Gilbert, Rob Strechay & Andy Thurai

The recent Databricks Data+AI Summit attracted a large audience and, like Snowflake Summit, featured a strong focus on large language models, unification and bringing AI to the data. While customers demand a unified platform to access all their data, Databricks and Snowflake are attacking the problem from different perspectives. In our view, the market size justifies the current enthusiasm around both platforms, but it’s unlikely that either company has a knockout blow for the other. This is not a head-on collision. Rather, Snowflake is likely years ahead in terms of operationalizing data: developers can build applications on one platform that perform analysis and take action, much as they could on Oracle when it won the database market. Databricks likely has a similar lead in unifying all types of analytic data – e.g. BI, predictive analytics and generative AI. Developers can build analytic applications across heterogeneous data, as they can with Palantir today, but they must access external operational applications to take action.

In this Breaking Analysis we follow up last week’s research by connecting the dots on the emerging tech stack we see forming at Databricks, with an emphasis on how the company is approaching generative AI, unification and governance…and what it means for customers. To do so we tap the knowledge of three experts who attended the event: CUBE analysts Rob Strechay and George Gilbert, and AI market maven Andy Thurai of Constellation Research.

Databricks Data+AI Summit Focused on Three High-Level Themes

The three big themes of the event are seen above with several announcements and innovation areas of focus shown beneath each one. 

  1. Democratization of Analytics – LakehouseIQ uses LLMs to infer some of the business meaning from technical data artifacts, as a semantic layer would; Marketplace is focused on data sharing, with partnerships like Twilio, Dell and, of all firms, Oracle; and Lakehouse Apps enable the safe running of apps on the platform.
  2. A platform for GenAI App Development – the MosaicML acquisition as a way to make building and fine-tuning custom models simpler and more cost-effective; Vector Search, which makes it easy to find relevant content to feed to generative models (a pattern we sketch below); and Model Serving plus Unity Catalog for AI, where AI features and functions are integrated.
  3. Governance for Everything – Unity Catalog, which harmonizes access to all analytic data across the enterprise, along with federated mesh and monitoring tools; and the Lakehouse 3.0 focus, which unifies all the different formats like Delta, Hudi & Iceberg…a feature announcement that was very well received by developers.
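
On the second theme, Vector Search is essentially managed retrieval for feeding generative models. As a minimal sketch of the underlying pattern – our illustration, not Databricks’ implementation – here is a retrieve-then-generate flow in plain Python, where the toy embed() function stands in for a real embedding model:

```python
import numpy as np

# Toy corpus of "data artifacts" we want the LLM to ground its answers in.
documents = [
    "Q2 bookings grew 12% driven by the enterprise segment.",
    "The revenue calendar uses a 4-5-4 fiscal pattern.",
    "Churn in the SMB tier rose after the March price change.",
]

def embed(text: str) -> np.ndarray:
    # Stand-in embedding: hash words into a fixed-size vector.
    # A real system would call an embedding model here.
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list:
    # Cosine-similarity search: the core of any vector search service.
    scores = doc_vectors @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "Why did bookings grow last quarter?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# `prompt` would then be sent to the generative model of your choice.
print(prompt)
```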

The keynote from Ali Ghodsi was very strong. Matei Zaharia participated extensively, as did Naveen Rao from MosaicML, and the other co-founders got some good air time. We also heard on stage from JPMC, JetBlue and Rivian – and that was just day one, which was followed by Marc Andreessen and Eric Schmidt on day two.

The consensus was that Databricks put on a first-rate program with lots of technical and business meat on the bone. The following points further summarize our views:

  • During the two-day keynote, the three main focus areas shown above resonated with the audience – particularly the emphasis Databricks places on unifying access to all analytics workloads and assets. The discussions did not delve deeply into the specifics of data warehousing, Delta Lake and Lakehouse, but did feature relevant use cases.
  • Eric Schmidt’s segment and the demonstration of LakehouseIQ were particular highlights, with the latter being considered one of the best demos seen in a long time.
  • The keynote’s spotlight on AI, specifically GenAI, was perceived as compelling in targeting three key user personas: data engineering folks responsible for data lineage, governance, security, and accessibility; data scientists building and creating models; and ML engineers in charge of productionizing the models.
  • The acquisition of MosaicML and the expansion of Unity Catalog were viewed as strong moves to consolidate Databricks’ position in a competitive market, while also appealing to these key user personas.

That said, we would like to see more integration with other data source types from hyperscale cloud providers. Overall, however, the announcements and the event were compelling.

Bottom Line

In our view, the keynote was impressive, presenting a robust vision for future growth and development in key areas. The introduction of LakehouseIQ, along with the acquisition of MosaicML and the continuing maturation of Unity Catalog, puts Databricks in a strong position to defend its franchise and compete for incremental share. However, we believe further integration with open-source modeling and other data source types from the hyperscale cloud providers would make the strategy more inclusive and appealing. Despite some identified areas for improvement, we believe the keynote was overall successful and compelling.

Databricks’ Vision Expands to an Application Platform

The slide above depicts what we see as Databricks’ emerging tech stack, building on top of previous work we’ve shared in Breaking Analysis. In our view, Databricks has successfully pivoted from what was perceived as a weakness – data management – to a strength, through their harmonization and unification of all data assets and analytic data products.

The following additional points are noteworthy:

    1. Semantic Layer Relevance: In recent discussions, we’ve highlighted the need for a semantic layer to build digital twins, so that mainstream developers can build apps like Uber that “program the real world”. The semantic layer translates real-world business entities like riders and drivers into integrated data that can be updated in real time. Palantir and EnterpriseWeb are examples of companies that have built, or are far along in building, applications on a workflow with a semantic layer that enables the development of digital twins.
    2. LakehouseIQ’s Role: Dubbed the future of Databricks, LakehouseIQ is a way to start building a semantic layer – an innovative approach to making sense of complex data artifacts and extracting business-meaningful data. This capability enables the translation of business terms and entities into technical data artifacts, a feat that extends beyond conventional BI metrics like bookings, billings and revenues into people, places and things.
    3. GenAI Applications: Although still in early stages and not yet as deep as BI metrics layers, the broader potential of this semantic layer is to serve as a platform for building future analytic applications, including those using GenAI. The envisaged applications involve natural language interaction with the platform, aided by an LLM and orchestrated with LangChain (we sketch the pattern after this list).
    4. Unity as a Catalog: More than a mere discovery engine like most traditional IT catalogs, Unity lays the foundation for Databricks’ transformation. It not only tracks data products with properties like lineage, but it will also be able to push down centralized permission policies to external systems like Redshift or Snowflake. Though not yet fully realized, the announcement signals its promised arrival in the near-to-mid term.
    5. Knowledge Engine Impact: Microsoft’s demonstration of its Office 365 Business Chat Copilot acting as an intelligent assistant across all the people, projects and related Office documents serves as a compelling example of the potential of a semantic layer or knowledge engine. We believe Databricks has similar ambitions for all analytic data artifacts.
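
To make point 3 concrete, below is a minimal sketch of the kind of LLM-plus-LangChain orchestration described – our illustration, not Databricks’ code. The schema slice and question are hypothetical, and we assume an OpenAI-compatible model behind LangChain’s OpenAI wrapper:

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Hypothetical slice of a semantic layer: business entities mapped
# to the technical artifacts (tables/columns) that implement them.
schema = """
riders(rider_id, home_city, signup_date)
trips(trip_id, rider_id, driver_id, fare_usd, started_at)
"""

prompt = PromptTemplate(
    input_variables=["schema", "question"],
    template=(
        "You are an analytics assistant. Given this schema:\n{schema}\n"
        "Write a SQL query answering: {question}\nSQL:"
    ),
)

llm = OpenAI(temperature=0)  # assumes OPENAI_API_KEY is set
chain = LLMChain(llm=llm, prompt=prompt)

sql = chain.run(schema=schema, question="Average fare by home city last month?")
print(sql)  # the generated SQL would then run against the lakehouse
```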

Bottom Line

In our opinion, Databricks has successfully turned a perceived weakness into a strength by focusing on the harmonization and unification of data assets. With the introduction of LakehouseIQ and Unity, and their move towards GenAI applications, we believe Databricks is shaping a future where the ability to model business entities in familiar language and transparently translate them into technical data artifacts is central. These innovations hold the potential to profoundly transform how businesses interact with and manage their data.

MosaicML Puts Databricks Directly in the LLM Game

It has been well-documented that the MosaicML acquisition was a mostly stock deal done at Databricks’ previous ~$38B valuation, now believed to be in the low $20B range. As such, the actual cost of the acquisition to Databricks was a little more than half the reported purchase price of $1.3 billion. As with Snowflake’s ~$150M acquisition of Neeva, a key motivation of these moves is to acquire talent and tech, both of which are in short supply.
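
The math is straightforward. A rough back-of-envelope using the figures above (the current valuation is our assumption within the reported “low $20B” range):

```python
headline_price = 1.3e9      # reported purchase price, mostly stock
prior_valuation = 38e9      # valuation at which the stock was priced
current_valuation = 21e9    # assumption: within the "low $20B range"

# Stock priced at the old valuation is worth proportionally less today.
effective_cost = headline_price * (current_valuation / prior_valuation)
print(f"${effective_cost / 1e9:.2f}B")  # ~$0.72B, a bit more than half
```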

The acquisition is a strategic move for Databricks. The idea is to offer enterprises the tools to build their own large language models (LLMs) easily and cost-effectively, using their own data, while making the whole process part of the broader Databricks toolchain and workflow. This strategy could potentially reduce the costs associated with training and running these models. Specifically, while general-purpose LLMs will continue to exist, we believe there is a market for specialized LLMs that are more cost-effective and fine-tuned for specific tasks. Both Snowflake and Databricks are working toward providing this capability, with a focus on enterprise-class governance and IP protection.
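
For a sense of what “fine-tuned for specific tasks” means in practice, here is a minimal sketch using the open-source Hugging Face stack, which the data later in this note suggests is popular on Databricks. The model name and data file are placeholders, not Databricks’ or MosaicML’s actual workflow:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholders: a small open model for illustration and a local JSONL
# file of domain text; swap in your own model and corpus.
model_name = "gpt2"  # stand-in; MosaicML's MPT family is one option
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

raw = load_dataset("json", data_files="company_docs.jsonl")["train"]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, max_length=512,
                    padding="max_length")
    out["labels"] = out["input_ids"].copy()  # causal LM objective
    return out

train_ds = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_ds,
)
trainer.train()
trainer.save_model("ft-out")  # a small, task-specific model
```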

Two other key points:

  1. MosaicML’s open-source large language models, the MPT family, are currently the most downloaded open-source LLMs, with nearly 3.5 million to 4 million downloads. This substantial user base, comprising data scientists and even casual users, is both a large community asset and evidence of a very substantial appetite for building custom LLMs.
  2. MosaicML reportedly has a considerable inventory of GPUs for training ML models, acquired over the past couple of years. This extensive GPU inventory could be a crucial resource when deployed to train new, customer-specific models, which can also reduce IP leakage.

There is some FUD around loss of control of customer IP – FUD that most non-hyperscaler vendors want to foster. Hyperscalers typically provide options to create private instances of LLMs fine-tuned on a customer’s data.

Unpacking Snowflake & Databricks Differentiators

While the analogies may make each respective company bristle, we see Snowflake as Oracle-like in that it is focused on unifying all forms of data management with the objective of operationalizing analytics – meaning not only performing the analysis and presenting the results, but actually taking action on the data. All this comes with a focus on an integrated experience and a promise of strong, consistent governance.

Databricks we view as Palantir-like in that we see it using LLMs to build a semantic layer on top of Unity’s catalog of all analytic data assets. Most semantic layers have the business rules and logic spelled out programmatically; LakehouseIQ instead uses an LLM to infer these business-meaningful connections. From the data artifacts – notebooks, dashboards and models – it can begin to infer business concepts such as organizational structure, product line hierarchies or a company’s use of a revenue calendar.

Furthermore, the following points are relevant:

  • Snowflake advocates for a seamless transition from analytics to transactions, enabling users to operationalize their data without requiring external applications. By embedding analytics in its stack, Snowflake’s database management system can both serve the analytics and transform them into actionable transactions.
    • Key Point: Snowflake promotes operationalization of analytics within its own stack.
  • Databricks, on the other hand, offers an application layer with a knowledge engine, LakehouseIQ. This system can sit on top of both modern and legacy applications, eventually facilitating the creation of a digital representation of a business (digital twins).
    • Key Point: Databricks’ application layer provides a semantic layer that will eventually make it easy to build digital twins.
  • The differentiating factor for Databricks lies in its strong hold over data scientists, machine learning engineers, and data engineers dealing with semi-structured or unstructured data. The mindshare Databricks holds over these personas, thanks to its heritage and a comprehensive set of libraries and tools, is significant.
    • Key Point: Databricks has a large following among data scientists and machine learning engineers, which adds to its appeal.
  • A final point is the need for specialized large language models (LLMs) tailored to specific tasks, in addition to general-purpose, high-end models. Both Snowflake and Databricks aim to offer tools to facilitate the creation of these task-specific models, with an eye towards run-time cost efficiency.
    • Key Point: Both Snowflake and Databricks are focused on providing tools for building more specialized, cost-effective LLMs that provide enterprise-class features and minimize IP leakage risks.

A Nuanced Point on Costs

Whether perceived or real, many customers have cited that building data engineering pipelines outside of Snowflake (e.g. in Databricks) is more cost-effective. This perception may originate because Snowflake bundles AWS costs into its consumption fees whereas Databricks does not, so customers receiving the Databricks bill may see it as cheaper. It’s also possible that the Snowflake engine is optimized for interactive queries and so carries more overhead for batch pipelines. More research is required to determine the actual TCO of each environment, and that will take time.

Databricks strategically emphasizes the use of S3 or object-based storage in its architecture, which it positions as advantageous in terms of cost competitiveness. This decision also aids its relationships with the cloud providers, which can sell more capacity at a lower price. In contrast, while Snowflake stages data from S3 / object stores, it also leverages block storage along with compute services as part of its cloud services layer architecture. Block storage has proven to be extremely reliable and performant, but it is also more expensive.

While perhaps appropriate for many workloads, this makes Snowflake’s underlying architecture appear comparatively more expensive, and in times of budget constraints it could present headwinds for the company’s consumption model. Snowflake began aggressively addressing cost competitiveness last year by integrating Iceberg tables and further pushing into S3. This cost reduction strategy was a major focus during the initial keynotes, signaling Snowflake’s commitment to making its platform more affordable.

Using GenAI to Leapfrog Supervised Learning

As we reported last week, we suspect Snowflake is strategically banking on NVIDIA’s stack by wrapping it into its container services. We posited that it was Snowflake’s intention to leapfrog existing ML/AI toolchains, a field where Databricks historically excels, and advance directly to unsupervised learning for generative AI models. However, Databricks appears to be taking the same leapfrogging approach…disrupting its own base before the competition does.

Snowflake in our view definitely sees this as an opportunity for a reset, as we inferred from Christian Kleinerman’s interview with theCUBE and Nvidia. Meanwhile, Databricks has made a significant pivot in the last six months, particularly with the MosaicML acquisition. As Ali Ghodsi stated, MosaicML “just works” – you simply fill out a configuration file and the system trains the model.

Though not everyone will want to train their own models, it’s notable that the number of customers using Hugging Face transformers on the Databricks platform went from 700 to over 1,500 in the first six months of the year. Moreover, the company shared that its consumption of GPUs is growing 25% month over month. This indicates substantial demand for building and running specialized models. Databricks aims to cater to that demand by being the most accessible and user-friendly platform for training and fine-tuning. This shift in strategy by Databricks is notable and has merit in our view.

The Holy Grail of Data Unification

Both Databricks and Snowflake are driving toward a unified platform, but significant differences remain, in large part related to the audiences they originally served – i.e. data scientists versus BI professionals. While we often discuss these two firms as being on a direct collision course, the reality is their paths to unification are different, and we believe the market is large enough that both firms can thrive.

To be specific, Databricks was the center of gravity for building data engineering pipelines out of semi-structured clickstream data from mobile apps and websites. Data scientists and ML engineers used this refined data for building ML models that might predict customer behavior. Spark’s support for procedural languages such as Python, libraries for working with semi-structured data, and integrated ML tools made it the natural home for this type of development.

Snowflake’s SQL-only roots and peerless support for interactive SQL queries made it the natural home of data engineers, data analysts, and business analysts building business intelligence applications. But Snowflake’s more powerful DBMS engine has let it add support for multi-model transactional workloads as well as procedural languages such as Python. It still has much work to do in capturing the hearts and minds of the Python community doing AI/ML.

Reimagining Data Warehousing and Playing the Open Card

A big part of Databricks’ marketing narrative is to position Snowflake as an outdated data warehouse. While Snowflake dramatically simplified enterprise data warehouses and unleashed the power of the cloud by separating compute from storage, its data cloud vision and application development framework are creating new markets well beyond traditional EDW markets. 

Nonetheless, Databricks spent considerable time at its event discussing how it is reimagining its data warehouse engine, Databricks SQL. The company had Reynold Xin, its chief architect and co-founder, on stage talking about how it is re-conceiving the data warehouse by eliminating tradeoffs among query optimization (i.e. speed), cost and simplicity. The approach is to circumvent decades of research on query optimization by collecting years of telemetry on the operation of Databricks SQL, then using that data to train AI/ML models that make better optimization decisions than the assumptions embedded in conventional engines. Xin’s argument was that Databricks has figured out how to give you all three with no tradeoffs. He mentioned some unnamed company’s Search Optimization Service (he was of course talking about Snowflake) and how it was expensive and forced such tradeoffs.
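
The idea Xin described – replacing hand-tuned optimizer heuristics with models trained on telemetry – can be sketched in a few lines. This is a toy illustration of the pattern, not Databricks’ system: train a regressor on (plan features → observed runtime) telemetry, then pick the candidate plan with the lowest predicted cost. The features and synthetic data below are hypothetical:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic "telemetry": features of executed plans and observed runtimes.
# Hypothetical features: [rows_scanned, num_joins, shuffle_bytes]
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5000, 3))
y = 2.0 * X[:, 0] + 5.0 * X[:, 1] * X[:, 2] + rng.normal(0, 0.1, 5000)

cost_model = GradientBoostingRegressor().fit(X, y)  # learned cost model

# At query time: score candidate plans and pick the cheapest prediction,
# instead of relying on hand-coded cardinality assumptions.
candidate_plans = rng.uniform(0, 1, size=(4, 3))
best = int(np.argmin(cost_model.predict(candidate_plans)))
print(f"Choose plan {best}")
```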

One other area Databricks stresses as a competitive advantage is its openness. Here’s Databricks co-founder Matei Zaharia on theCUBE with John Furrier addressing this topic:

One of the big things we’ve always bet on is open interfaces. So that means open storage formats so you can use any computing engine and platform with it, open APIs like Apache Spark and MLflow and so on, because we think that will give customers a lot more choice and ultimately lead to a better architecture for their company. That’s going to last for decades as they build out these applications. So we’re doing everything in that way where if some new thing comes on, that’s better at ML training than we are or better at SQL analytics or whatever, you can actually connect it to your data. You don’t have to re-platform your whole enterprise, maybe lose out on some capabilities you like from Databricks in order to get this other thing, and you don’t have to copy data back and forth and generate zillions of dollars of data movement. – Matei Zaharia, Databricks

[Watch Matei Zaharia discuss Databricks’ philosophy on open interfaces].

Databricks Pounds the Open Narrative

Databricks believes it has an edge over Snowflake relative to its open-source posture. The company has based its technologies, like Delta Lake and Delta Tables, on open-source platforms such as Apache Spark and MLflow. During the event, Databricks revealed two additional contributions to the open-source community. The intention seems to be to create a barrier against Snowflake by championing the benefits of open-source technology over Snowflake’s closed-source system.

Databricks isn’t just promoting open source; it also provides the value-add of running the software for users as a managed service. Its proposition is to let users store their data in any format (Delta tables, Iceberg, Hudi) and run any compute engine on it, which it positions as a more open approach than what Snowflake currently offers.
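
The open-format claim is easy to demonstrate with the open-source pieces alone. A minimal sketch using PySpark, assuming a Spark session configured with the open-source Delta Lake package (see delta.io for setup); the paths are illustrative:

```python
from pyspark.sql import SparkSession

# Assumes the open-source Delta Lake jars are available, e.g. via the
# delta-spark pip package.
spark = (SparkSession.builder
         .appName("open-formats")
         .config("spark.sql.extensions",
                 "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])

# The table lands as open Parquet files plus a transaction log on
# storage, readable by any engine that speaks Delta, not just Databricks.
df.write.format("delta").mode("overwrite").save("/tmp/events_delta")
spark.read.format("delta").load("/tmp/events_delta").show()
```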

The download numbers for Databricks’ open-source offerings are substantial: Spark sees a billion downloads per year, Delta Lake half a billion and MLflow 120 million.

However, Snowflake would argue that it provides access to open formats, it commits to open source projects and supports a variety of query options. Ultimately customer spending will be the arbiter of how important the open posture is to the market.

Databricks in Database – What the Spending Data Indicates

To add some context here, ETR began tracking Databricks’ entry into the database / warehouse space only recently. When new products are introduced it often takes several quarters or more to collect enough critical mass in the survey base and that has been the case with Databricks in the database space. The chart below shows Databricks’ customer spending profile for the Database/Data Warehouse sector in the ETR data set. 

The graphic shows the granularity of Net Score, ETR’s proprietary methodology that tracks the net percent of customers spending more on a platform. The lime green indicates new logos; the forest green represents the percent of existing customers spending 6% or more relative to last period; the gray is flat spend; the pink is spending down 6% or worse; and the red is the percent of customers churning.

Subtract the reds from the greens and you get Net Score, which is shown on the blue line. Databricks in this space has a very robust Net Score in the mid-60s. Note that anything above 40% we consider highly elevated. Moreover, while the company’s entrance into this space is relatively recent, the survey sample is roughly N=170+.
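
For readers who want the arithmetic, Net Score reduces to a one-liner. A sketch using the bucket percentages defined above; the figures below are hypothetical, not the actual survey values:

```python
def net_score(new_logos, spend_up, flat, spend_down, churn):
    """ETR Net Score: greens minus reds, in percentage points.
    Buckets should sum to ~100; flat spenders don't move the score."""
    return (new_logos + spend_up) - (spend_down + churn)

# Hypothetical distribution consistent with a mid-60s Net Score:
print(net_score(new_logos=20, spend_up=50, flat=25, spend_down=3, churn=2))
# -> 65
```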

The yellow line is an indicator of presence in the data set, calculated as the platform’s N divided by the total N of the sector. As you can see, it’s early days for Databricks in this space…but its momentum is strong as it enters the traditional domain of Snowflake.

Snowflake’s Presence in Database is Maturing

The ETR data set uses a taxonomy to enable time-series tracking and like-for-like comparisons. The intent, where possible, is to make an apples-to-apples comparison across platforms, and that requires mapping various products into taxonomical buckets. We stress this point because vendor marketing rarely allows for simple mappings. As an example, ETR doesn’t have a “Data Cloud” category; however, it has begun to track Snowpark and Streamlit, which allows us to gauge the relative strength of various platforms and force our own comparisons across taxonomical buckets.

With that in mind, the following chart shows the same Net Score granularity as the previous chart for Snowflake.

Several points are notable:

  • Spending velocity on Snowflake’s core data platform is decelerating noticeably. While it’s still comfortably above the 40% level, it has consistently moderated over the past several quarters, reflecting cost optimizations and market headwinds.
  • The percent of new logos in the survey has also declined recently, while the percent of existing customers spending more has increased.
  • Snowflake’s data shows a six-quarter uptick in the percent of customers with flat spending (+/– 5%).
  • While the red continues to be small, it has been on the upswing since calendar Q2 2022.

The other main takeaway is that Snowflake’s presence in this market is maturing, with a much longer history than Databricks. Its N is 50% larger than that of Databricks, as seen in the yellow line above.

The reverse case is instructive: if you model Snowflake’s presence in Databricks’ wheelhouse, the delta is even more pronounced. In other words, Snowflake appears to have more ground to make up in the world of data science than Databricks has in the data warehouse sector. That said, data management is perhaps a tougher nut to crack.

The Expanding Databricks Universe

Databricks and Snowflake have contrasting strategies in their approach to data management. Initially, Databricks functioned like Informatica, developing data engineering pipelines that converted raw data into normalized tables that data scientists used for ML features. Many companies utilized both Databricks and Snowflake for analytics and interactive dashboards.

However, Databricks has now evolved its platform to encompass both data engineering and application platform functions, similar to what WebLogic was known for in the era of Web applications in the on-premises world. Although Databricks is not yet a database management system that handles operational and analytic data, it is leveraging its data engineers and data scientists to create analytic apps. The combination of Unity and LakehouseIQ within the platform aims to make Databricks an application platform, akin to what Palantir has achieved.

In contrast, Snowflake has emerged as a dominant database management system akin to Oracle, unifying all data types under a single query and transaction manager. Hence, while Databricks is expanding its range to become an analytic application platform, Snowflake continues to strengthen its position as a powerful DBMS. Snowflake still has work to do, however, in building the application services that make development of operational applications much easier.

Data application platforms like Databricks and Snowflake are integral to our increasingly data-driven world. However, there seems to be a shift in the perception of these platforms:

  • Many customers, especially those that built pipelines using the tools that defined the modern data stack, like Fivetran and dbt, are moving some of their core data engineering pipelines off Snowflake. Whether this is due to the platform’s optimization for interactive and concurrent queries over batch workloads isn’t certain, but the shift challenges the unified governance that Snowflake espouses and is something we’re watching closely.
  • Databricks offers a unique solution, providing unified governance for heterogeneous data assets. Its Unity feature enables it to govern assets wherever they are, positioning Databricks as a promising platform for future data applications. Databricks has captured the energy and enthusiasm around generative AI, perhaps on par with or even more so than Microsoft.

Both Snowflake and Databricks have much work ahead to close gaps:

  • Snowflake, while having a strong narrative about building apps on its platform, seems limited in the number of apps available. While hundreds of developers are lining up to build apps on Snowflake, the company still has progress to make to realize its vision of the app store for enterprise data apps.
  • Databricks, despite its advantage in machine learning and appealing more to data scientists and ML engineers, has a powerful vision for building analytic applications but it is still early days.

When it comes to model creation, model maintenance, model management and data governance, Databricks’ focus is notable. An upcoming challenge for these platforms will be managing privacy and regulation when deploying models, especially across different jurisdictions. This area, not yet completely addressed by either platform, will become increasingly important as more large language models move into production.

On balance, the Databricks event impressed us and noticeably elevated our view of the company’s position in the market. The fast-paced nature of the industry means things can change rapidly, especially when large established players such as AWS, Microsoft, Google and Oracle continue to invest in research and development and expand their respective domains.

Keep in Touch

Many thanks to Alex Myerson and Ken Shifman on production, podcasts and media workflows for Breaking Analysis. Special thanks to Kristen Martin and Cheryl Knight who help us keep our community informed and get the word out. And to Rob Hof, our EiC at SiliconANGLE.

Remember we publish each week on Wikibon and SiliconANGLE. These episodes are all available as podcasts wherever you listen.

Email david.vellante@siliconangle.com | DM @dvellante on Twitter | Comment on our LinkedIn posts.

Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail.

Watch the full video analysis:

Image: Timon Schneider/Wirestock

Note: ETR is a separate company from Wikibon and SiliconANGLE. If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at legal@etr.ai.

All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE Media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.

Disclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and/or clients of Wikibon. None of these firms or other companies have any editorial control over or advanced viewing of what’s published in Breaking Analysis.
