Has Salesforce Admitted Defeat to Snowflake in the Battle for Customer 360?
Founder and CEO of RudderStack
October 04, 2022
Salesforce’s announcement of Genie, a new product that it’s calling a Customer Data Platform, is probably the biggest news coming from the tech behemoth in some time. While the announcement focuses primarily on the use cases enabled by the new product, the thing I find most interesting is actually the data foundation of Genie.
The key to Genie’s ability to help companies “turn data into customer magic” is a bi-directional data sync between Salesforce and Snowflake. In a way, this is a concession to the overwhelming tide of the warehouse-first paradigm. Salesforce is no longer trying to own all of the data required to build a comprehensive customer source of truth, the coveted customer 360.
If you can’t beat the warehouse, strategically partner with the warehouse
Genie is enabled by a number of what Salesforce calls strategic partnership innovations, but it’s the open data sharing between Salesforce and Snowflake that really powers the product. Genie is all about creating and using a customer 360, and Salesforce is relying on Snowflake for this data. True to the Genie name, the details of this open data sharing aren’t very clear, but one can guess that it will support pulling data from Snowflake into Salesforce as well as syncing Salesforce’s copy of customer data (or parts of it) into Snowflake.
Syncing data from Snowflake to Salesforce
The first part of the sync, getting data into Salesforce, makes a ton of sense. After all, companies store a lot of data in their warehouse, and every SaaS vendor wants a copy of this data. Segment’s SQL Traits, Amplitude and Braze’s warehouse integrations, and now Salesforce’s warehouse sync are all trying to address this. This need has also led to the creation of a category called reverse ETL, with vendors like Census, Hightouch, and, of course, RudderStack enabling this data movement into SaaS tools that don’t natively support it.
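To make the pattern concrete, here’s a minimal, hypothetical reverse ETL sketch in Python. The in-memory SQLite table stands in for a cloud warehouse and the dict-backed upsert stands in for a SaaS tool’s API; the table, function, and field names are all invented for illustration, not taken from any vendor’s actual SDK.

```python
import sqlite3

# Stand-in for a cloud warehouse: an in-memory SQLite table of modeled traits.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE user_traits (email TEXT PRIMARY KEY, ltv REAL, plan TEXT)")
warehouse.executemany(
    "INSERT INTO user_traits VALUES (?, ?, ?)",
    [("ada@example.com", 1200.0, "pro"), ("bob@example.com", 80.0, "free")],
)

# Stand-in for a SaaS tool's API: an upsert keyed on an external ID.
crm_records: dict[str, dict] = {}

def crm_upsert(external_id: str, fields: dict) -> None:
    crm_records.setdefault(external_id, {}).update(fields)

def reverse_etl_sync() -> int:
    """Read modeled traits out of the 'warehouse' and push them into the 'CRM'."""
    rows = warehouse.execute("SELECT email, ltv, plan FROM user_traits").fetchall()
    for email, ltv, plan in rows:
        crm_upsert(email, {"lifetime_value": ltv, "plan": plan})
    return len(rows)

synced = reverse_etl_sync()
```

A real pipeline would add incremental change detection, batching, and retry logic around rate limits, but the core loop is the same: query modeled data out of the warehouse, upsert it into the tool.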
Syncing data from Salesforce to Snowflake
The other part of the sync, moving data from Salesforce into Snowflake, is a lot more interesting. Traditionally, SaaS vendors hold on to their customers’ data to keep those customers locked into their platforms. They make it extremely hard to get that data out, often exposing it only through difficult-to-use and severely rate-limited APIs. ETL tools like Fivetran and Matillion built big businesses by automating this complexity and making it easier to extract data from SaaS tools, but they only go so far. These cloud ETL tools have a number of limitations:
- Getting anything close to a real-time copy of the data is impossible. You only get point-in-time copies without the change log, so any updates that happen between syncs are lost.
- There are no database style consistency guarantees (e.g. foreign key constraints across tables) on the data.
- The data models of the SaaS tool and what is available over API are often different.
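The first limitation is worth a tiny illustration. If a record changes several times between two scheduled syncs, a snapshot-style copy keeps only the final state, while change-log (CDC-style) capture retains every intermediate value. The timestamps and statuses below are invented for the example:

```python
# A record's status changes three times between two daily snapshot syncs.
events = [
    ("2022-10-01T09:00", "lead"),
    ("2022-10-01T12:00", "qualified"),
    ("2022-10-01T15:00", "closed_lost"),
]

# Snapshot-style ETL: only the latest value at sync time survives.
snapshot = events[-1][1]

# Change-log (CDC-style) capture: every intermediate state is retained,
# so you can still answer "was this lead ever qualified?"
change_log = [status for _, status in events]
```

With only the snapshot, the fact that the lead was ever qualified is gone; any funnel analysis built on snapshot copies silently undercounts those transitions.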
That list is far from exhaustive, and if the new Salesforce to Snowflake sync works as promised, it will eliminate a number of these challenges. But a major question remains: Why is Salesforce okay with giving away the keys to its kingdom by making it easy to take the data out? I think the answer is simple: there are fundamental and far reaching problems with storing your customer master in Salesforce or any other walled SaaS garden for that matter.
The traditional SaaS model is incompatible with today’s data use cases
The traditional SaaS model of delivering software has tons of benefits, from offloading IT concerns to cost savings and faster time to value. At the same time, SaaS tools severely limit flexibility of usage: you’re fundamentally restricted to the use cases the SaaS vendor offers. Nowhere does this show up more prominently than with customer data locked in Salesforce. Let’s consider a few questions to parse this out:
- What if Salesforce’s choice of underlying database (Oracle) can’t meet your volume and performance requirements for storing high-volume data (e.g. first-party events or other app data)?
- What if you want to build BI and analytics dashboards on top of your customer data in Salesforce?
- What if your data science team is looking to build advanced ML use cases on top of that data?
- What if you want to send customer data from your production database into Salesforce?
Over the years, Salesforce has tried to address each one of these by buying (e.g. Tableau) or building (e.g. Einstein), but these vertically integrated solutions have their limitations. At the end of the day, you’re still caged by what Salesforce provides. This is where the data warehouse comes in as a magic bullet to address all of these challenges.
The data warehouse takes over
While data warehouses have been around for almost three decades, they didn’t become prominent until Amazon launched a data warehouse in the cloud (Redshift) in 2012, and then Snowflake came along. The cloud data warehouse immediately made storing and processing data at scale much easier and gained rapid adoption. Now companies could affordably and efficiently store all of their customer data in a central location, and creating customer records in these cloud data warehouses was the avenue to address all the challenges mentioned above. Want advanced analytics? Want to build ML use cases? Want to store customer data that you cannot store in Salesforce (e.g. events)? Just copy everything into a CDW and build on top.
Yes, with a CDW you still have to figure out the painful process of getting data out of Salesforce, but that’s a small cost (not that Fivetran is exactly cheap) to pay for the benefits of aggregating customer data in a central storage layer. With the widespread adoption of the cloud data warehouse, companies, knowingly or unknowingly, made the warehouse the customer source of truth. These warehouses now had the data to create a more complete customer profile than Salesforce or any other SaaS vendor ever could. The paradigm had shifted.
Recognizing the warehouse-first paradigm
I think that the Genie announcement, with its open data sharing between Salesforce and Snowflake, is evidence that Salesforce has recognized this paradigm shift—which is significant. The company has observed this irreversible change over the last few years, and instead of building higher walls, they’ve finally decided to embrace it. Giving up control and actively working to make it easy to get data out of their platform is the first step!
Salesforce’s place in the new world
So, will we still need Salesforce in a warehouse-first world? Absolutely. But its place is as an engagement layer that doesn’t keep the full customer profile, but only the data needed to enable that engagement. Salesforce users, whether they be sales, marketing, or support, still want to work on top of Salesforce. They prefer its user experience and the ecosystem of tools that comes with it. But Salesforce won’t have the golden customer master, and it won’t need it, because that will be in Snowflake or its equivalent. Salesforce will only have the relevant data needed to enable its users. If they really do deliver a seamless way to sync this data back and forth, it will be something special.
So, will there be a Salesforce in 20 years? In all likelihood, yes. But if a startup can replicate a Salesforce like experience on Snowflake, and if Snowflake can deliver on its promise of enabling transactional use cases, things will get interesting for sure!
Taming architectural complexity
Even in the midst of this massive shift towards the warehouse, creating and actually using a customer 360 is no small task, but the reward is undeniable. It’s why so many companies are emerging with “solutions” aimed at delivering on this promise. It’s encouraging to see movement away from black box solutions towards more open systems centered on the data warehouse. But this comes with its own challenges.
With the customer source of truth in a warehouse, data architectures are becoming increasingly complex. This is partly because data warehouses are still not suited for real-time use cases and aren’t purpose-built for customer data. As a result, you may end up building a number of pipelines:
- A real-time integration via one tool to get data into Salesforce
- A separate ETL pipeline from another vendor to get data from Salesforce to Snowflake
- Another homegrown script or reverse ETL tool to get data back into Salesforce
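As a back-of-the-envelope sketch of how this multiplies, assume (hypothetically) that each SaaS system in your stack needs roughly those same three pipelines; the pipeline count then grows linearly with the number of systems:

```python
# Hypothetical rule of thumb: each SaaS system tends to need its own
# real-time ingest, batch ETL into the warehouse, and reverse ETL back out.
PIPELINES_PER_SYSTEM = ["realtime_ingest", "etl_to_warehouse", "reverse_etl"]

def pipeline_count(systems: list[str]) -> int:
    # Total pipelines to build and maintain across the stack.
    return len(systems) * len(PIPELINES_PER_SYSTEM)

# An example five-system stack, per the 5x scenario above.
stack = ["salesforce", "zendesk", "marketo", "amplitude", "braze"]
```

Five systems already means fifteen pipelines to build, monitor, and keep consistent with each other.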
Now imagine scaling the complexity of that infrastructure 5x, because customer data lives not just in Salesforce but across many systems. Furthermore, beyond just getting the raw data, creating a true customer 360 requires steps like identity stitching and user profile creation. And that’s just the start. Once you create the user profile, you want to activate that data while enabling advanced use cases like analytics and ML. These are non-trivial challenges.
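Identity stitching is the kind of non-trivial step worth sketching. One common approach (a generic technique, not any specific vendor’s implementation) is to treat identifiers that co-occur on the same event as belonging to one user and merge them with union-find; all identifiers below are invented:

```python
# Minimal identity stitching: merge identifiers that co-occur on the same
# event into one canonical user, using union-find with path halving.
parent: dict[str, str] = {}

def find(x: str) -> str:
    # Return the canonical representative for identifier x.
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a: str, b: str) -> None:
    # Record that a and b were seen together, merging their clusters.
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[rb] = ra

# Pairs of identifiers observed together (anonymous ID + email, email + CRM ID).
observed = [
    ("anon-42", "ada@example.com"),
    ("ada@example.com", "sfdc-001"),
    ("anon-99", "bob@example.com"),
]
for a, b in observed:
    union(a, b)
```

After processing, the anonymous web ID, the email, and the CRM record ID all resolve to one user, while unrelated identifiers stay in separate clusters. Production identity resolution layers add rules for which identifiers are trustworthy enough to merge on, but the graph-merging core looks like this.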
Taming this complexity is exactly the reason we started RudderStack. Our product enables brands to collect customer data from all of their sources, both first-party and third-party; model it in a data warehouse to create that golden customer 360; and activate it in downstream sales, marketing, support, and analytics tools.
Learn how RudderStack can help you build a Customer 360
Download our Data Maturity Guide to learn how to create a true customer 360 in your warehouse of choice and build on top for every use case.