Let’s be honest: You can’t build great agentic AI products without a rock-solid data foundation. The reality, however, is that most data foundations have some significant cracks. Developers and product managers still spend too much time wrestling with fragmented or inaccessible data and trying to govern data across siloed systems. This friction pushes great ideas further from production.
Snowflake is here to change that.
Today, Snowflake is introducing a significant slate of new tools and enhancements entirely focused on providing developers with the ultimate data flexibility and accessibility. Our advancements to Snowflake Horizon Catalog, Snowflake Openflow and Snowflake Postgres, combined with performance improvements, make it easier for developers to securely connect to and use their enterprise data by:
Streamlining data migrations with the help of AI
Accelerating development by centralizing data, simplifying data ingestion and access, and increasing interoperability
Enhancing compliance and resiliency with enterprise-grade security and governance features, plus manageability at scale
Delivering faster insights through improved ingestion and query performance, along with enhanced cost visibility at scale
Let’s take a closer look at how we’re delivering data without limits.
Moving data from legacy systems is often a slow, expensive gamble, especially when dealing with complex, interdependent code and schemas. We’re taking the headache out of this process by building AI directly into our migration tooling.
SnowConvert AI now provides a faster, more predictable and less expensive migration path, so you can get trustworthy data into developers’ hands quickly. AI-powered code verification and repair (in public preview) accelerates migration by automating the tedious and time-consuming process of testing and repairing converted code, so you can ensure accuracy and quality prior to deployment. Automated and incremental code validation (generally available) automatically checks converted code for semantic equivalence in smaller increments, significantly boosting data confidence.
We’re also extending SnowConvert AI’s support beyond the database to include end-to-end ecosystem migration — including legacy ETL as well as BI repointing, which involves updating reports to use the new database (in public preview). This breadth of support helps you reduce risk and increase consistency across your entire data environment, and dramatically reduce the time and cost required to complete a migration without sacrificing quality.
Modern app dev shouldn’t require you to be a pipeline orchestration expert. You need a streamlined environment where transactional and analytical data live together and where you can work with open data formats easily, no matter the engine or cloud.
This is where simplified workflows and real interoperability become differentiators. We’re redefining the enterprise lakehouse by unifying data workloads, expanding support for open standards and adding more connectors and deployment options to streamline end-to-end pipelines. The ultimate goal is to give you the flexibility to build what you need, how you need it, when you need it.
Postgres has established itself as the number one database among developers and the backbone for modern application development. You already know that Snowflake Postgres (in public preview soon) is bringing Postgres to the Snowflake AI Data Cloud. It gives you the Postgres your developers want, on the enterprise-grade platform your business needs, and makes it easier to connect data between your transactional and analytical systems. Its full compatibility with open source Postgres means you can run your operational workloads on Snowflake without rewriting code, and developers can continue using the specific Postgres extensions, ORMs and clients/frameworks they depend on. Now you can build smarter applications and context-aware AI agents using fresh transactional data, all while simplifying your architecture and accelerating innovation on a single platform.
We are also announcing pg_lake, a set of open source Postgres extensions that let developers interact with the lakehouse directly from Postgres. pg_lake allows Postgres to query your analytical data where it lives in object storage: you can use standard SQL to read and write Apache Iceberg™ tables, and to query or load data directly from file formats such as CSV, Parquet and JSON. With pg_lake, we are bringing the power of Postgres to your lakehouse.
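As a rough illustration of this pattern, a developer might work with lake data from a plain Postgres session. This is a hedged sketch only: the extension setup, DDL options, and table names shown are assumptions for illustration, not pg_lake's documented API.

```sql
-- Hypothetical sketch of the pg_lake workflow from a Postgres session.
-- Extension setup, options and names below are illustrative assumptions,
-- not the documented pg_lake API.
CREATE EXTENSION pg_lake;

-- Create an Apache Iceberg table whose data lives in object storage,
-- then read and write it with standard SQL
CREATE TABLE trips_iceberg (
    trip_id    bigint,
    vendor_id  int,
    fare_usd   numeric
) USING iceberg;

INSERT INTO trips_iceberg VALUES (1, 42, 17.50);

SELECT vendor_id, sum(fare_usd) AS total_fares
FROM trips_iceberg
GROUP BY vendor_id;
```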
For a truly easy-to-use, connected and trusted enterprise lakehouse, your data needs to be universally accessible and interoperable across engines and cloud platforms. Snowflake has consistently highlighted the benefits of secure cross-cloud and cross-region data accessibility and data sharing. Now, with Snowflake Horizon Catalog’s support for external engine read (in public preview soon) and write (in private preview soon) access for Snowflake managed Iceberg tables via open APIs from Apache Polaris (incubating) and Apache Iceberg REST Catalog, we’re enhancing interoperability without lock-in. This makes it significantly easier to access Snowflake managed Iceberg tables from external query engines that support the Iceberg REST protocol. Instead of setting up a separate Apache Polaris account, configuring the integration, managing a separate set of users and roles and setting up separate security configs, you can now simply access the tables directly from Horizon Catalog in your Snowflake account.
We’re also extending our zero-ETL data-sharing capabilities to open table formats (generally available), including Apache Iceberg and Delta Lake tables regardless of which metadata catalog the data is in. Support for the latest Apache Iceberg V3 capabilities (in private preview), such as new variant and geospatial data types, opens up even more use cases for your Iceberg tables.
Snowflake is also introducing a new level of enterprise data protection for your lakehouse. Our Business Continuity/Disaster Recovery (BCDR) for Snowflake managed Iceberg tables (in public preview) allows you to create asynchronous copies of account objects and databases across regions and clouds and form failover groups. With this robust resiliency and recovery capability, your data remains accessible — and your business up and running — when a disaster or a cloud outage occurs.
Simplifying end-to-end data pipelines allows you to get data from more sources and put it into data teams’ hands more quickly. Combine that with the ability to quickly build and share enriched data products, and you’ve got a significant boost for your agentic AI initiatives.
Snowflake Openflow tackles the first part of the process, focusing on automating data extraction and integration from virtually any source and making it easier to keep data centralized across the enterprise lakehouse. Openflow has been generally available with the bring your own cloud (BYOC) deployment option on AWS, and the Snowflake deployment option through Snowpark Container Services is now generally available on AWS and Microsoft Azure. The Openflow Snowflake deployment offers a fully integrated experience, removing the need for data engineers to manage infrastructure, configure networks or worry about security boundaries between systems.
Snowflake is also adding new integration options to its expansive library of connectors and deployment options, all aimed at helping you efficiently connect to and use your data:
Unified, zero-copy enterprise data integration: SAP Snowflake (in private preview) extends SAP Business Data Cloud with fully managed data and AI capabilities, simplifying the enterprise data landscape through a bidirectional integration. Our partnership with Salesforce (generally available) delivers the same zero-copy model, powered by Snowflake Intelligence, with unmatched performance and built-in governance. We have also partnered with Oracle (in public preview soon) on a new CDC integration for high-speed data replication across on-premises and cloud environments.
dbt projects on Snowflake (generally available): Build, test, deploy and monitor data transformation dbt projects directly in Snowflake so data engineers can focus on delivering insights, not maintaining infrastructure.
Snowpark Connect for Apache Spark™ (generally available): Use the open source Spark Connect client to run your Apache Spark code directly on Snowflake with minimal changes. Snowpark customers see an average of 5.6x faster performance and 41% cost savings.1
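To make the dbt integration above concrete: a dbt model is just a templated SELECT statement that dbt compiles and materializes in Snowflake on each run. The model, source and column names below are hypothetical examples, not part of any shipped project.

```sql
-- models/orders_enriched.sql (hypothetical model in a dbt project)
-- dbt resolves {{ ref(...) }} to fully qualified Snowflake tables and
-- materializes this query on each run -- here incrementally by order_id.
{{ config(materialized='incremental', unique_key='order_id') }}

select
    o.order_id,
    o.customer_id,
    o.order_date,
    sum(i.amount) as order_total
from {{ ref('stg_orders') }} as o
join {{ ref('stg_order_items') }} as i using (order_id)
group by 1, 2, 3
```

Because the model runs inside Snowflake, the same governance and warehouse controls apply to transformation jobs as to any other query.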
Once developers have access to all this data, they need a dependable yet simple way to collaborate and efficiently iterate on their projects. Creating and sharing advanced data products is an efficient way to deliver enriched data. In addition to sharing databases, tables and secure views as data products, you can now easily package and share Snowflake Notebooks and UDFs by using the declarative sharing configuration (generally available soon) in the Snowflake Native App Framework.
As you connect more data and scale your AI initiatives, the demands for security, governance and business continuity capabilities only increase. AI needs clean, accurate data for training. Robust resiliency and data security are table stakes, and compliance isn’t optional.
Snowflake Horizon Catalog is the universal AI catalog that provides context and governance for AI over all your data. It offers interoperability without lock-in for an enterprise lakehouse and provides enterprise-grade security and governance features for data and AI. Horizon Catalog also helps AI better understand your data by providing the missing context that helps AI agents correctly interpret data. Lastly, Horizon Catalog enables cross-region, cross-cloud manageability and sharing at scale with management across all accounts in your organization, seamless collaboration and BCDR across clouds and regions.
Newly available Horizon Catalog capabilities include AI Redact (in public preview), an AI SQL function to detect and redact personally identifiable information (PII) in unstructured data. This directly addresses a major roadblock to rapid enterprise AI adoption — protecting PII in the unstructured data that AI models depend on for training — and helps organizations make more of their enterprise data AI ready. We are also announcing Data Security Posture Management (in public preview), a simple UI in Trust Center that allows you to manage and automate sensitive data detection, tagging, protection and monitoring.
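In spirit, the redaction step could be invoked inline in a query like the one below. This is an assumption-laden sketch: the actual function name, signature and preview syntax for AI Redact may differ, and the table and columns are hypothetical.

```sql
-- Hypothetical sketch: the function name and signature are assumptions,
-- not the documented AI Redact API; table and columns are invented.
SELECT ticket_id,
       AI_REDACT(ticket_text) AS redacted_text  -- PII masked before use
FROM support_tickets;
```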
Additional improvements to Horizon Catalog now in public preview include visibility into external data lineage and a simplified data quality UI in the Snowsight interface, with a tab for automated data profiling. A new anomaly detection UI and alert system (in public preview) centralizes security anomalies across all accounts in an organization and alerts you to new ones. Additional Trust Center updates include:
Building third-party Trust Center extensions (in public preview) and making them available to other Snowflake customers via Snowflake Marketplace
Allowing the Global Org Admin to see security posture across all accounts in the Organization Account (generally available soon)
Supporting security is also the focus of Snowflake’s enhancements to hybrid tables. Not only are hybrid tables now generally available on Microsoft Azure, but they also support Snowflake’s Tri-Secret Secure (TSS) encryption model (generally available on AWS, in public preview on Microsoft Azure). TSS protects data with Snowflake’s built-in user authentication and a composite master key that combines a Snowflake maintained key with a customer-managed key, providing a higher level of security. In addition, hybrid tables’ automatic rekeying capabilities (generally available on AWS and Azure) help developers meet security and compliance standards by automatically and regularly changing encryption keys.
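For context, hybrid tables themselves are created with ordinary DDL (a primary key is required), while Tri-Secret Secure and rekeying are configured at the account level. The schema below is hypothetical, and the rekeying parameter shown is the standard account-level setting; consult the Snowflake documentation for the exact hybrid-table behavior.

```sql
-- A hybrid table for transactional workloads (hypothetical schema);
-- hybrid tables require a primary key.
CREATE HYBRID TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT,
    status      VARCHAR
);

-- Opt the account into automatic, periodic rekeying of encrypted data
-- (an ACCOUNTADMIN operation; parameter name per Snowflake docs).
ALTER ACCOUNT SET PERIODIC_DATA_REKEYING = TRUE;
```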
On the business resiliency side, Snowflake Backups (generally available soon) give you a powerful tool to bolster cyber resilience, support compliance and enhance data integrity for auditing or legal purposes. You can create point-in-time Backups and make them immutable, meaning that once created they cannot be modified or deleted, even by admins. In the event of a ransomware attack, a natural disaster or an outage, you can combine Backups with Snowflake's account replication capabilities so that all Backup sets and policies can be replicated to a different region or cloud provider and recovered.
Speed and cost efficiency directly impact both your bottom line and your ability to deliver value to your users. When you’re dealing with real-time data streams and running heavy analytical workloads, performance at scale is especially critical.
Snowflake continuously delivers automatic improvements across the platform to ensure your data pipelines and queries run faster.
The benefits start with Snowflake Optima intelligent optimization capabilities on Snowflake Standard Warehouse Generation 2 (Gen2) that deliver faster query performance for analytics and data engineering workloads. These include Optima Indexing (generally available), which analyzes your workloads and proactively identifies recurring point-lookup queries that can be accelerated. One automotive customer experienced a 15x acceleration for frequently recurring, highly selective queries on Gen2 warehouses thanks to Snowflake Optima.
For streaming workloads, Snowflake Streaming V2, the newest version of our next-gen data-ingestion framework (generally available on AWS with Azure and GCP coming soon), supports a simplified architecture that delivers a 56% reduction in query completion time and better end-to-end latency in a recent benchmark test.2 We also introduced a more predictable, usage-based pricing model that helps teams reduce overall spend.
Snowflake Dynamic Tables simplify data pipelines by allowing you to define the desired end state with a single SQL query. New immutability features (generally available) let you lock portions of a table so they don't change during refreshes, resulting in less recomputation and lower costs. Dynamic Iceberg tables (generally available) integrate with data lakes, enabling you to store data in external cloud storage (Amazon S3, Azure Blob Storage and so on) while it is managed by Snowflake.
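The declarative pattern looks like this in practice. The warehouse, table and column names below are hypothetical, and the target lag is only an example value:

```sql
-- Declare the desired end state once; Snowflake incrementally
-- refreshes the table to stay within the target lag.
CREATE DYNAMIC TABLE daily_revenue
  TARGET_LAG = '1 hour'
  WAREHOUSE  = transform_wh          -- hypothetical warehouse
  AS
    SELECT order_date, SUM(amount) AS revenue
    FROM raw_orders                  -- hypothetical source table
    GROUP BY order_date;
```

Instead of scheduling and orchestrating incremental jobs yourself, you state how fresh the result must be and Snowflake manages the refreshes.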
Developers increasingly need to keep an eye on cloud costs as part of their AI efforts. Snowflake customers can now manage consumption with tools that dig into the details, including granular cost allocation of shared resources (in private preview) across all of their organization's Snowflake accounts via SQL. New tag-based budgeting (in private preview soon) allows organizations to set budgets for users of shared resources, so they can monitor consumption at the user level and prevent cost overruns.
Snowflake is raising the bar when it comes to setting an easy, connected and trusted data foundation to accelerate enterprise-ready data and AI. Developers have one central, AI-ready platform to easily migrate, access and connect multiple types of data from multiple sources to build agentic AI apps — and do so with blazing-fast performance, high scalability and effective cost management.
Learn more about how the AI Data Cloud can help you build better and faster with new capabilities that deliver intelligent, governed AI at scale and modernize the developer workflow. To see these features in action, check out the BUILD 2025 agenda and join one of the many in-depth sessions or hands-on labs.
______________
1Based on customer production use cases and proof-of-concept exercises comparing the speed and cost for Snowpark versus managed Spark services between November 2022 and May 2025. All findings summarize actual customer outcomes with real data and do not represent fabricated data sets used for benchmarks.
2Benchmark report is derived from the TPC-DS benchmark, and as such, its results are unofficial and not validated or certified by the Transaction Processing Performance Council. These results are for informational purposes only and are not comparable to any official TPC-DS results.
Forward-looking statements
This article contains forward-looking statements, including about our future product offerings; these statements are not commitments to deliver any product offerings. Actual results and offerings may differ and are subject to known and unknown risks and uncertainties. See our latest Form 10-Q for more information.