The Observe CLI will enable both users and agents to operate on observability data and context through a growing set of reusable skills — structured workflows for common tasks like investigating incidents, tracing failures or validating changes. Some of these workflows will be able to run autonomously in the background, while others will be invoked interactively. The model will support both agents handling routine tasks and engineers directly executing and guiding more complex investigations, potentially from environments like Claude Code. The CLI will become the control surface for this system, providing granular, programmatic access to Observe’s capabilities and enabling observability to be composed, automated and extended over time.
We are also showcasing progress on read and write support for Apache Iceberg. Observability data can be written directly to Iceberg tables in your own data lake, stored in your object storage and accessed through the Observe UI, CLI or MCP, or with any compatible engine. This means you own the telemetry that Observe ingests and processes. It lives alongside the rest of your data, under your control and accessible through the tools your teams already use.
Teams across the organization can query and combine telemetry with other data without needing to extract or replicate it into separate systems. At the same time, engineering teams retain the ability to use Observe’s native observability workflows on top of that same data. Observability workflows run with similar performance on Iceberg tables as on Observe-native data. The result is increased flexibility in how data is stored and governed and how it is accessed and used.
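The kind of cross-domain query this enables is a plain SQL join between telemetry and business tables. As a minimal, engine-agnostic sketch — using SQLite purely as a stand-in for an Iceberg-compatible engine, with hypothetical table and column names:

```python
# Illustrative only: SQLite stands in for any Iceberg-compatible query engine.
# Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# "Telemetry" table: error counts per service, as might land in an Iceberg table.
cur.execute("CREATE TABLE errors (service TEXT, error_count INTEGER)")
cur.executemany("INSERT INTO errors VALUES (?, ?)",
                [("checkout", 42), ("search", 3)])

# "Business" table: revenue attributed to each service, from the wider lakehouse.
cur.execute("CREATE TABLE revenue (service TEXT, daily_revenue REAL)")
cur.executemany("INSERT INTO revenue VALUES (?, ?)",
                [("checkout", 120000.0), ("search", 8000.0)])

# Join telemetry with business data in a single query -- no extraction
# or replication into a separate system.
rows = cur.execute("""
    SELECT e.service, e.error_count, r.daily_revenue
    FROM errors e JOIN revenue r ON e.service = r.service
    ORDER BY r.daily_revenue DESC
""").fetchall()

for service, errors, revenue in rows:
    print(f"{service}: {errors} errors, ${revenue:,.0f}/day at risk")
```

Because the telemetry already sits in open tables, the same join works from whichever engine a team prefers; nothing is copied out of the observability silo first.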
We’ve always believed observability is a data problem. As AI increases system complexity and telemetry volume, this pressure intensifies, stressing the scale and economics of the underlying data platform.
Teams are responding on two fronts. First, they need more efficient ways to store, process and query high-volume telemetry without unsustainable cost growth. Second, they are adopting AI-driven workflows to reduce MTTR and to shift toward more proactive reliability engineering.
Observe is building to address these needs. Support for Iceberg gives teams more flexibility in managing observability costs by allowing them to store telemetry in their data lake on low-cost cloud storage, while enabling broad access and avoiding vendor lock-in. The Observe CLI reflects a dramatic shift in how observability is consumed, from curated, UI-driven experiences to programmatic access via CLI and MCP, with tools like Claude Code and ChatGPT.
As users of observability and their access patterns continue to evolve, the need for telemetry and context at scale remains constant. Observe delivers that foundation while continuing to expand how customers store, access and work with their data.
View our Observe by Snowflake launch event, hosted by Jeremy Burton, GM of Observability at Snowflake.
Forward-looking statements
This article contains forward-looking statements, including about our future product offerings; these statements are not commitments to deliver any product offerings. Actual results and offerings may differ and are subject to known and unknown risks and uncertainties. See our latest 10-Q for more information.
Performing the same investigation with a context-aware agent begins the moment the agent receives the alert.
What the agent returns is not a list of raw signals, but a scoped set of findings on what broke and how to fix it.
Observe provides AI-driven observability through its built-in AI SRE as well as via MCP or CLI (coming soon). SREs, developers, support engineers and automated agents can access and interact with telemetry through the interface that best fits their workflow. This programmatic access enables organizations to build custom, agent-driven workflows on top of their observability data.
The Observability Context Graph models semantics and relationships across the environment. It connects logs, metrics and traces across services and infrastructure and extends to include business and code context. The result is faster, more accurate reasoning using context structured and curated for observability.
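The underlying idea can be sketched as a graph whose nodes are entities (services, traces, logs, metrics, infrastructure) and whose edges are relationships, so that an investigation walks edges outward from a failing service. A toy illustration — all entity names here are hypothetical, not Observe's actual schema:

```python
# Toy sketch of a context graph: entities as nodes, relationships as edges.
# All names are hypothetical. A real graph also carries semantics (types,
# business and code context); this shows only the connectivity idea.
from collections import deque

edges = {
    "service:checkout": ["trace:abc123", "metric:checkout.error_rate",
                         "service:payments"],
    "service:payments": ["log:payments-7f2", "infra:pod/payments-0"],
    "trace:abc123":     ["log:checkout-9c1"],
}

def related(start):
    """Breadth-first walk collecting every signal reachable from one entity."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

# Starting from the failing service, one traversal gathers its traces,
# metrics, downstream services, logs and infrastructure.
print(sorted(related("service:checkout")))
```

The point of the structure is that "everything connected to this service" becomes a cheap graph traversal rather than a series of ad hoc searches across disconnected telemetry stores.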
The Telemetry Lakehouse Foundation underpins Observe’s architecture. It provides low-cost cloud storage and compute-storage separation, inheriting Snowflake’s core properties to enable Observe to ingest, store and analyze telemetry at petabyte scale, at lower cost.
Context is critical for effective AI-powered observability. As teams begin to incorporate agents into operational workflows, they will encounter challenges if the agents lack context. Queries may time out, responses may be unreliable or token consumption may be higher than anticipated.
With access to the right observability context, agents become much more efficient. Agents are able to resolve ambiguous terms, traverse complex relationships and narrow the search space in order to return more accurate results at significantly lower cost. This is what enables agent-driven workflows in practice. In Observe, the Observability Context Graph is the fundamental architectural distinction that drives query accuracy, speed and cost efficiency.
How does an AI SRE agent, enabled with context, speed up the troubleshooting process? Consider an incident investigation, where a service starts returning errors and the scope of impact is unclear.
In a typical workflow, engineers pivot between dashboards, logs, metrics and traces to narrow down the cause. A large portion of that time is spent simply figuring out where to look.
Since Observe joined Snowflake three months ago, we’ve been moving at an accelerated pace, onboarding new users and building new features that extend observability access. But the core problem Observe set out to solve has not changed. Our mission remains observability at scale, a challenge facing many enterprises today.
The volume of telemetry in the form of logs, metrics and traces generated by modern systems has outpaced traditional approaches, with leading enterprises ingesting hundreds of terabytes of data daily. The size and complexity of telemetry only promise to grow with the greater prevalence of AI-generated code and the proliferation of AI agents. Left unsolved, observability cost becomes untenable and troubleshooting time escalates.
The problem with legacy systems is the architecture: storage and compute are tightly coupled, data is locked into proprietary systems, and cost scales in step with volume until it becomes untenable.
Because observability at scale is a data problem, it makes the greatest architectural sense for observability solutions to be tightly integrated with a data platform. At Observe, we made the decision early on to build a modern observability architecture on top of Snowflake, leveraging its rich capabilities to deliver observability unhindered by the cost and performance constraints of traditional architectures.
Observe is an AI-powered observability platform, designed from the outset to operate at scale. It’s designed to solve for the shortcomings of legacy systems with a modern observability architecture, comprising the AI SRE, the Observability Context Graph and the Telemetry Lakehouse Foundation. The result is faster troubleshooting at lower cost.
Since joining Snowflake, we’re seeing a shift in who is engaging with observability. Data leaders are leaning in, with a growing focus on treating telemetry as a first-class citizen. There is strong interest in storing, governing and analyzing telemetry alongside the rest of the enterprise’s data, not isolated within engineering tools.
At the same time, interest in AI SRE, DevOps and software engineering workflows is accelerating. This is driving expansion within existing customers, as observability becomes a more active part of operations. We’re also seeing forward-leaning teams bypass the traditional observability interface to build their own observability agents on Observe’s MCP server, directly on top of data and context stored in Observe.
A large part of this momentum — both new customer onboarding and expansion — comes from reduced friction to get started. Snowflake customers can now use existing Snowflake credits, without limitation, towards their Observe usage, making it much easier to adopt observability as part of the broader Snowflake data platform.
We are extending the platform in two directions: agent-native access and data interoperability using open formats. Our goal across both is to allow telemetry data and the context built on top of it to be accessible from the environments users prefer and through access paths that fit how they want to work.
Developers are increasingly working in IDEs, terminals and alongside coding agents. Observability needs to exist in that flow.
We are introducing a CLI (coming soon) designed for agent-driven workflows, which will provide another path for developers to interact with telemetry — and possibly the most natural option for doing so. Context awareness is critical to agent efficiency and, importantly, Observe will enable agents executing tasks through the CLI to query its Context Graph.