We’ve spent the past two years making AI agents capable. They can query your databases, summarize your documents, route your workflows, and initiate transactions on your behalf. Some of them are genuinely impressive.
The harder challenge is the one most organizations haven't solved: ensuring they're accountable.
An agent can act. But accountability is a different question entirely. When a human employee takes an action, there’s a chain of identity attached to it. When an agent does, there often isn’t. As agents move from demos into production, that gap becomes a governance problem.
This is the agent identity problem. An agent should have a verifiable identity of its own: defined rights, defined scope and a persistent record of what it did. Without it, you can’t answer what happened, who authorized it or whether it stayed within its boundaries, and that becomes a liability the moment something goes wrong.
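What a verifiable agent identity might hold can be sketched in a few lines. This is a minimal illustration, not a real API: the right names, the agent ID and the three-hour lifetime are all hypothetical, chosen to show rights that are both scoped and time-boxed.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AgentIdentity:
    """A first-class identity for an agent: who created it, what it may do, for how long."""
    agent_id: str
    created_by: str       # the human or service that authorized the agent
    rights: frozenset     # e.g. {"read:credit_data"}; hypothetical right names
    expires_at: datetime  # rights are time-boxed, not permanent

    def is_authorized(self, right: str, at: datetime) -> bool:
        # Answers two of the audit questions up front:
        # what rights did it have, and for how long?
        return right in self.rights and at < self.expires_at

now = datetime.now(timezone.utc)
agent = AgentIdentity(
    agent_id="underwriter-0427",
    created_by="alice@example.com",
    rights=frozenset({"read:credit_data"}),
    expires_at=now + timedelta(hours=3),
)
print(agent.is_authorized("read:credit_data", now))  # within scope and lifetime
print(agent.is_authorized("write:ledger", now))      # outside defined scope
```

Because the identity is immutable and carries its own expiry, every authorization check is also a statement of record: the rights and their window are pinned at creation, not inferred later.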
Most agents don’t have one. And that gap is already slowing adoption in some of the most sophisticated AI programs in the world.
In regulated industries, AI doesn’t reduce audit complexity. It amplifies it. If an agent queries your database, generates a recommendation or initiates an action — and something goes wrong six months from now — your team will need to answer:
Who created the agent?
What rights did it have, and for how long?
What data did it touch?
And if it produced a derived insight (for example, a projection or a summary that no one explicitly authorized), who owns that?
Picture a loan underwriting agent. It queries credit data, flags risk and produces an approval recommendation. A year later, a borrower disputes the outcome. Your compliance team needs to reconstruct exactly what data the agent accessed, under whose authority, and whether its output stayed within approved scope. If that record doesn’t exist, you’re not just exposed. You’re starting from nothing.
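The record the compliance team needs can be sketched as an append-only audit log keyed by agent identity, queried long after the agent is gone. Everything here is illustrative, not a Snowflake API: the record fields, table names and agent IDs are assumptions made for the example.

```python
from collections import namedtuple

# One append-only record per agent action, written when the action happens
# and retained after the agent itself is gone. All names are illustrative.
AuditRecord = namedtuple(
    "AuditRecord",
    ["agent_id", "authorized_by", "action", "data_touched", "timestamp"],
)

audit_log = [
    AuditRecord("underwriter-0427", "alice@example.com", "query",
                "credit_bureau.scores", "2024-03-01T09:12Z"),
    AuditRecord("underwriter-0427", "alice@example.com", "recommend",
                "loan_applications.decision", "2024-03-01T09:14Z"),
    AuditRecord("pricing-bot-19", "bob@example.com", "query",
                "rates.daily", "2024-03-01T10:02Z"),
]

def reconstruct(agent_id: str) -> dict:
    """Answer the compliance team's questions a year later: under whose
    authority did the agent act, and what data did it touch?"""
    records = [r for r in audit_log if r.agent_id == agent_id]
    return {
        "authorized_by": sorted({r.authorized_by for r in records}),
        "data_touched": [r.data_touched for r in records],
        "actions": [(r.timestamp, r.action) for r in records],
    }

print(reconstruct("underwriter-0427")["data_touched"])
```

The point is not the data structure but the discipline: if records like these are only written when someone remembers to, the reconstruction above returns an empty list exactly when you need it most.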
They seem like reasonable questions. The problem is that most identity infrastructures weren’t designed to answer them.
Traditional identity systems were built for stable roles and defined access. Agents don’t fit that model.
An agent might spin up for a single task, pull from four data sources, and disappear by noon. Each source may have had proper access controls in isolation. But the combined output (a derived insight) can cross into territory nobody authorized. The agent did exactly what it was built to do. The problem was that nobody defined the boundary.
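The missing boundary check can be made concrete. The sketch below assumes a simple policy model of my own construction: each source carries a sensitivity level, and a policy table marks combinations that elevate the derived output beyond any single input. The level names, source names and elevation table are all hypothetical.

```python
# Ordered sensitivity levels; a derived insight is at least as sensitive
# as its most sensitive input, and combining sources can elevate it further.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

# Combinations that are more sensitive together than apart (the "mosaic
# effect"); an illustrative policy table, not a real API.
ELEVATIONS = {
    frozenset({"customer_contacts", "purchase_history"}): "restricted",
}

def derived_level(sources: dict) -> str:
    """sources: {source_name: individual_level} -> level of the combined output."""
    level = max(sources.values(), key=LEVELS.get)
    for combo, elevated in ELEVATIONS.items():
        if combo <= set(sources) and LEVELS[elevated] > LEVELS[level]:
            level = elevated
    return level

def within_scope(agent_clearance: str, sources: dict) -> bool:
    """The boundary check: is the agent cleared for the combined output?"""
    return LEVELS[agent_clearance] >= LEVELS[derived_level(sources)]

# Each source is merely "internal" in isolation...
sources = {"customer_contacts": "internal", "purchase_history": "internal"}
print(derived_level(sources))              # ...but the combination is not
print(within_scope("internal", sources))   # passes each source check, fails the boundary
```

Checking each source in isolation is the behavior the article describes going wrong; the explicit check on the derived output is the boundary nobody defined.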
And even after an agent is gone, the record still has to exist.
Think about a traditional scheduled batch process, a payroll script that runs every night at midnight. It has a name, an owner, a full audit trail. A dynamic agent that ran for three hours and returned a recommendation? Without intentional architecture, it leaves almost no governance footprint.
Governance can’t be bolted on. It has to be the architecture from the start.
That principle is built into Snowflake by design, guiding our own AI agent development and the agents we enable our customers to build.
When we built our own Snowflake Go-To-Market AI Assistant, we wanted to put all of the relevant sales knowledge, customer stories and account insights at our teams’ fingertips. For this to work, we had to get two things right: we had to ensure the information provided could be trusted, and we had to put controls in place so the agent exposed only the right information to the right people at the right time.
So we treated those two requirements as design constraints, not features.
The result: our agent now empowers more than 6,000 employees and answers over 35,000 questions per week. It’s an agent our teams trust to operate autonomously, with full auditability after the fact.
We’re supporting our customers in the same way, at scale. Companies like TS Imagine, Fanatics and United Rentals are building agents on Snowflake to accelerate their business.
LendingTree, an online lending marketplace, for example, uses Snowflake Cortex Code to rapidly build and deploy AI agents that deliver personalized financial guidance to consumers. The platform enables their teams to move from idea exploration to production in days rather than weeks, powering smarter financial decisioning workflows and more tailored consumer experiences that help borrowers navigate complex lending options.
Solving agent identity doesn’t just reduce risk. It removes the friction that’s stalling adoption.
Right now, fear of the unknown is what drives enterprises to assign a human to watch every agent, build apps instead of true agents or avoid the category entirely. That’s expensive. And it defeats the purpose.
That hesitation disappears once you can answer who the agent is, what it’s authorized to do and what it actually did. Agents earn trust the same way people do. Not through intention. Through evidence.
Capability is no longer the constraint. Trust is.