From Pilot to 6,000 Users: How to Scale Enterprise AI Agents

In mid-2025, we launched the Snowflake GTM AI Assistant to roughly 6,000 users across our sales and marketing organization — our first full-scale enterprise deployment of Snowflake Intelligence. By year-end, the assistant had already answered more than 330,000 questions, helping users work faster and make better decisions.

Over the past few months, we published several developer-focused posts on the technical foundations of building an enterprise AI assistant — how to structure sales data, enable knowledge retrieval over unstructured content, test agent behavior and refine instructions. Those posts were useful, but the questions we kept getting weren’t purely technical:

  • How did you staff the project?

  • What did the development timeline look like?

  • How did you roll it out without breaking trust?

  • What change-management tactics actually worked?

  • What results did you see after launch?

This post focuses on those nontechnical ingredients: team structure, launch phases, scope strategy and how we scaled from an MVP to a trusted enterprise capability.

Development team and launch phases

We started in late February 2025 with a narrow goal: build a RAG-based knowledge assistant so GTM teams could find the right documents across fragmented tools, web pages and drives. At the time, the project had one dedicated data scientist.

By late May 2025, we made the decision to pursue a full GTM AI assistant using Snowflake Intelligence — not only indexing knowledge content but also integrating structured sales and marketing data. That shift changed the required bar for quality and reliability, so we scaled the team to three to four data scientists and added a dedicated product manager.

From Day 1, we treated quality and user trust as P(-1) (a priority above even P0). User interviews and prior AI project experience made one thing clear: The first impression determines adoption. If early experiences are unreliable, rebuilding confidence takes dramatically more effort than getting it right up front.

That’s why we chose a phased rollout strategy: Start small, validate quality and workflows, then expand.

Here is how our launch phases looked at a glance:

  • Pilot: a small initial user group, focused on validating correctness, reliability and trust

  • Beta: a broader user group, focused on feature completeness and stickiness

  • GA (mid-September 2025): rollout to roughly 6,000 users across sales and marketing, followed by intentional expansion of personas and capabilities

Tip: A common failure mode we see is rushing an agent into broad usage before it consistently meets the quality bar. Prototyping AI agents is easy; launching reliable agents to thousands of users is the hard part. Give teams the time and space to earn trust early — it pays off later.

From a narrow MVP to expanding capabilities

Enterprise environments are complex. Across our sales and marketing organization alone, we support 15+ distinct personas, each with different workflows and needs:

  • Account executives (AEs)

  • Solution engineers (SEs)

  • Sales development representatives (SDRs)

  • Product marketing managers

  • Partner/alliance managers

  • Professional services specialists

  • Sales and marketing leadership

  • Campaign managers

  • … and more

As with any product, we avoided the “connect everything and hope users figure it out” trap. Instead, we deliberately started with a narrower MVP targeting the personas where we could deliver the highest value fastest.

Based on audience size and development effort per persona, we focused our GA launch on AEs, SEs and SDRs. These three groups represented roughly 50% of our 6,000-user target audience, allowing us to maximize early impact while keeping scope achievable.

After GA, we expanded intentionally: unlocking additional personas and capabilities one step at a time — without compromising reliability for the core audience.

Here is an illustrative example of how our agent’s capabilities expanded over time:

  • GA scope: Knowledge Assistant + core Salesforce, product usage and finance data.

  • First month post-GA: Bug fixes, plus expanded coverage and depth across the same sources.

  • Later waves: New data sources, such as marketing data, call transcripts and emails, partnership data, web search support and more.

When we launched the AI assistant in mid-September 2025, it was powered by six semantic views with 48 tables and ~1,400 columns. After GA, we launched 40+ new features/data sources, expanding the data layer to 10 semantic views with 64 tables and more than 1,750 columns.
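
As a concrete illustration of what that data layer is made of, here is a minimal sketch of registering one such semantic view via the Snowflake Python connector. The view, table and column names below are hypothetical, not the actual objects behind the GTM AI Assistant; the DDL follows the shape of Snowflake’s CREATE SEMANTIC VIEW statement.

```python
# A hypothetical semantic view for the agent's data layer. Object names are
# illustrative only; the DDL follows Snowflake's CREATE SEMANTIC VIEW syntax.
import snowflake.connector

SEMANTIC_VIEW_DDL = """
CREATE OR REPLACE SEMANTIC VIEW analytics.gtm.sales_pipeline_sv
  TABLES (
    opportunities AS analytics.sfdc.opportunities PRIMARY KEY (opportunity_id),
    accounts      AS analytics.sfdc.accounts      PRIMARY KEY (account_id)
  )
  RELATIONSHIPS (
    opp_to_account AS opportunities (account_id) REFERENCES accounts
  )
  FACTS (
    opportunities.amount AS amount
  )
  DIMENSIONS (
    accounts.account_name    AS account_name,
    opportunities.stage_name AS stage_name,
    opportunities.close_date AS close_date
  )
  METRICS (
    opportunities.total_pipeline AS SUM(amount)
  )
  COMMENT = 'Core Salesforce pipeline data exposed to the GTM AI Assistant'
"""

# Connection parameters are placeholders; use your own account and auth method.
conn = snowflake.connector.connect(
    account="<account>", user="<user>", authenticator="externalbrowser"
)
conn.cursor().execute(SEMANTIC_VIEW_DDL)
```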

Tip: Overly broad scope is one of the most common reasons enterprise AI initiatives stall. Start with a scope where the agent answers fewer questions — but answers them correctly. It’s easier to expand capabilities after trust is established than to recover adoption after early failures.

User activation and change management

Successfully deploying enterprise solutions takes time — often a full quarter or more. Like any product rollout, users naturally fall along the adoption curve: innovators, early adopters, early and late majority, and laggards.

[Figure: The adoption curve from Everett Rogers’ diffusion of innovations theory.]

During the pilot and beta phases, engagement tends to come from users higher up that curve — those eager to experiment with new tools. Once you reach GA and scale to thousands of users, reality changes. Late adopters enter the picture, expectations rise, and frustration can surface if adoption doesn’t happen fast enough.

This is where many otherwise solid AI initiatives stall — not because the technology doesn’t work, but because users never fully adopt it.

Ensuring users actually try it!

Using an AI assistant as part of a daily workflow represents a major habit change. Most users won’t switch unless your solution is dramatically better — 10x faster, 10x easier — or enables something they simply couldn’t do before. Feature parity with existing tools is rarely enough, especially for late adopters.

That said, even the best product can’t win users if they never try it in the first place.

So how do you ensure users both try your AI assistant and keep coming back?

First validate quality, then optimize for stickiness

Our approach was deliberate and sequential, in sync with our launch phases:

During the pilot phase, we focused exclusively on quality and trust. Our primary goal was to validate correctness, reliability and basic usefulness — remember, quality is P(-1).

In beta, once quality was proven, we shifted the focus toward feature completeness and stickiness. The question became: Does this agent solve enough real problems that users want to come back week after week?

The results gave us confidence:

  • >92% NPS among beta users

  • >70% weekly active user (WAU) retention rate

Those metrics told us we had a product worth scaling.
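
A side note on measurement: A weekly active user retention rate like this is typically computed week over week, i.e., the share of one week’s active users who return the following week. Here is a minimal sketch of that computation, assuming a simple usage event log (the schema is hypothetical):

```python
# Week-over-week WAU retention from a usage event log.
# Assumes a hypothetical log with one row per (user_id, event_ts).
import pandas as pd

def weekly_retention(events: pd.DataFrame) -> pd.Series:
    """Share of each week's active users who are active again the next week.

    For simplicity, this assumes every calendar week has some activity.
    """
    events = events.assign(week=events["event_ts"].dt.to_period("W"))
    actives = events.groupby("week")["user_id"].agg(set).sort_index()
    retention = {}
    for prev, curr in zip(actives.index[:-1], actives.index[1:]):
        retention[curr] = len(actives[prev] & actives[curr]) / len(actives[prev])
    return pd.Series(retention, name="wau_retention")

# Toy example: week 1 actives {a, b, c}; week 2 {a, b} -> 2/3; week 3 {a} -> 1/2
events = pd.DataFrame({
    "user_id": ["a", "b", "c", "a", "b", "a"],
    "event_ts": pd.to_datetime(["2025-07-01", "2025-07-02", "2025-07-03",
                                "2025-07-08", "2025-07-09", "2025-07-15"]),
})
print(weekly_retention(events))
```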

Create momentum and organizational excitement

At the GA phase, our primary question became simple: Are users aware of the GTM AI Assistant — and are they actually trying it?

To drive adoption, we partnered closely with our sales enablement team and ensured the fundamentals were in place:

  • A dedicated internal product page

  • Clear user guides and how-to documentation

  • Short videos from top users sharing best practices

  • A dedicated Slack channel for feedback and feature requests

From there, we actively built momentum:

  • Launch emails and live demos with individual teams

  • Regular mentions by sales executives — and even our CEO — in all-hands and team meetings

  • Weekly adoption reports shared with sales leadership so they could encourage participation within their teams

This combination of enablement, visibility and leadership support proved critical.

The result: Sustained adoption at scale

Through disciplined activation and change management, we drove significant adoption of the GTM AI Assistant over the three months following GA.

All of this effort paid off. Within the first two weeks after GA, roughly 25% of launched users had tried the agent at least once. By the end of the year, that number had grown to nearly 77% overall and over 90% across our primary personas (AEs, SEs and SDRs).
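
For transparency on the metric itself: “Tried the agent at least once” is a simple cumulative adoption rate over the launched population. A minimal sketch, assuming hypothetical launched-user and question-log tables:

```python
# Cumulative "tried it at least once" adoption rate, the metric behind the
# 25% -> 77% numbers above. Both input frames are hypothetical.
import pandas as pd

def adoption_rate(launched: pd.DataFrame, questions: pd.DataFrame,
                  as_of: str) -> float:
    """Share of launched users who asked >= 1 question on or before as_of."""
    launched_ids = set(launched["user_id"])
    tried = set(questions.loc[questions["asked_at"] <= pd.Timestamp(as_of),
                              "user_id"])
    return len(tried & launched_ids) / len(launched_ids)

launched = pd.DataFrame({"user_id": ["a", "b", "c", "d"]})
questions = pd.DataFrame({
    "user_id": ["a", "b", "a"],
    "asked_at": pd.to_datetime(["2025-09-20", "2025-10-05", "2025-12-01"]),
})
print(adoption_rate(launched, questions, "2025-09-30"))  # 0.25 (1 of 4 users)
print(adoption_rate(launched, questions, "2025-12-31"))  # 0.50 (2 of 4 users)
```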

For us, this reinforced a key lesson: Scaling enterprise AI isn’t just a technical challenge — it’s also a product activation and change-management problem.

Post-launch product thinking

Change goes both ways.

Transforming go-to-market workflows with enterprise AI agents is a major shift — not just for users but also for the data teams building those agents.

Traditionally, data teams are accustomed to working on relatively contained projects: building data pipelines, training models or launching dashboards, often with limited post-launch evolution. Once deployed, these assets typically require only incremental maintenance.

AI agents are fundamentally different.

An enterprise AI assistant is a living product. It continuously interacts with users, adapts to new data, evolves with changing workflows and must keep pace with rapid platform and model innovation. This requires a shift toward product thinking, system-level design and sustained team collaboration over time.

Adapting the team after GA

Following a successful GA launch, we had to evolve how we worked as a team.

Almost immediately, we saw:

  • A surge in inbound feature requests from users

  • Increased expectations around responsiveness and reliability

  • A growing need to balance rapid iteration with maintaining trust

  • Ongoing changes in the underlying AI platform that required rethinking earlier design decisions

At the same time, quality could not slip. Bugs needed to be addressed quickly, regressions prevented and improvements shipped without destabilizing the experience users had come to trust.

Scaling the product meant scaling our operating model.

What we invested in post-launch

To avoid being overwhelmed — and to sustain momentum — we invested early in the following areas:

  • An agile way of working: We moved to sprint-based development with a clear intake, triage and prioritization framework to balance bugs, feature requests and longer-term improvements.

  • Expanding the team with complementary skill sets: Beyond data science, we added analytics engineering and backend engineering capacity aligned with roadmap needs and system complexity.

  • Automated testing and CI/CD: Shortly after GA, we invested in automated testing frameworks and CI/CD pipelines to increase deployment speed while maintaining a high quality bar (a sketch of this kind of test follows this list).

  • Dedicated time for platform health and evolution: Every sprint included explicit capacity for refactoring, architecture evolution and platform upgrades to prevent long-term technical debt.
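
To make the testing investment concrete, here is a hedged sketch of the kind of golden-question regression test that can run in CI. The ask_agent helper, the questions and the expected answer fragments are all hypothetical; the point is simply to pin agent behavior on a fixed question set so regressions surface before users see them.

```python
# Golden-question regression tests for an agent, intended to run in CI.
# `ask_agent` is a hypothetical wrapper around the deployed agent endpoint;
# the questions and expected fragments below are illustrative only.
import pytest

GOLDEN_SET = [
    # (question, substrings a correct answer must contain)
    ("What was ACME Corp's product usage last quarter?", ["ACME", "credits"]),
    ("Who is the account executive for ACME Corp?", ["ACME"]),
]

def ask_agent(question: str) -> str:
    """Placeholder for the real call to the deployed agent."""
    raise NotImplementedError("wire this to your agent endpoint")

@pytest.mark.parametrize("question,must_contain", GOLDEN_SET)
def test_golden_question(question, must_contain):
    answer = ask_agent(question)
    for fragment in must_contain:
        assert fragment in answer, f"missing {fragment!r} in answer to {question!r}"
```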

The key takeaway

Launching an enterprise AI agent isn’t the finish line — it’s the starting point.

Long-term success requires treating AI agents as products, not projects: with ownership, iteration, operational rigor and continuous investment. Teams that embrace this shift are far better positioned to scale responsibly, adapt to change and continue delivering value as both user expectations and AI capabilities evolve.

The impact and ROI

At this point, you might be wondering: What were the actual results — and was it worth the investment?

Let’s break it down.

By the end of 2025, the GTM AI Assistant was answering over 35,000 questions per week for more than 2,500 weekly active users, compared to roughly 10,000 weekly questions at initial launch.

This growth wasn’t driven by adoption alone.

In the three months following GA, as we expanded the assistant’s capabilities and users began incorporating it into their daily workflows, usage intensity increased significantly. The average number of questions per weekly active user grew from 8.5 to 14 (consistent with 2,500 weekly active users × 14 questions ≈ 35,000 questions per week), reflecting both higher trust and deeper integration into real work.

Productivity gains at enterprise scale

The time savings varied by use case:

  • Minutes for simple knowledge or data retrieval

  • Hours for more complex analysis that would otherwise require custom SQL or analyst support

Even with a conservative assumption of 5 minutes saved per question, the impact compounds quickly. At our scale, this translated to the equivalent of 65+ full-time employees’ worth of annual productivity across a 6,000-person organization.
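
The back-of-envelope math behind that figure, using only the numbers cited in this post (the hours-per-FTE-year constant is our own rough assumption):

```python
# Back-of-envelope math behind the "65+ FTEs" figure, using numbers cited
# above. The hours-per-FTE-year constant is an assumption, deliberately rough.
questions_per_week = 35_000          # end-of-2025 weekly volume
minutes_saved_per_question = 5       # conservative lower bound
hours_per_fte_year = 2_000           # ~40 hours/week * 50 weeks (assumption)

hours_saved_per_year = questions_per_week * 52 * minutes_saved_per_question / 60
fte_equivalent = hours_saved_per_year / hours_per_fte_year
print(f"{hours_saved_per_year:,.0f} hours/year ≈ {fte_equivalent:.0f} FTEs")
# -> 151,667 hours/year ≈ 76 FTEs, consistent with the 65+ lower bound
```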

And that’s a lower-bound estimate. 

Demonstrating ROI at scale

At the end of 2025, we conducted a detailed analysis to understand both the cost structure and the return on investment of the GTM AI Assistant.

The analysis showed that the cost per active user was comparable to standard enterprise productivity tools on a per-user basis. When weighed against the productivity gains unlocked, the assistant delivered a return on investment exceeding 5x, even before any dedicated cost-optimization efforts.

Importantly, this estimate reflects only direct productivity savings. It excludes secondary benefits such as reduced analyst load, faster deal cycles and improved decision quality across sales and marketing teams. In other words, the economics were compelling well before the solution reached full maturity or operational efficiency.

Final thoughts

AI is fundamentally transforming enterprise and go-to-market teams.

The era of isolated demos and small-scale experiments is coming to an end. Enterprise AI is now delivering real, measurable impact at scale. But achieving that impact requires more than strong models or clever prompts — it demands the right mindset, disciplined execution and a deep focus on change management.

As we’ve outlined in this post, successfully scaling enterprise AI agents means:

  • Treating quality and trust as nonnegotiable

  • Launching deliberately and expanding scope intentionally

  • Investing in activation and adoption — not just technology

  • Evolving data teams toward long-term product ownership

Enterprises that get this right in the coming months won’t just improve productivity — they’ll fundamentally change how their teams work and gain a meaningful competitive advantage.

The shift is already happening. Enterprises that act now will define the next generation of go-to-market execution.
