Security and Governance Best Practices for Deploying Snowflake Intelligence Using Horizon Catalog

With Snowflake Intelligence now generally available and organizations accelerating adoption of agentic AI solutions, it’s an ideal time to take a closer look at your security and governance programs. Snowflake Horizon Catalog provides a universal AI catalog without vendor lock-in, unifying context and governance across all data so organizations can securely manage, discover and govern their entire data and AI estate across any engine, format or cloud. In this post, we’ll walk you through the architecture and best practices that help you deploy Snowflake Intelligence securely, responsibly and at scale using Horizon Catalog.

Snowflake offers enterprise-grade defense-in-depth controls in addition to built-in, proactive security controls for data and AI. At a high level, Snowflake Intelligence is like the rest of Snowflake with respect to how carefully we handle your data and help you secure and govern your environment. However, there are some unique challenges related to AI use. The diagram below illustrates the Snowflake Intelligence architecture annotated with how each step is protected. 

1. Connection

A user connects to ai.snowflake.com or a URL such as si-<org-acct>.privatelink.snowflakecomputing.com using private connectivity. With the new Snowflake Intelligence-only access support (via ALLOWED_INTERFACES configuration), users can seamlessly access ai.snowflake.com without accessing the rest of the Snowflake experience. 
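As a hedged sketch only: the post names an ALLOWED_INTERFACES configuration but not its DDL, so the syntax, the interface value and the policy/user names below are assumptions to illustrate the idea; verify the exact form in Snowflake's documentation before use.

```sql
-- Hypothetical sketch: limit a user to the Snowflake Intelligence interface only.
-- ALLOWED_INTERFACES is taken from the feature name above; the property syntax
-- and the 'SNOWFLAKE_INTELLIGENCE' value are assumptions, not confirmed DDL.
CREATE AUTHENTICATION POLICY si_only_policy
  ALLOWED_INTERFACES = ('SNOWFLAKE_INTELLIGENCE');

-- Attach the policy to an illustrative business user.
ALTER USER business_analyst SET AUTHENTICATION POLICY si_only_policy;
```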

2. Snowflake Horizon

This is where the bulk of the defense-in-depth controls fall, helping you secure and govern access at scale.

2a. Network security: Snowflake offers a comprehensive set of controls for securing a network, such as network policies and private connectivity (PrivateLink). Using PrivateLink only, a customer can lock down the Snowflake instance so that only the customer’s private networks can reach Snowflake Intelligence. By default, Snowflake prevents access from malicious IPs through the built-in malicious IP protection feature. Customers can also use egress network rules to restrict outbound access to only their trusted external tools.
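For example, a network policy can pin account access to known corporate egress ranges; the policy name and CIDR blocks below are illustrative.

```sql
-- Allow access only from corporate network ranges (CIDRs are illustrative).
CREATE NETWORK POLICY corp_only_policy
  ALLOWED_IP_LIST = ('203.0.113.0/24', '198.51.100.0/24')
  COMMENT = 'Restrict Snowflake Intelligence access to corporate egress IPs';

-- Enforce the policy account-wide.
ALTER ACCOUNT SET NETWORK_POLICY = corp_only_policy;
```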

2b. Identity and access management: Snowflake Intelligence sessions automatically inherit the logged-in user’s identity, default role and default warehouse. This user identity is mapped and maintained throughout the session’s lifecycle, designed to ensure that all background agents and tools inherit the user’s assigned privileges. Consequently, neither agents nor Snowflake Intelligence can perform any action beyond the logged-in user’s permissions. 

Users should authenticate to their Snowflake instance using strong authentication methods. We recommend that customers use federated authentication or multifactor authentication (such as a passkey or an authenticator app). By default, Snowflake checks for credentials leaked on the dark web and blocks access through leaked password protection. Additionally, Snowflake is on a journey to enforce strong authentication by default.
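A minimal sketch of enforcing this with an authentication policy follows; the policy name is illustrative, and you should confirm the property values against Snowflake's authentication policy reference for your edition.

```sql
-- Require strong, MFA-backed authentication for interactive users
-- (policy name is illustrative).
CREATE AUTHENTICATION POLICY strong_auth_policy
  AUTHENTICATION_METHODS = ('SAML', 'PASSWORD')  -- federated login preferred
  MFA_ENROLLMENT = REQUIRED;                     -- force MFA enrollment

-- Apply the policy account-wide.
ALTER ACCOUNT SET AUTHENTICATION POLICY strong_auth_policy;
```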

2c. Classification and tagging: Customers should classify and tag their data to accurately identify sensitive information. This identification is critical for applying the proper protection policies to mitigate sensitive data leakage and ensure that users (along with the agents and tools they employ) access only what they are authorized to see. Snowflake’s automatic data classification and tagging make it easier for customers to classify their data. Snowflake Intelligence respects the data classification, helping to protect sensitive data.
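Concretely, classification can be invoked on a table and sensitive columns tagged by hand where needed; the database, schema, table and tag names below are illustrative.

```sql
-- Classify a table and apply system tags automatically
-- (object names are illustrative).
CALL SYSTEM$CLASSIFY('sales_db.public.customers', {'auto_tag': true});

-- Or tag a column manually where classification needs a human decision.
CREATE TAG IF NOT EXISTS governance.tags.pii_type;
ALTER TABLE sales_db.public.customers
  MODIFY COLUMN email SET TAG governance.tags.pii_type = 'EMAIL';
```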

2d. Access control and data protection: Once the user has passed the network checks and is authenticated, an authorization step will kick in. This is where customers should leverage least-privilege access by adhering to the following guidelines:

  • Use role-based access control (RBAC) to govern access to all objects, including data and warehouses. To enforce tighter security, customers should also control access to the large language models themselves. Snowflake allows organizations to manage model access at two levels: an account-level allowlist that defines all permissible models, and role-level access controls that further restrict which models specific users (or roles) are allowed to use. In addition, customers can use granular roles such as CORTEX_USER and CORTEX_EMBED_USER to grant access to Snowflake Cortex AI features. 
  • Data protection: Customers should use attribute-based access controls to restrict access to sensitive data sets by leveraging Snowflake column-level security and row-level security. Snowflake Intelligence respects all the data protection policies applied to the data objects. By default, Snowflake maintains rigorous security standards, always encrypting customer data at rest and in transit. 
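The guidelines above can be sketched in SQL; the role, policy and object names are illustrative, and the masking logic is a deliberately simple example.

```sql
-- Grant Cortex feature access through a granular database role
-- (the target role name is illustrative).
GRANT DATABASE ROLE SNOWFLAKE.CORTEX_USER TO ROLE si_analyst;

-- Column-level security: mask email addresses for everyone outside PII_ADMIN.
CREATE MASKING POLICY governance.policies.mask_email AS (val STRING)
  RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() = 'PII_ADMIN' THEN val ELSE '***MASKED***' END;

ALTER TABLE sales_db.public.customers
  MODIFY COLUMN email SET MASKING POLICY governance.policies.mask_email;
```

Because Snowflake Intelligence runs in the calling user's security context, the same masking applies whether the column is read directly or through an agent's tool call.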

2e. Data quality: When utilizing Snowflake Intelligence, the accuracy of responses generated by underlying tools (such as Snowflake Cortex Search and custom solutions) directly depends on the quality of your data. To encourage reliable, unbiased results, customers should proactively manage data health. Snowflake’s native capabilities — including data quality policies, anomaly detection and notifications — can be leveraged to immediately alert data stewards to issues such as duplicates or null values that could lead to inaccurate or biased AI responses.
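As one way to operationalize this, Snowflake's system data metric functions can run on a schedule against the columns that feed your agents; the table and column names below are illustrative.

```sql
-- Run data quality checks hourly on an illustrative table.
ALTER TABLE sales_db.public.customers
  SET DATA_METRIC_SCHEDULE = '60 MINUTE';

-- Alert-worthy signals: null emails and duplicate customer IDs.
ALTER TABLE sales_db.public.customers
  ADD DATA METRIC FUNCTION SNOWFLAKE.CORE.NULL_COUNT ON (email);

ALTER TABLE sales_db.public.customers
  ADD DATA METRIC FUNCTION SNOWFLAKE.CORE.DUPLICATE_COUNT ON (customer_id);
```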

2f. Monitoring: Agent monitoring and evaluations allow you to evaluate agents before deployment and monitor all agent traffic in production, providing detailed visibility into the quality and latency of your agents.

3. Agent API

Snowflake Intelligence uses agent APIs to orchestrate access to a comprehensive set of tools. This tool set includes native capabilities such as Cortex Search and Snowflake Cortex Analyst, along with any other required customer-defined tools, to generate the final, appropriate response.

4. Snowflake Cortex tool orchestrations

All orchestration and inter-tool communication is governed through the cloud services layer to maintain consistent checks and balances (as outlined in step 2). For example, when Cortex Analyst generates a SQL query (Cortex Analyst can generate only SELECT queries), it can access only the objects that the calling user has permission for. The virtual warehouse then executes this query strictly within the user’s security context. This ensures that all existing data governance policies — including masking, row-level policies, tokenization and so on — are automatically applied, as every operation carries the user’s identity, context and permissions.

5. Final response

Snowflake Intelligence uses the orchestrator to iterate through multiple steps with Anthropic or Azure OpenAI models to generate the final response. Across all those steps, Snowflake Intelligence preserves the user’s permissions and identity end to end.

Trust Center is an essential companion for security teams to understand the current security posture of their Snowflake environment (including Snowflake Intelligence) and compliance against CIS benchmarks. 

Regulation and compliance frameworks (both current and emerging regulations, such as the EU AI Act) are also a critical part of the evaluation of AI tools. Snowflake recently achieved the ISO/IEC 42001 certification, underscoring our commitment to providing customers with transparency and accountability in our AI practices. 

Next steps

As you embark on your AI journey with Snowflake Intelligence, remember that innovation thrives on a foundation of trust. Security and governance aren’t just safeguards — they’re accelerators for responsible AI adoption. Here are three actions you can take today to strengthen your organization’s security posture and confidently scale Snowflake Intelligence:

  1. Check out agent monitoring to track the usage of your agents over time and configure granular RBAC to ensure least-privilege access for agents.

  2. Set up automatic data classification to automatically detect and tag sensitive data and prevent sensitive data leakage.

  3. Check out the Trust Center to understand your current security posture and take action on the critical severity findings.
