Best Practices for Responsible AI Innovation and Governance Frameworks

AI is advancing at breakneck speed, and new innovations inevitably outpace governments’ ability to regulate their use. When regulation struggles to keep up, unchecked AI technologies risk infringing on fundamental rights and freedoms. Some of the most pressing risks include:

  • Privacy: AI systems can process enormous amounts of personal data, raising concerns about how this data is used and protected.

  • Bias: AI algorithms can inadvertently perpetuate biases present in training data, creating “algorithm prisons”: situations where individuals or systems become trapped or constrained by the decisions generated by algorithms, often without the ability to understand or challenge those outcomes.

  • Autonomy: AI systems used in decision-making processes can potentially undermine individual autonomy if not properly designed.

Rather than struggling with a reactive approach that tackles each new technology case by case, governments worldwide are developing AI governance frameworks that proactively address these challenges. Where frameworks do not yet exist, governments typically start by setting AI principles. By establishing baseline requirements and protocols for the safety, security, equity and transparency of AI technologies, these frameworks aim to create an environment that fosters safe and responsible AI innovation while keeping pace with the latest developments.

In this post, we delve into the evolving landscape of AI governance, uncovering how different governments across the globe are navigating these challenges.

The European Union’s AI Act sets the global standard

In 2018, the European Commission launched the EU AI Alliance, gathering more than 6,000 stakeholders to establish ethical principles and create a public dialogue on the trustworthy use of AI. Six years later, the EU’s Artificial Intelligence Act became the world’s first comprehensive legal framework specifically designed to manage the risks associated with AI technologies, with provisions to be implemented gradually over six to 36 months.

The AI Act emphasizes a human-centric approach to AI development and deployment, following the philosophy that AI should not be an end in itself, but rather a tool to enhance human flourishing and promote societal well-being. It accomplishes this by establishing several core principles, including: 

  • Prohibiting harmful AI practices: The AI Act prohibits the use of AI systems with unacceptable risks, including those that manipulate people’s decisions or exploit their vulnerabilities, evaluate people based on social behavior or personal traits, or predict criminal risk. It also bans systems that perform untargeted scraping of facial images, infer emotions in workplaces or schools, or categorize individuals based on biometric data.

  • Enforcing regulations based on risk classification: High-risk AI systems (including those used as product safety components and those that are themselves products covered by EU legislation) are subject to heightened obligations. Limited-risk AI systems, like chatbots, face lighter transparency obligations, while minimal-risk systems (such as AI-enabled video games and spam filters) remain largely unregulated. A toy sketch of this tiered classification appears after this list.

  • Establishing obligations for transparency: AI systems must be designed for transparency, providing clear instructions that include details about the provider, the system’s capabilities and limitations, and any potential risks. Companies must inform users when they’re interacting with an AI system, except when it’s obvious or for legal purposes such as crime detection. 

  • Creating measures to support innovation: The EU is requiring member states to establish “AI regulatory sandboxes” where AI systems can be developed, tested and validated before market release. Small and medium-sized businesses and start-ups will have priority access to these sandboxes.
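To make the tiered structure concrete, the toy sketch below expresses the classification in code. The tier names follow the Act’s risk-based approach, but the mapping of example use cases to tiers is an illustrative paraphrase of the summaries above, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers paraphrased from the AI Act's risk-based approach."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "heightened obligations before and after market release"
    LIMITED = "lighter transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical mapping from the example use cases mentioned above
# to the tier they would most plausibly fall under.
EXAMPLE_TIERS = {
    "social-behavior scoring system": RiskTier.UNACCEPTABLE,
    "untargeted facial image scraping": RiskTier.UNACCEPTABLE,
    "product safety component": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "AI-enabled video game": RiskTier.MINIMAL,
    "spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Describe the illustrative obligations attached to a use case's tier."""
    tier = EXAMPLE_TIERS.get(use_case)
    if tier is None:
        return f"{use_case}: not in this toy mapping; consult the Act itself"
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in EXAMPLE_TIERS:
    print(obligations_for(case))
```

In practice, classification under the Act depends on detailed legal criteria rather than a simple lookup, but the tiered logic above captures the shape of the regime.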

Europe’s proactive stance on AI governance positions it as a global leader in setting regulatory standards, and a number of other countries are developing similar governance frameworks.

The U.S. prioritizes four pillars of responsible AI innovation

The U.S. has continued to build on past administrations’ efforts to establish comprehensive AI governance frameworks. Early on, several federal departments established their own principles, such as the Department of Defense’s responsible AI principles and the Department of Energy’s AI Risk Management Playbook. More recently, the White House published its Blueprint for an AI Bill of Rights to guide development. Central to these efforts is the ongoing refinement of laws and executive directives to mitigate the risks of AI deployment while fostering an environment conducive to innovation. The U.S. has focused on four core pillars of responsible AI innovation as the foundation for its frameworks:

  1. Safety: Ensuring the safety of AI involves rigorous testing procedures, certification requirements, and mechanisms for continuous monitoring and evaluation.

  2. Security: Security considerations are paramount to safeguarding AI systems against cyberthreats, unauthorized access and malicious use. This involves setting standards for data protection, encryption protocols and resilience against adversarial attacks.

  3. Equity: Equity in AI development aims to prevent biases in algorithms that could perpetuate discrimination or inequitable outcomes. This involves promoting diversity in AI research and development, ensuring representative data sets, and implementing fairness and accountability measures.

  4. Transparency: Transparency involves ensuring that AI systems are understandable and explainable. This enables stakeholders (including users, regulators and the public) to comprehend how AI decisions are made and to assess their reliability and fairness. In this way, transparency builds trust.

These pillars of AI governance have enabled innovation across industries, creating opportunities for startups and entrepreneurs. Beyond that, new job roles are emerging that involve the oversight, interpretation and management of AI systems; for example, demand for data scientists, machine learning engineers, AI ethicists and AI trainers has surged. With the proper measures in place, AI has the potential to augment human capabilities and create new job opportunities, a promising outlook that contrasts with fears that AI will eliminate jobs.

An example of responsible AI innovation in practice

Generative AI solutions for customer service provide a useful example of the type of responsible, human-centric innovation that can be fostered by methodical AI governance frameworks.

For call centers, managing high call volumes can pose a significant challenge: comprehensive interactive voice response systems can lead to lengthy phone menus for customers, but routing a call to a human agent is far more expensive and can cause longer wait times. To handle the breadth of issues customers bring to the table, agents often have to keep dozens of applications open on their screens, constantly switching between them. And many of the calls handled by human agents could easily have been handled by automation in the first place. AI and machine learning could provide a solution.

How would it work? Machine learning could be used to predict what a customer is calling about, based on all of that customer’s previous interactions with the business. Depending on the predicted subject, the call could then be routed either to a generative AI chatbot that communicates with the customer and resolves the problem, or to a human agent, who would also receive the necessary context in advance. If the model correctly predicts the reason for the call before the customer even states the problem, it could lead to first-call resolution and significant reductions in operating costs and handling time.
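As a rough illustration, the sketch below pairs a simple intent classifier with a routing rule. It assumes scikit-learn; the training examples, intent labels, chatbot scope and confidence threshold are all hypothetical stand-ins for what a production system would learn from the business’s own interaction history.

```python
# A minimal sketch of intent prediction plus routing. Everything here is
# hypothetical: a real system would train on the business's own call logs.
from dataclasses import dataclass

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy interaction history: summaries of past contacts, labeled by call reason.
HISTORY = [
    ("card declined at checkout twice this week", "billing"),
    ("asked about a late fee on the last statement", "billing"),
    ("password reset link never arrived", "account_access"),
    ("locked out after too many login attempts", "account_access"),
    ("package tracking shows no movement for days", "shipping"),
    ("delivery missed the promised date", "shipping"),
]

texts, labels = zip(*HISTORY)
vectorizer = TfidfVectorizer()
classifier = LogisticRegression(max_iter=1000).fit(
    vectorizer.fit_transform(texts), labels
)

# Hypothetical policy: which intents the chatbot may resolve, and how
# confident the prediction must be before skipping a human agent.
CHATBOT_INTENTS = {"billing", "shipping"}
CONFIDENCE_THRESHOLD = 0.5

@dataclass
class RoutingDecision:
    intent: str
    confidence: float
    destination: str  # "chatbot" or "human_agent"

def route_call(recent_interactions: str) -> RoutingDecision:
    """Predict the likely call reason from a customer's recent history,
    then route to the chatbot or to a human agent with context attached."""
    probabilities = classifier.predict_proba(
        vectorizer.transform([recent_interactions])
    )[0]
    best = probabilities.argmax()
    intent = classifier.classes_[best]
    confidence = float(probabilities[best])
    if confidence >= CONFIDENCE_THRESHOLD and intent in CHATBOT_INTENTS:
        destination = "chatbot"
    else:
        # Low confidence or out-of-scope: hand off to a human agent,
        # passing the predicted intent along as advance context.
        destination = "human_agent"
    return RoutingDecision(intent, confidence, destination)

print(route_call("the charge on my statement looks wrong and the fee seems high"))
```

The key design choice is the confidence threshold: when the model is unsure, the call falls back to a human agent, who still benefits from the predicted context.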

By using Snowflake Cortex, an enterprise could ensure that the training data, metadata, gen AI model and prompts remain private. Imagine the amount of sensitive information these calls contain: agents deal with everything from private health information to Internal Revenue Service data.
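As a sketch of what that might look like, the snippet below calls a Cortex LLM function from Snowpark so the model runs inside the Snowflake account and the transcript is processed where it is already governed. The connection parameters, the call_transcripts table and its transcript column are placeholders, and model availability varies by region.

```python
# A hedged sketch: summarizing a support call with a Cortex LLM function that
# executes inside the Snowflake account, so the (hypothetical) call_transcripts
# data never leaves the governed environment. Assumes snowflake-snowpark-python
# and valid connection parameters; all names here are placeholders.
from snowflake.snowpark import Session

session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}).create()

result = session.sql(
    """
    SELECT SNOWFLAKE.CORTEX.COMPLETE(
        'mistral-large',
        CONCAT('Summarize this support call and suggest a resolution: ',
               transcript)
    ) AS summary
    FROM call_transcripts
    LIMIT 1
    """
).collect()

print(result[0]["SUMMARY"])
```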

Maintaining privacy is crucial for building responsible and trustworthy AI. And by focusing on aiding rather than replacing human agents, an ML and generative AI-powered solution stands as a stellar example of how responsible AI innovation can work toward improving society while reducing related risks. 

A race to regulate AI

Countries around the world are taking different approaches to AI regulation. Some have adopted a “wait and see” strategy, believing that a soft regulatory approach will attract and retain AI businesses and investment. Others are adopting proactive strategies, prioritizing established best practices and aiming to usher in safe and ethical AI systems. The EU AI framework takes a definitive stand, requiring that humans always remain in the loop for critical decision-making.

Regardless of the chosen regulatory approach, governments are weighing the real benefits and consequences of AI systems. An emerging theme across global principles, frameworks and laws is the recognition that AI systems affect the lives of every citizen, paired with the recognition that those same systems can measurably improve user productivity.

Commitments to enhancing AI regulations underscore a proactive approach to harnessing the potential of AI while mitigating its risks. By focusing on safety, security, equity and transparency, these regulatory efforts aim to foster an AI ecosystem that is not only innovative and competitive, but also ethical and accountable to society at large.

 
