AI Regulatory Compliance: Navigating the Evolving Legal Landscape
As organizations strive to harness the transformative power of artificial intelligence, they face an increasingly complex web of regulatory requirements that varies significantly across jurisdictions and industries. The race to implement AI capabilities while maintaining legal compliance has created a strategic inflection point for business leaders worldwide. Understanding this regulatory terrain isn’t merely a compliance exercise—it represents an opportunity for organizations to establish a competitive advantage through responsible AI governance that builds trust with stakeholders and enables sustainable innovation. For any organization developing an AI compliance framework, regulatory compliance must therefore be a central focus.
The leadership imperative for AI compliance has elevated this issue from a technical consideration to a boardroom priority. With regulations evolving in real time and enforcement frameworks taking shape, forward-thinking organizations are proactively developing governance structures rather than waiting for regulatory clarity. This approach acknowledges that AI governance is both risk mitigation and a strategic business capability—one that can accelerate responsible adoption while protecting corporate reputation and stakeholder interests.
Antony Cook, Corporate Vice President and Deputy General Counsel at Microsoft, and Nasser Ali Khasawneh, Global Head of AI at Eversheds Sutherland, recently joined to share their perspectives and experience helping organizations navigate the emerging AI era. Diligent has captured some key insights below, and you can watch the full webinar on-demand here.
“One thing that’s changed in the past 18 months is the attention that boards are putting on making sure they have the right approach to governance and that it reflects the implication of the technology across their organization…” — Antony Cook, Corporate Vice President and Deputy General Counsel, Microsoft
Global Regulatory Complexity: A Mounting Challenge
The EU AI Act stands as the most prominent regulatory framework, but it represents just one piece of a complex global puzzle. Currently, the OECD is tracking 61 countries actively developing AI policies. Beyond these national frameworks, approximately 393 sector-specific initiatives are underway, alongside 760 governance initiatives being monitored by the OECD.
This extraordinary regulatory activity underscores the universal recognition that AI adoption requires appropriate guardrails. However, significant divergence exists in regulatory approaches across jurisdictions, creating compliance challenges for multinational organizations.
Cook identified three predominant regulatory approaches emerging globally:
Safety-First Approach
Some territories prioritize mitigating potentially harmful outcomes from AI applications. The White House commitments on AI safety and the UK’s Bletchley Park AI safety summit exemplify this approach, convening stakeholders to address AI threats and containment strategies.
Comprehensive Legislation
The EU AI Act represents the most ambitious attempt to create broad legislation covering numerous AI-related issues. This approach aims to establish protective guardrails without impeding the innovation and progress that AI can deliver.
Targeted Regulatory Focus
Countries with limited resources for comprehensive legislation, or those preferring to assess AI’s impact before enacting broad regulations, are addressing specific issues individually. Japan, for example, has amended intellectual property laws to address copyright infringement concerns in AI training data.
Many jurisdictions are employing hybrid approaches or shifting strategies as political leadership changes, as seen in the UK.
Cross-Border Harmonization: The Critical Challenge
While AI regulation develops along jurisdictional lines, harmonization will ultimately be essential for multinational compliance. As Khasawneh explained: “We always have to work within jurisdictions, within national laws, but it’s fair to say that AI knows no boundaries. It is a technology that flies across boundaries so the need for harmonization could not be greater as we consider various aspects of law that are affected by AI”.
The UK’s Bletchley Park initiative represents a positive step toward standardization and international cooperation. Khasawneh contemplates whether “we will move towards a global body that is the AI equivalent of the World Intellectual Property Organization”. However, he acknowledges that geopolitical tensions will likely impede the development of a comprehensive global AI treaty.
Priority Legal Concerns for Organizations
As Global Head of AI at Eversheds Sutherland, Khasawneh has unique insight into the common legal challenges organizations face. These include:
Governance Policy Development
Organizations need comprehensive guidance on appropriate AI use, including frameworks to help employees minimize harmful consequences when using or developing AI systems.
AI-Specific Contracting
Companies require guidance on structuring contracts with partners and suppliers that address the unique characteristics and risks of artificial intelligence and generative AI.
Intellectual Property Compliance
Organizations developing or using AI systems seek clarity on legal risks related to intellectual property rights, copyright considerations, and potential infringement issues with their platforms.
Data Privacy Management
Businesses must understand data privacy risks introduced by AI systems and providers to avoid infringement and protect proprietary data exposed to AI processing.
Workplace AI Implementation
Companies want to leverage AI benefits to support employees while mitigating risks around worker rights and addressing potential bias in applications such as recruitment and screening.
These diverse challenges highlight the need for specialized expertise in a rapidly evolving field, making external counsel increasingly valuable for many organizations. Beyond tactical guidance, however, businesses must focus on developing comprehensive frameworks for responsible AI governance.

Microsoft’s Approach to Responsible AI
Cook shared Microsoft’s journey toward establishing responsible AI practices. The company recognized that while engineers and developers view AI systems through a technological lens, a broader perspective incorporating diverse viewpoints was essential for establishing globally applicable ethical parameters.
Microsoft assembled a multidisciplinary team including lawyers, humanists, sociologists, and computing engineers to explore responsible AI development and implementation. This collaborative effort established principles centered on reliability, safety, privacy, security, accountability, and transparency—creating an AI standard applied consistently across the organization and operationalized through engineering practices.
Board Accountability and AI Leadership
The EU AI Act already mandates AI literacy among boards and leadership teams. Khasawneh believes “AI accountability is going to become an absolute requirement for boards to comply with, and for CEOs to lead with”. Cook has likewise observed increased board attention to governance frameworks that address AI’s cross-organizational impact: “One thing that’s changed in the past 18 months is the attention that boards are putting on making sure they have the right approach to governance and that it reflects the implication of the technology across their organization because this is a technology which is changing go-to-market, it’s changing research and development, it’s changing supply chain management, it’s changing employee productivity and workforce development. So it has a very broad implication across organizations, which I think means that boards are just much more focused”.
“We really need to instill a self-learning culture. You can’t expect to arrive at a board meeting once a month and then learn about AI at the board meeting. We all need a commitment to double down on AI literacy because that really puts you in a position to make informed decisions if you’re a boardroom, for example, or a CEO.” — Dale Waterman, Principal Solution Designer, Diligent
Cook acknowledges the challenge boards face in assimilating AI information but cautions against delaying action while seeking perfect understanding: “The technology is so important to competitive differentiation and opportunity, so companies need to be involved in AI. The question is how do they do that appropriately?”
He recommends boards leverage expertise from AI industry leaders like Microsoft and trade associations: “There’s a lot of the trade associations, which are creating the sets of materials you can leverage in order to be able to get yourself across the issues. Making sure that you’re aware of what the technology is doing and how it’s being used in your organization is a big way that you can manage the sorts of risks that you may be exposed to”.
The Risk of Inaction in AI Regulatory Compliance
Perhaps the greatest business risk regarding AI today is doing nothing. Organizations can determine their preferred approach, but every company needs a considered strategy and clear ambitions rather than passive observation. As Cook notes, “This is not a fad, it’s not going away”.
