Digital Regulation - What every board needs to know
8 April 2026
Author: Georgina Kon
Digital regulation is growing in every region in the world. China has some of the most sophisticated AI laws, US states are introducing new data and AI regulation, and the EU's Digital Package continues to generate significant compliance activity globally.
While EU digital laws are not shaping the global discourse on innovation in the way that, for example, the GDPR shaped attitudes to data regulation, they are still garnering significant attention from governments and Boards worldwide.
Where Boards recognise their companies as heavily reliant on technological innovation, they are increasingly demanding briefings on how new digital laws – not just in the EU but across the globe – are being leveraged and managed.
Geopolitics is shaping digital regulation. The US and UK are pursuing an explicitly innovation-friendly approach, while China seeks to balance innovation with State oversight, and the EU prioritises individual protection and competition through principles-based legislation.
While EU frameworks have become common currency for discussing technology regulation, global businesses are not uniformly embedding EU principles globally: many are calibrating their approach to avoid unnecessary constraints on growth.
The Board's role in this landscape varies by business, but there is one common thread: companies investing heavily in technology, or those acknowledging significant exposure to cyber risk, are increasingly likely to have Non-Executive Directors or dedicated subcommittees focused on monitoring and managing the impact of digital regulation on short, medium, and long-term plans. Some are even considering structural changes to capture opportunities and mitigate risk.
Many businesses now have, or are putting in place, a senior AI committee to set AI strategy. The EU AI Act, alongside the explosion of generative AI, has prompted organisations to create inventories of AI in use, define ambitions for generative and agentic AI adoption, and in some cases publish shareholder- and public-facing statements on AI values and policies.
However, for global groups, an AI strategy overly focused on EU AI Act compliance may be short-sighted. A wide range of existing laws already intersect with AI: sectoral regulation, data protection, IP, product liability, health and safety, cyber, employment, and ESG, to name a few.
Boards must understand how new AI initiatives interact with existing strategies in those areas, and how controls and risk appetite are being embedded from the top down. Key Board questions include: What are our current and future uses of AI, and what are the barriers to adoption? Are policies, key use cases, and spend being properly scrutinised? How are compliance and risk mitigation programmes performing, and what AI-related items sit on the risk register?
In the EU, NIS2 has significantly expanded the definition of critical infrastructure (extending, for example, to waste management, chemical production, manufacturing, and the food sector) and has caught Board attention.
This is partly because of provisions requiring management to approve cyber risk-management measures, oversee their implementation, and undergo cyber training. More crucially, management can be held personally liable for breaches and disqualified from their roles in certain circumstances.
Vendors supplying businesses within the scope of NIS2 or DORA are also finding more stringent controls being flowed down to them, generating significant compliance costs.
Alongside cyber reporting obligations, e.g. under the US SEC rules and the UK Corporate Governance Code (including Provision 29, which requires boards to report on the effectiveness of material internal controls), this has driven much closer Board engagement with cyber posture, risk appetite, and control frameworks. Boards are commissioning privileged cyber risk and governance assessments, ensuring that subcommittee structures, minutes, response plans and BCDR plans demonstrate robust cyber controls, and demanding regular metrics, training, updates and opportunities to influence. The surge in AI adoption and the emerging risks posed by quantum computing have made the consequences of cyber attacks more acute than ever.
At Davos this year, data sovereignty dominated discussion, and the conversation has shifted well beyond data location. The focus is now on strategic control frameworks encompassing compute, data, models, and platforms, and on reducing reliance on non-domestic clouds and vendors. Secure domestic compute capacity, data localisation, and strong enforcement of protections against data flows to potentially non-allied jurisdictions are now seen as strategic imperatives.
In the coming months, Boards must monitor regulatory developments closely. New US legislation (including rules on Preventing Access to Americans' Bulk Sensitive Personal Data) sits alongside GDPR concerns and data localisation regimes in China, India, the Middle East, and elsewhere.
Over time, Boards may need to invest in new infrastructure, re-think supply chains, and impose controls on certain cross-border data transfers. At the same time, for some businesses - particularly those in domestic infrastructure and dual-use technologies - the surge of investment in sovereign capability represents a significant commercial opportunity.
It is impossible in a short article to capture the full breadth of digital regulation. Laws promoting competition in the cloud market (e.g. the EU Data Act), protecting gig economy workers (e.g. the Platform Work Directive), and ensuring online safety (e.g. the UK's Online Safety Act, estimated to apply to 100,000 businesses) all carry far-reaching consequences. Encouragingly, regulators are receiving feedback on the risks of overregulation stifling economic success (for example, prompting the EU to consider targeted deregulation in certain areas) and initiatives such as the EU Health Data Space are opening new commercial avenues in life sciences and health technology.
The pace of digital regulatory change shows no sign of slowing. Boards that treat this as a compliance matter to be delegated entirely to legal or IT teams risk being caught off-guard, or failing to take advantage of new opportunities.
Digital regulation is no longer a background risk. It is a Board-level strategic priority.