The executive order is a comprehensive plan to shepherd the nation through the unfolding AI epoch as more robust and potentially perilous AI models make their entrance.
To a room filled with anticipatory laughter, President Biden recently veered from his scripted address to share his personal encounters with deepfake technology, moments before signing a sweeping executive order on artificial intelligence (AI). The occasion blended amusement with the uneasy reality of the emerging era of synthetic media. Biden then laid out a comprehensive plan to shepherd the nation through the unfolding AI epoch, a milestone that came just ahead of a global AI summit in the United Kingdom.
The executive order, stretching over a hundred pages, is a blueprint for the United States federal government’s approach to regulating the burgeoning field of AI as more robust and potentially perilous models make their entrance. The order strives to mitigate present-day harms such as algorithmic discrimination, while also forecasting and planning against future threats, notably models that could aid the creation of novel bioweapons.
Making companies legally accountable
While the executive order isn’t a legislative powerhouse, it’s bolstered by the invocation of the US Defense Production Act, rendering it enforceable for as long as it remains in effect. This makes companies legally accountable should they neglect their newfound responsibilities.
The cornerstone of the order is a set of new safety requirements pegged to computational power: models trained with more than 10^26 floating-point operations (FLOPs) now fall within the regulatory ambit. This threshold sits above the compute currently needed to train frontier models like GPT-4. The mandate establishes a baseline of transparency and safety, fostering a government-industry dialogue as models grow in power and potential for misuse over time.
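To put the 10^26 FLOP threshold in perspective, a common back-of-envelope heuristic (not part of the order itself) estimates training compute as roughly 6 × parameters × training tokens. A minimal sketch, using illustrative model sizes rather than any official figures:

```python
# Back-of-envelope check of whether a training run crosses the
# executive order's 10^26 FLOP reporting threshold.
# Heuristic: training FLOPs ~ 6 * parameters * tokens.
# All model figures below are illustrative assumptions.

THRESHOLD_FLOPS = 1e26  # threshold named in the executive order

def training_flops(parameters: float, tokens: float) -> float:
    """Estimate total training compute via the 6*N*D rule of thumb."""
    return 6 * parameters * tokens

def crosses_threshold(parameters: float, tokens: float) -> bool:
    return training_flops(parameters, tokens) >= THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 2 trillion tokens:
flops = training_flops(70e9, 2e12)  # ~8.4e23 FLOPs, well below 1e26
print(f"{flops:.2e}", crosses_threshold(70e9, 2e12))
```

By this rough estimate, today's frontier-scale runs sit one to two orders of magnitude below the threshold, which is why the order is described as forward-looking.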
Ascertaining content authenticity
Addressing the murky waters of deepfake technology, the order designates the Commerce Department to helm the development of standards for digital watermarks and other means to ascertain content authenticity. It also compels creators of advanced models to vet them for potential assistance in bioweapon development, alongside mandating agencies to assess risks associated with AI in the realm of chemical, biological, radiological, and nuclear weaponry.
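No watermarking standard has been issued yet, but the basic shape of content-authenticity metadata can be illustrated: a creator or tool attaches a keyed signature at creation time, and downstream platforms verify that the content has not been altered. A toy sketch (the key name and functions are hypothetical, not any Commerce Department scheme):

```python
# Illustrative content-provenance sketch: sign content bytes at
# creation time with a keyed hash, verify them downstream.
# Real proposals (e.g. cryptographic provenance manifests) are far
# richer; this only shows the tamper-detection idea.
import hmac
import hashlib

SECRET_KEY = b"provenance-signing-key"  # hypothetical key held by the creator

def sign_content(content: bytes) -> str:
    """Attach a provenance tag: HMAC-SHA256 over the content bytes."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that content has not been altered since it was signed."""
    return hmac.compare_digest(sign_content(content), tag)

tag = sign_content(b"original image bytes")
print(verify_content(b"original image bytes", tag))  # True
print(verify_content(b"tampered image bytes", tag))  # False
```

A signature like this proves integrity and origin only if the key is trusted; robust watermarks that survive cropping or re-encoding are a much harder, still-open problem, which is why the order delegates standards work rather than prescribing a mechanism.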
The executive order also envisages the potential of AI in elevating government services, from bolstering cyber-defence to propelling renewable energy development, embodying an optimistic outlook towards leveraging AI for the greater good.
Skirting around key issues
However, the narrative doesn’t end here. The order conspicuously skirts around certain key issues, such as transparency in AI development and the ongoing debate between open-source and closed AI development paradigms. The latter has pitted industry stalwarts against each other, igniting a discourse on the potential monopolisation of AI by a handful of entities versus the democratisation of AI technology.
The dialogues outside the White House echo the dichotomy of views within, as elucidated by Arati Prabhakar, director of the Office of Science and Technology Policy. The myriad perspectives on open-source AI, ranging from it being a catalyst for innovation to a proliferator of potential risks, reflect the complex debate in which the AI community and policymakers are entangled.
As the dust settles in the wake of the executive order, the AI industry finds itself at a crucial juncture: regulated largely on its own terms, yet under a new layer of governmental oversight. Attention now shifts to the global stage, where the impending European Union AI Act may steer regulation in a new direction, potentially leaving the United States the more lenient market for AI developers.
Challenges of Regulating Generative AI
However, these efforts to regulate AI face significant challenges, as generative AI poses unique risks and complexities compared to traditional AI systems.
- Defining “harm”: Generative AI, capable of creating images, audio, and text, presents difficulties in defining and identifying harm. Unlike traditional AI, where harms are directly tied to specific predictions, the harms of generative AI are more nebulous and can accrue over time.
  - Societal-level impact: The cumulative effect of inaccuracies can lead to widespread misinformation and representational harms, particularly towards marginalised groups.
- Assessing damages: Determining the specific harms caused by generative AI and establishing appropriate penalties is a complex task. The ambiguity in assessing damages is analogous to challenges seen in privacy law.
  - Legal precedents: Privacy-law cases, such as the Boring family’s lawsuit against Google, illustrate the difficulty of quantifying damages and imposing penalties, a difficulty that carries over to generative AI.
- Policing speech: Many dangers of generative AI are tied to speech, raising issues of censorship and freedom of expression, particularly in Western democracies where speech is rigorously protected.
  - Software as speech: Treating software as a form of speech adds a further layer of complexity to regulation, requiring careful navigation of legal and ethical boundaries.
A Better Alternative: Rethinking the Law
- To effectively regulate generative AI, both legal enforcement and the way compliance is demonstrated need rethinking. Innovative regulatory approaches, such as those of the U.S. Food and Drug Administration and the UK’s Financial Conduct Authority, can provide valuable insights.
- Technological solutions: Approaches like “Constitutional AI”, in which an AI system critiques and revises outputs against a set of written principles, offer promising avenues for pre-emptively addressing potential harms.
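The critique-and-revise idea behind such approaches can be sketched as a toy loop: a draft output is checked against written principles and replaced if it violates one. In a real system the critique and revision steps would themselves be model calls; the keyword rules and function names below are purely illustrative, not Anthropic's actual implementation.

```python
# Toy sketch of a critique-and-revise filter in the spirit of
# "Constitutional AI": check a draft against written principles,
# revise it if any are violated. Keyword matching stands in for
# what would be model-based critique in a real system.

PRINCIPLES = {
    "no instructions for weapons": ["synthesize", "detonate"],
    "no personal data": ["home address"],
}

def critique(draft: str) -> list[str]:
    """Return the names of principles the draft violates."""
    lowered = draft.lower()
    return [name for name, banned in PRINCIPLES.items()
            if any(term in lowered for term in banned)]

def revise(draft: str, violations: list[str]) -> str:
    """Replace the draft with a refusal citing the violated principles."""
    return f"[Content withheld: violates {', '.join(violations)}]"

def constitutional_filter(draft: str) -> str:
    violations = critique(draft)
    return revise(draft, violations) if violations else draft

print(constitutional_filter("The capital of France is Paris."))
```

The appeal for regulators is that the principles are written in plain language and auditable, even when the enforcement mechanism is itself an AI system.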
While the challenges of regulating generative AI are significant, innovative regulatory and technological solutions provide a path forward. Balancing the need for oversight with the unique characteristics of generative AI is crucial to mitigating risks without stifling innovation. Acknowledging the potential pitfalls of techno-solutionism, a careful and considered approach is necessary to ensure that the governance of generative AI is both effective and adaptable to its evolving nature.