Evaluation of the US and European AI Pact: Limitations and Implications
This is it! Or is it? Today the European Commission will announce the draft of the highly anticipated AI Code of Conduct. While this is a regulation that should have existed a decade ago, the need has become urgent only because of what generative AI has achieved since November 2022. We welcome the simplification of repetitive tasks through technology, and AI has been an important part of that progress thus far.
Preface: The AI Act, proposed by the European Commission on April 21, 2021, is a regulation intended to establish a unified framework for regulating artificial intelligence (AI) within the European Union. Its objective is to cover all sectors, excluding the military, and apply to various types of AI. Rather than granting individual rights, the AI Act focuses on regulating AI system providers and entities utilizing AI in a professional capacity. It is designed as a product regulation measure.
Introduction:
The US and European AI Pact, signed in January 2023, represents a significant collaborative effort between two major global powers to address the challenges and opportunities presented by artificial intelligence (AI). While the pact demonstrates a commendable commitment to fostering cooperation, innovation, and ethical AI development, it also carries certain limitations that warrant careful consideration. This evaluation will assess the pact’s strengths and weaknesses, with a particular focus on its limitations, and provide relevant citations to support the analysis.
- Limited Scope and Global Representation: The US and European AI Pact primarily focuses on transatlantic collaboration, leaving room for the exclusion of other significant global stakeholders in AI development and governance. As a result, this limited scope may hinder the pact’s ability to set universal standards and guidelines for AI. For instance, AI advancements made by countries outside the pact may not be subject to the same ethical considerations or regulatory frameworks, potentially creating an uneven playing field. Therefore, to achieve a more inclusive and comprehensive approach, the pact should actively seek partnerships with other regions and nations.
- Implementation Challenges and Ambiguity: Although the pact outlines broad principles and aspirations, it lacks specific details regarding implementation strategies, timelines, and enforcement mechanisms. This ambiguity could lead to difficulties in translating the pact’s goals into actionable policies and regulations. Without clear guidelines, the pact may struggle to address emerging AI technologies adequately. To overcome this limitation, the pact should establish concrete mechanisms for monitoring progress, sharing best practices, and enforcing compliance among its signatories.
- Balancing Innovation and Ethical Considerations: The US and European AI Pact aims to strike a delicate balance between fostering AI innovation and ensuring ethical considerations. While this objective is crucial, it can be challenging to achieve in practice. Rapid advancements in AI technology may outpace the development of regulations and ethical frameworks, leading to potential gaps in addressing emerging issues such as deepfakes, algorithmic bias, and autonomous weapons. To mitigate these limitations, the pact should promote ongoing collaboration between policymakers, researchers, and industry experts to anticipate and respond to evolving ethical challenges.
- Potential Regulatory Discrepancies: The US and European AI Pact, by nature, reflects the differing regulatory approaches and cultural perspectives of its signatories. This divergence may result in discrepancies and complexities when attempting to harmonize AI regulations across jurisdictions. The varying definitions of AI and divergent regulatory frameworks could impede interoperability, hinder international data sharing, and create barriers to trade and innovation. To mitigate this limitation, the pact should actively promote harmonization efforts, foster international dialogue, and encourage the development of common frameworks to ensure interoperability and facilitate global AI governance.
Concerns from BEUC
While scanning the web for details beyond appraisals of the EU-US AI Pact, I came across an emphatic call from BEUC to consider overlooked concerns. It is vital to capture the fundamental points that have been brought forth here: while the AI Pact will undergo revisions in the years to come, the European Parliament, which will reveal the draft today (14 June 2023), must attempt to get it right the first time. AI is a fast-growing, disruptive technology in both industry and society, so sidelining the views of key stakeholders would only mean a disastrous next 5–10 years. Let’s dive into a summary of the concerns in BEUC’s letter to the European Commission.
a) An EU-US AI code negotiation must not be launched before the finalisation of the AI Act.
The passage discusses several concerns regarding the launch of a voluntary code of conduct for artificial intelligence (AI) and its potential implications.
Firstly, there is a risk of conflict between the European Commission, which plays an executive role, and the Council and Parliament, which are legislative institutions. The Commission’s involvement in trilogue negotiations may compromise its impartiality when negotiating a code of conduct with third countries and industry, particularly since the problems consumers face come from the very companies involved in the negotiations.
Consumer and digital rights groups, including BEUC and others from the EU and US, have repeatedly urged the Trade and Technology Council (TTC) not to interfere with legislative processes. The announcement of the voluntary code of conduct during a TTC meeting raises concerns that these pleas were not heeded.
Secondly, it is unclear what requirements can be included in a voluntary agreement when the legal requirements for AI actors in the EU are not yet defined. There is a risk that the voluntary commitments will not align with the final legal text. The European Parliament is set to adopt its opinion on the AI Act, which includes specific rules for foundation models and generative AI systems. Additionally, the Parliament will vote on additional rights for consumers and a fundamental rights impact assessment for high-risk AI uses. These crucial elements will need to be negotiated in trilogue with the Council to reach a final agreement.
Thirdly, the initiative to launch a voluntary code of conduct not only influences EU legislative negotiations on the AI Act but also has the potential to discourage the US from regulating AI. Companies may pressure the US government and the Federal Trade Commission (FTC) to rely solely on the voluntary code of conduct, which could negatively impact consumers on both sides of the Atlantic. The EU should continue advocating for legislative solutions in collaboration with the US and at the international level to lead the way in AI regulation and consumer protection globally.
b) Instead of relying on voluntary industry commitments, strengthen enforcement
The second part of BEUC’s concerns emphasizes the importance of effectively enforcing existing EU laws, such as consumer protection, data protection, and product safety legislation, both before and after the AI Act comes into effect. The European Commission is urged to prioritize enforcement, as it plays a crucial role in addressing emerging technologies like generative AI.
In March and April 2023, BEUC called on the Consumer Protection Cooperation Network (CPC) and the Consumer Safety Network (CSN) to investigate and take necessary actions regarding the safety and consumer protection risks associated with ChatGPT and other AI chatbots. A press release was also issued, urging EU and national authorities to conduct an investigation. The European Data Protection Board (EDPB) established a task force to examine ChatGPT and generative AI in response to the temporary halt imposed by the Italian data protection authority, with other data protection authorities exploring similar paths.
The letter urges support for enforcement authorities in utilizing existing laws to ensure that AI companies provide legally compliant products that do not harm individuals, and that these companies are held accountable. Member states are encouraged to allocate adequate resources and motivation to enforcement authorities to carry out their tasks effectively.
Even after the AI Act is implemented, effective enforcement of other horizontal EU legislation for consumer and citizen protection will remain necessary. It is noted that a significant number of AI products, not categorized as high-risk or generative AI, will not be specifically regulated under the AI Act framework.
c) Participation of civil society
BEUC, the Transatlantic Consumer Dialogue (TACD), and other civil society organizations have joined this call for effective regulation of generative AI. BEUC believes that AI regulation extends well beyond the trade agenda and has significant implications for our society, values, and fundamental rights. The letter goes on to state:
“While we support the EU’s leadership in this process, we believe it should take place in a different forum than the TTC, which primarily focuses on facilitating trade. The drafting of the AI code of conduct should be transparent and actively involve civil society in a multistakeholder forum that holds democratic legitimacy.”
Conclusion
The US and European AI Pact represents a significant step toward fostering transatlantic collaboration and addressing the challenges posed by AI. However, several limitations must be addressed to ensure its effectiveness on a global scale. These limitations include its limited scope and global representation, ambiguity in implementation strategies, the need to balance innovation and ethical considerations, and potential regulatory discrepancies. By actively working to address these limitations, the pact can strengthen its impact and serve as a foundation for broader international cooperation in AI governance and development.
The concerns raised by BEUC are legitimate. There is a lot to be covered under the upcoming AI Act, and if the EU and US do not include other major stakeholders, including Big Tech firms, the Act will have to undergo major revisions before it becomes fully useful. Nonetheless, this is a major milestone in protecting societies from the potential harm that AI can cause. After all, we do not want to create technology that could eliminate humanity, like the T-800 from the Terminator films.
References
BEUC, “EU-US AI voluntary code of conduct and an ‘AI Pact’ for Europe” (5 June 2023), https://www.beuc.eu/sites/default/files/publications/BEUC-X-2023-071_EU-US_AI_voluntary_code_of_conduct_and_an_AI_Pact_%20for_Europe.pdf
“Artificial Intelligence Act”, Wikipedia, https://en.wikipedia.org/wiki/Artificial_Intelligence_Act