The AI Act would have extraterritorial reach. If enacted, enforcement would not rest solely in the hands of EU member states: third parties could compel a European government to pursue action against American developers and businesses.
Open-source developers, including those releasing open-source LLMs, would have to register their “high-risk” AI project or foundation model with the government and undergo expensive risk testing. Registration would also require disclosure of data sources, computing resources (including time spent training), performance benchmarks, and red-teaming results.
APIs, which let third parties build on an AI model without running it on their own hardware, would be essentially banned. That would cover tools such as AutoGPT and LangChain.
European small businesses, however, would be exempt from the stringent permitting review required before launch.
Third parties would be able to file lawsuits to force a national AI regulator to impose fines.
American experts said that “this is a deeply corrupt piece of legislation; the most likely effect of such a policy is to create a society where the elite have access to R&D models, and nobody else – including small entrepreneurs.”