5 Comments
Di Wang

Have to say that EU officials are almost obsessed with regulating everything. Banking, online speech, cookies, charging ports for electronics, and now AI.

The EU is systemically set up that way, and I'm not sure anything short of abandoning it altogether can change it.

Daniel Florian

I agree that the EU's claim that regulation creates legal certainty, and that legal certainty accelerates the uptake of new technologies, is increasingly being called into question, especially in AI. While the risk-based approach makes sense in principle, the scope is still pretty wide, it overlaps with sectoral regulation, and without guidelines (which are expected only in Q2 2026) it is hard to speak of legal certainty. I would slightly disagree with your depiction of how the rules for GPAIs emerged - there was more at play than just the emergence of ChatGPT ...

Privacat

I think you were more right than wrong on this one -- and I liked the framing on incentives for EU policymakers -- that was something I hadn't considered when I complained about a similar problem in the past: https://insights.priva.cat/p/privacy-nihilism-is-pervasive-and

There's no question that the EU has set itself up for failure with respect to AI, and as I watch each member state step on itself (Ireland is now up to 15 DIFFERENT regulatory bodies for AI - https://www.williamfry.com/knowledge/ireland-establishes-comprehensive-ai-regulatory-framework-under-eu-act/ ), I am further convinced that industry's response will either be to ignore the act (and hope nobody enforces it, much as is the case with the GDPR) or to pull out of the EU entirely (as companies are starting to do more often because of the DSA/DMA).

The high-risk categories are a mess. The criteria for compliance are a mess. It's just unworkable and offers few benefits to anyone other than regulators and 'experts' selling compliance.

Antonio Ferreiro Chao

I wonder whether artificial intelligence itself, trained not only on specific historical data (regulatory items vs. open developments) but also open to creative exploration of the future, could help us strike an efficient compromise between regulation and investment in AI development. Could such a process of questioning AI be fed with charts like the one on "GDPR and Transatlantic Venture Capital", extended to capture the most sensitive pieces of the GDPR vs. specific AI ventures?

Craig Willy

"The Council will agree too; they want to do something, without fighting entrenched interests back home." Why would this be the case? I could imagine national governments wanting to support businesses or make sure new regulation is not too onerous. It doesn't take all that much to create a blocking minority.
