It's quite impressive to read an article as alarmist as this and find that basically every regulation it's complaining about is actually very sensible.
Of course an AI tutor is a high-risk scenario: having an AI that children are supposed to trust is a highly risky endeavor. If you don't offer cybersecurity guarantees, for example, someone could hack it and start talking directly to those children about all sorts of very dangerous subjects. If you don't log the interactions between the children and the AI tutor, parents who suspect the tutor is malfunctioning can't review the records and verify it, even if they sue. And regulations like this ensure that businesses willing to do it well won't be easily outcompeted by people moving fast and breaking things.
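To make the logging point concrete: the kind of record-keeping parents would need is not an exotic burden. A minimal sketch in Python of an append-only interaction log (the file name, fields, and function are all hypothetical, not anything the Act prescribes):

```python
import json
import time
from pathlib import Path

LOG_PATH = Path("tutor_interactions.jsonl")  # hypothetical log location

def log_interaction(student_id: str, prompt: str, response: str) -> None:
    """Append one tutor exchange to an audit log that can be reviewed later."""
    record = {
        "ts": time.time(),         # when the exchange happened
        "student_id": student_id,  # which child the tutor was talking to
        "prompt": prompt,          # what the child said
        "response": response,      # what the model answered
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

A real deployment would also need retention policies and access controls, but the core obligation really is this cheap.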
And complaining that enforcement will be federated rather than centralized at the EU level is simply a misunderstanding of the EU. Basically no one actually living in the EU wants to put power into EU authorities; everyone wants to keep most of the power in their own countries. Staffing is a real issue, but it's also an excellent opportunity: creating AI jobs in every region of the EU, which EU funds could help pay for, is a great chance for the engineering workforce in EU countries.
Overall, I'm happy that I read this article. It left me very happy with the existence and form of the EU AI Act.
Great to hear agreement that all this article manages to show is how sensible the AI Act is. No, you can't be irresponsible with AI in Europe, we've learnt the hard way what happens when there's no regulation early on.
The more the author lays out the detail, the more I like what they're describing.
We don't want to (further) sleepwalk into a society where we're governed by algorithms we don't (or can't) understand. There is a real risk we build systems where humans end up just blindly accepting what their computers are telling them.
Computers are powerful, but ultimately people work on incentives. Even a rigorously tested system fails in the presence of misaligned incentives. Adding in AI so you can't even rigorously reason about the system further obscures the real issue of misaligned incentives.
If we get AI "wrong", then we forever bake wrong incentives into the systems and our societal fabric. Attempts to correct these will be hampered by those same systems.
All “high risk” uses listed in the article (“Systems that are used in sectors like education, employment, law enforcement, recruiting, and essential public services, Systems used in certain kinds of products, including machinery, toys, lifts, medical devices, and vehicles.”) seem to me pretty high risk and in need of regulation. If that’s what EU’s AI act is really about, I cannot blame the EU at all. Quite the contrary.
> machinery, toys, lifts, medical devices, and vehicles
All of those seem high-risk except toys.
It's telling that the author doesn't even attempt to argue why these shouldn't be considered high risk - they only argue that doing so will hinder business interests, and that apparently is convincing enough for them.
>Imagine you have a start-up and have built an AI teacher — an obvious and good AI use case. Before you may release it in the EU you must do the following:
I'm not sure I agree with what the author finds obvious...
Aye; most of the restrictions and requirements placed on it seem pretty sensible.
I've gotten into the habit of reviewing the background of people critiquing a particular policy (or advocating for one). In this case the author has a very interesting affiliation with the "Special Competitive Studies Project", which describes itself as a think tank to "make recommendations to strengthen America’s long-term competitiveness as artificial intelligence (AI) and other emerging technologies are reshaping our national security, economy, and society" [1]. This is a data point that might be helpful when engaging with the article.
[1] https://en.wikipedia.org/wiki/Special_Competitive_Studies_Pr...
This is helpful (I had assumed this guy was a straight-up VC); interesting to know he's Schmidt-aligned.
That said, this approach assumes the neutrality of the underlying legislation, but everything a government does is by its nature a political act (i.e. non-neutral).
Lots of people just going "I agree with this so it's right."
Really, what it'll lead to is one of two things. Either only the biggest players (generally American tech companies) will be able to compete, essentially killing AI startups. Or AI startups will start in another country, build out a product, and then eventually jump through the EU's hoops. Neither of these cases helps EU companies.
Another huge problem is just how general the requirements are. Things like "Build a comprehensive ‘risk management system’" and "Ensure the system is trained on data that has ‘the appropriate statistical properties’" are not explained any further in the Act, which pretty much leaves an opening to go after anyone you want, allowing for selective enforcement.
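To illustrate how underspecified "the appropriate statistical properties" is, here is one plausible guess at what a compliance check could look like, as a minimal Python sketch (the function, the notion of "representation", and the 5% threshold are all my assumptions; nothing in the Act defines them):

```python
from collections import Counter

def check_representation(labels, min_share=0.05):
    """Flag classes that fall below a minimum share of the training data.

    One guessed reading of 'appropriate statistical properties':
    no group is so under-represented that the model can't learn it.
    The 0.05 threshold is arbitrary -- which is exactly the problem.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items() if n / total < min_share}

# Toy example: one class makes up only 2% of the data and gets flagged.
print(check_representation(["a"] * 90 + ["b"] * 8 + ["c"] * 2))  # {'c': 0.02}
```

The check itself is ten lines; deciding what counts as "appropriate" is the part the Act leaves open, and that's where the selective-enforcement worry comes from.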
The EU has a tendency to over-regulate every now and then, I'd say... but in this case I almost appreciate it.
I appreciate these reasonable rules for regulating AI, versus the moral-panic regulation that companies in the US have called for, whose sole purpose is regulatory capture.
EU regulations are mostly sensible and harm-reducing, but in aggregate they have stifled innovation and growth.
Maybe it's okay that Western civilization is taking this barbell approach to risk in the US (embrace risk) vs EU (reduce risk).
For me personally, I am glad I'm on the risk-taking side.
> Once a model is designated as a general purpose model, then the firm must give an overview of all the training data that is specific enough that copyright holders may identify that their data was used, who then have the right to reserve to withdraw participation.
Oh no, accountability for one's actions - we can't let that get in the way of profits!
I wonder how it will play out with the data that was already stolen by neural-network companies a long time ago. Most likely such cases will be swept under the rug.
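For what it's worth, the forward-looking half of that requirement is mechanically simple: keep a manifest of sources and filter the corpus against an opt-out list before training. A rough sketch, assuming a plain set of opted-out source domains (all names and formats are mine, not the Act's):

```python
def filter_corpus(documents, opt_outs):
    """Drop documents whose source opted out; keep a manifest of the rest.

    `documents` is a list of dicts with a 'source_domain' key;
    `opt_outs` is a set of domains whose rights holders withdrew.
    The manifest is "specific enough" for a rights holder to see
    whether their material was used.
    """
    kept, manifest = [], set()
    for doc in documents:
        if doc["source_domain"] in opt_outs:
            continue
        kept.append(doc)
        manifest.add(doc["source_domain"])
    return kept, sorted(manifest)

# Toy example
docs = [
    {"source_domain": "example.com", "text": "..."},
    {"source_domain": "optedout.org", "text": "..."},
]
kept, manifest = filter_corpus(docs, {"optedout.org"})
print(manifest)  # ['example.com']
```

The hard part is exactly what the parent comment raises: data already ingested into models trained years ago, where no manifest was kept and nothing can be filtered retroactively.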
The article reads like a venture capitalist upset that they can't operate in an unregulated space. I have zero objection to regulating AI. To date, most AI products are simultaneously frivolous, terrible, and ecologically disastrous. That's the right kind of problem for regulation to fix.
For the record, "batshit regulations" means things like (from the article):
> Rather than let schools try and improve their quality by bringing in AI tutors, Europe preemptively says that there must be impact assessments, authorized representatives, notified bodies and monitoring.
Funny that opponents of the AI Act seem to implicitly agree that these systems won't survive "impact assessments, authorized representatives, notified bodies and monitoring". They apparently can only work without oversight or accountability.
The issue isn't that the systems can't survive those steps; it's how long they take. It's killing innovation, and it's one of the reasons Europe stays behind. You can't expect us to have an AI startup boom if every minute thing must pass through regulators and assessments and representatives (these things take months to years).
I've also had a startup related to compliance in the EU, and it's mind-boggling how poorly these regulations are actually implemented and checked. It also takes months to set up a meeting with the responsible authorities to clear up any doubts, only for them to tell you that they don't really know either.
Entrepreneurship aside, we also wait far longer for features like ChatGPT's improved voice capabilities. This is a smaller thing, but AI has had a major impact on my productivity, and I'd rather not be three months behind American developers on everything.
Just flag the yelly comments, replying to them mostly makes things worse.