The EU and the not-so-simple macroeconomics of AI
My Podcast with Epoch AI
I was interviewed by Andrei Potlogea and Anson Ho of Epoch AI for a long and wide-ranging podcast on the economics of artificial intelligence. We covered the macroeconomic effects of AI, both the promise of productivity gains and the messy transition costs. Part of the discussion centered on what I called in a post here the “AI Becker problem”: as AI takes over routine tasks, it destroys the currency with which junior workers have traditionally paid for their training. The macro data is starting to confirm this: two recent papers show junior hiring in AI-exposed occupations falling off a cliff while senior employment holds steady (references below).
We also discussed Europe’s predicament. The EU AI Act and GDPR have created a regulatory environment that makes AI implementation costly and slow, precisely when Europe needs growth most. The continent faces an “R without G” risk: paying higher global interest rates driven by the AI investment boom while failing to capture the productivity gains that would make those rates sustainable. Our tricky demographics make the problem worse. We discussed where value will accrue in the AI stack, why a “smart second mover” strategy might be Europe’s best option, and whether the new geopolitical pressures from the US might finally wake European policymakers from their regulatory slumber. I’m not optimistic on that, but the conversation was fun.
We discussed a lot of recent research (at least nine papers published in 2025), all linked in the podcast notes below the transcript.
Watch the episode here on YouTube, Apple, or Spotify, or read the transcript below. I hope you enjoy it.
Transcript:
Will AI trigger explosive growth? [00:00:00]
Anson
Hi, I’m Anson, I’m a researcher at Epoch AI. Today I’m joined by my co-host Andrei, who is an assistant professor at the University of Edinburgh. And I’m also joined by Luis Garicano, who is a professor at LSE studying economics. Luis, thanks for coming on the podcast.
Luis
That’s my pleasure. It’s great to be here.
Anson
I’d like to start with explosive growth very briefly. One thing that we briefly discussed on Twitter was whether or not we’re likely to see a massive acceleration in Gross World Product growth rates. And one point that I think is somewhat underrated by economists is that if we look at the last 200 years, growth has been exponential. But if we look much longer throughout history, it seems like there has been an acceleration. So shouldn’t we think that accelerations in growth aren’t that implausible after all?
Luis
The probability that we get a very large acceleration of growth exists; I’m not going to dismiss that. You were arguing that was potentially the case with your task-based model. My view was that there were several things that are likely to make that take a long time or slow it down. So the first obstacle I was pointing out (and in R&D, for example, it’s very clear) is that you can develop as many new ideas for proteins as you want, for biotech and for solutions to biological problems. But if you don’t manage to get them approved by the FDA, you don’t have a medicine. And if you don’t get doctors to use it and you don’t get people to learn it — there are a lot of bottlenecks that slow things down.
So that was my first objection: people in Silicon Valley are only observing the very best application of the technology, which is coding. They’re extrapolating from which tasks we have and how many tasks we’re performing, and they run the risk of overestimating how easy it is for organizations and institutions to accommodate these tasks.
Andrei
Just a question. I think we’re kind of on the same page that sustained explosive growth is perhaps not that plausible. What about an explosive growth spurt, a shorter-run thing where you have five, ten years of much, much faster growth than we’ve recently experienced? Just because, you know, we start from an initial condition where AI seems to be good at exactly lots of things that humans are bad at. So you start with this high-productivity sector being initially relatively large. Could we have that?
Luis
I think so. I’m an optimist on AI in spite of our disagreement on that, unlike people like Daron Acemoglu or others whose models don’t predict large growth spurts, not just over ten years but even in the longer run. I think the good way to see it is: does it get autonomous? The key distinction is between autonomous and non-autonomous AI. As long as the AI needs your supervision because it makes lots of mistakes, then the bottleneck is the human. And the human is not improving much. I mean, yeah, the AI is helping the human do it a little bit faster, but the human is bottlenecked by their own time. I’m a better lawyer, I’m doing better at my tasks, but okay, that’s just a quantitative difference.
The moment you get the AI lawyer, the moment the AI becomes autonomous, I think there you get a jump, a discrete jump. So we could easily have a situation where we see very small steps where the AI is helping us. We’re doing a little bit better. Think of Brynjolfsson’s customer support chatbots. There the chatbot is helping the juniors be better customer support agents. They suggest answers, the junior uses them, but it’s still a junior doing it.
We know, because the paper was published in 2025 (but the experiment is from a little bit before). We know now that the chatbot is precisely one of the areas where it’s likely — and in fact, we are already seeing it in some of the data — that the humans can be removed earlier from the production function. Because at the end of the day, there is a set of questions that are relatively repeated and common, and then you can do a lot of the customer service fast, reliably, etc. And you could always have a layer like in my knowledge-hierarchies type of work, where basically you have the routine tasks done by some agents and the exceptions done by experts. That’s kind of how stuff is produced: the high-value tasks are done by the high-level consultant, and the entry-level analyst does the routine jobs. You could still have that layer of people who get big leverage, even if all of these more junior tasks get replaced, as long as you get that expert that you’re expecting.
So I would think that it could easily be that we are all thinking that nothing’s happening… and then boom! Something happens in one particular profession, something major, like the type of spurt that you’re mentioning.
Short-run macroeconomic effects [00:06:26]
Andrei
We’re all working in this kind of long-run macro way of thinking about the effects of AI, but what about the short-run macro of it? So what would we expect to happen to things like unemployment, inflation?
Luis
I think the short-run macro is the problematic one. Let’s suppose that we have two sectors, sectors A and B, and sector A basically gets produced for free. So the price of sector A is zero. The short-run effect is that you need to reallocate the labor and the capital to sector B. Now the first thing that is clear, I think we will all agree, is that welfare is going to improve. If, for example, sector A is medical services and legal services, and this is autonomous AI, then medical and legal services have a zero price.
So first, we get a huge increase in consumer surplus. Fantastic, right? All my illnesses I can diagnose myself, I can get all my legal problems solved. You know, I need to buy a house, the AI does it, you sign it, it all goes on the crypto chain. It’s all automatic and perfect. So consumer surplus goes up.
But what happens to GDP and what happens to employment? Let’s talk about the short run. Let’s say that a neurosurgeon needs to become somebody in sector B, maybe a plumber, just to make the extreme example clear to our listeners. Then you have somebody with very specific human capital, which has been completely depreciated, who was used to earning several hundred thousand dollars, and who now has to start working in a new sector where I don’t think any of this human capital is going to be very valuable. The machines, all the things that were complementary with the lawyer or the doctor, are useless. We need to depreciate them, and we need to redeploy the people.
So we have an increase in supply in sector B, and we have an increase in demand. In the short run, only the increase in demand bites, while the supply is still reassigning itself; it’s really hard to get these machines to be useful. So in the short run, I would imagine that prices in sector B are going to go up. But in the long run, I don’t know.
I wouldn’t talk about this as inflation. This is a change in relative prices in sector B. I mean, we could have deflation if all of these people are unemployed, etc. But when it’s a price shock, I am reluctant to talk about inflation. It’s really just a price shock: all of those skills and all of that capital are worth nothing, and people in this new sector have to accommodate this extra demand and this extra labor and capital. That’s how I would see this situation.
Obviously, the problem with my scenario is that the very short run completely contradicts it: the lawyers will get the bar association to say it’s illegal to sell your house without a lawyer signing, and the doctors will get the medical association —
Andrei
But I guess one of the intuitions I had and I struggled to reconcile in my head is like, you know, you have this situation where in sector A productivity has gone nuts and the price is almost zero, but wouldn’t we actually be worried that in the short run we’d have a recession, right? I mean, all these people would be worried about their jobs and would stop spending. So there’s this demand side thing happening in the short run. How do we reconcile those?
Luis
That’s why I said deflation, if you want to call that price shock that. Because in this first sector there is a lot of consumer surplus, but in terms of actual GDP: we have the price in sector A times the quantity in sector A, plus the price in sector B times the quantity in sector B. The price in sector A is zero by assumption. So that part of GDP has fallen off a cliff, and that capital and labor is unemployed. So yes, I think the short-run effect, until you get this reallocation, is a big increase in welfare. A lot of people are very happy: you are in Ghana and you don’t have access to good medical services in some rural village, and suddenly you can just get a doctor, an AI doctor. That’s great. But that increase in welfare doesn’t necessarily translate into a GDP increase. And definitely those people who have to be reassigned could end up in long-term unemployment, a lot of them, because depending on what their old skills were, many of them might find it very hard to readjust to the new world.
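[In symbols, a stylized version of the two-sector accounting Luis is describing, with prices $p$ and quantities $x$:

$$\mathrm{GDP} = p_A x_A + p_B x_B \;\longrightarrow\; p_B x_B \quad \text{as } p_A \to 0,$$

so measured GDP loses the entire sector-A term even as consumer surplus from $x_A$ explodes.]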
The decline of junior jobs [00:11:29]
Andrei
So one thing that I also wonder about is the distributional consequences of these potential shocks. And here I sense a little bit of tension, both when I read the news about what’s happening to the entry-level job market, and when I read papers worrying about deskilling. On some level, we expect AI to be bad for entry-level workers and less skilled workers, at least within skilled professions. On another level, we’re worrying about deskilling. So will AI be good for less skilled workers within skilled professions, or bad for them? How do we think about that question?
Luis
It’s a great question and one that is really being played out right now. I joked at the NBER conference on AI at Stanford a few weeks back about Brynjolfsson versus Brynjolfsson. There is a Stanford AI—
Andrei
That sounds particularly problematic.
Luis
I think we can reconcile it. There is a Stanford economist who had two really important papers. One is the one I was referring to before, which appeared in the Quarterly Journal of Economics earlier in the year, giving chatbot software assistants to customer service support agents. And indeed, he finds big increases in the productivity of the most junior ones, because basically you get into the job and you already get a tool that allows you to solve most of the problems. They actually also get trained faster; they seem to learn faster, because when you turn the tool off, they seem to have picked up stuff. So on all dimensions they provide more quality, the clients are happier, etc., and the more junior of them are helped the most. So this is one field experiment.
There’s another interesting field experiment with software developers (by Cui et al.), from August this year, that goes in the same direction. It finds a gigantic increase in productivity, maybe twenty-something percent. So it says: look, we gave these tools to developers in three companies, and we saw the software developers increase productivity a lot, particularly the junior ones. So that’s your one side. That says: okay, it’s not deskilling.
Then when we look at the aggregate data, two very recent papers, one by Erik Brynjolfsson and co-authors, find something very different already. Now, this is not in the big macro data; the Fed economists haven’t found it there. These are not big shocks of the kind we would have expected in 2022. Let me tell you the two findings. One is from early September: this paper by Lichtinger and a co-author (Seyed Mahdi Hosseini Maasoum) is called Seniority-Biased Technological Change. And basically what it finds, using something like 62 million workers (so it’s very significant), is that in the AI-exposed occupations you don’t see anything happening to senior employment (you see it growing), but you see junior employment really dropping. And the way it’s dropping is through hiring: it seems like a lot of firms are simply not hiring junior employees anymore.
The logic behind it seems clear to me if you talk to a McKinsey partner who recruits for them, which I have done, in person, on exactly this question. He was telling me things like: deep research does the job that the junior researcher could do; the PowerPoint slides you can do automatically quite well. A lot of the junior tasks can be done by the software. And so you get this replacement of juniors, whom you don’t hire anymore. And we’ll probably talk later about some work I’ve done on this, the missing training ladder.
So these junior jobs are gone, and so you’re hiring less. You’re not hiring people. That’s why I say this is subtle: this is the seniority-biased technological change. The Erik Brynjolfsson paper from August this year is the Canaries in the Coal Mine paper. Basically, it finds something similar. It finds, for workers between 22 and 25 years old (so again, let’s look narrowly; let’s be careful and compare AI-exposed versus not-AI-exposed occupations), pretty clear and pretty robust drops in aggregate data.
Now how do we reconcile this? I would reconcile it with the following two ideas.
One is this idea that I was arguing before: you get, “oh, I’m a better customer support agent” for a while… then “oops, I don’t have a job!”. Because the AI has been helping me become better until the moment the AI is sufficiently better that I am not needed anymore. That is the autonomy idea: we start with non-autonomous AI that enhances and complements our skills. So Ide and Talamas have a recent Journal of Political Economy paper where they contrast autonomous and non-autonomous AI at different levels of the skill distribution. And part of the argument is that autonomous AI is going to pin down the wage distribution: it replaces people at that point and produces an enormous supply shock at that point. Everybody below that is going to have to compete with the AI, and is going to have to earn less than what the AI charges or is worth. So the moment it becomes autonomous, things change. And that’s, I think, one way to reconcile it: autonomous versus non-autonomous.
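[A stylized version of that mechanism, in my shorthand rather than the paper’s exact formulation: if autonomous AI can perform the work of anyone up to skill level $k$ at a rental cost $c$, competition pins wages below that point,

$$w(s) \le c \quad \text{for all } s \le k,$$

while workers with $s > k$ keep earning rents from what the AI cannot yet do; as $k$ rises, the pinned region of the wage distribution grows.]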
And the other way to reconcile it is, of course, the level of the AI, which is very related to autonomy. As the AI advances, I think we’re going to see the complementarity in some of these lower end jobs become substitutability. Now, this does not necessarily yet affect the higher end jobs.
I think if you’re on the higher end, your leverage increases; the knowledge hierarchy becomes more productive. You have this superstar effect: we see these salaries for AI engineers who have been offered $100 million and things like that, like football players. When Messi is watched in the World Cup final or in the Champions League final, he’s watched by 500 million, a billion people. So being a little bit better as a player gives you a huge market size, because many people are going to want to pay a little bit more; multiply that by 500 million people, or whatever it is. That gives you superstar effects. Sherwin Rosen, who was a very important labor economist, made this point. When there is limited substitution between quality and quantity (I cannot substitute 20 players for Messi, I cannot substitute 100 players for Messi; there are 11 players and only one field, and no number of players is going to replace Messi), and when you have markets with joint consumption, where one person can reach a lot of people (we can all consume the same football game), then you get the superstar effects. And these superstar effects are affecting the top of the wage distribution. A very good AI programmer, a developer with lots of actual AIs, LLMs being deployed by him or by her, can have enormous leverage and can reach a very large market. So the extra skill they add is really very valuable.
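[Rosen’s logic in one line, a rough sketch rather than his full model: if a performer of quality $q$ earns a price premium $p(q)$ from each consumer and joint consumption lets one performance reach an audience of size $n$, then earnings are roughly

$$w(q) \approx p(q)\cdot n,$$

with both $p$ and $n$ increasing in $q$, so with $n$ in the hundreds of millions, a tiny edge in quality translates into an enormous gap in earnings.]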
So I think on the top distribution we could see this bifurcation between on the bottom getting this substitutability, on the top getting this complementarity. And I think of course as the supervisory threshold, the threshold of things that the AI can do on its own goes up, those that are actually getting the superstar gains will become smaller.
The missing training ladder [00:20:21]
Anson
So one thing I’m curious about is that if I’m an entry level worker and I want to try to figure out how I can get into this job and learn the skills I need to be valuable in this job, there’s sort of like a strange situation, right? It’s like if I get to the point where I can be valuable, you know, to become an expert, then that’s great. But there’s like a period in between where I would normally do these routine tasks, but then right now I’m not able to do them as often because the AIs are doing them for me. So how do I know when it’s worth it for a company to hire me if I’m an entry level worker?
Luis
Yes, that’s a question I’ve been thinking about with Luis Rayo, my coauthor from Kellogg. I like to think of this as an AI Becker problem. So let me tell you, Gary Becker was a famous economist who developed the theory of human capital. And he made this distinction between general and specific training by companies. And he said, look, a company can always give you specific training because they’re going to appropriate it, but are they going to give you general training? Well, general training can only be given if the company can recover it afterwards. But once you’re trained, you can just walk and get all the benefits from the training.
So he would say: how is this going to work? Well, either there’s a market failure, because we don’t get enough training in the economy, or somehow the workers pay for the training. And with Luis Rayo, we wrote an analysis that appeared in the American Economic Review. We basically say: look, the way these contracts work is that there’s a master and an apprentice, and the master is going to slow down the training so as to extract all the surplus from the apprentice, while giving out little nuggets of training. So I’m giving you just enough that you want to stay, because you want to be an expert, but not so much that I train you very fast and you walk out. That’s the solution that we proposed.
Now in that solution, the AI, as you are hinting, is going to create a problem, which is that it basically devalues the currency with which the apprentice is paying. The apprentice is basically paying not in dollars — it’s paying in menial tasks.
Suppose you’re a lawyer and you’re working for Cravath, and it really is not worth your time to spend all your time reviewing all these contracts. I mean, it’s boring as hell, but okay, you’re learning something and you’re receiving training, but it’s basically menial work. Or suppose you’re at McKinsey or an investment bank, and you’re the smartest person in your class, the smartest person in your generation. And there you are, doing silly spreadsheets that many other people could do. But that menial task is the way you pay for getting this training.
Now, if the AI can do the basic research at McKinsey, can do the contract review at Cravath or whatever law firm this is, and can do the basic accounting at an accounting firm or basic programming, then how do you pay for your training? So our argument is that the AI devalues the currency with which you pay, and as a result makes the firm, or the expert, reluctant to take the worker in the first place. [In the past they could say,] “okay, this worker is going to be a pain and so on, but you know, I’m going to get paid for training them through their work.” But now it’s so cheap to do with an AI that the worker’s contribution is devalued.
So in the paper we build a very simple model in which these exchanges are happening, and we show that there are two basic things going on; the ratio between those two is what is crucial. One is the substitution aspect of the AI, which is devaluing the currency with which the worker is paying: as the AI gets better, the worker has less to add to the production function of the partner, the more expert person. But at the same time, the fully trained worker is worth more. So that means that the traineeship is still worth something.
So the basic result that we have is that there is a key ratio: how much the AI complements the expert (how much the value of a fully trained expert with AI has gone up) relative to how much the AI replaces the untrained person. If the value of the expert with AI is going up a lot, then even though the untrained person is not worth a lot, you can extract so much from that value that the contract still exists. So basically that ratio determines whether you are going to want to employ that worker and to train them.
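[A minimal sketch of that trade-off, in my notation rather than the paper’s: let $E(a)$ be the value of a fully trained expert working with AI of quality $a$, rising in $a$ (the complementarity effect), and $m(a)$ the value of the menial work an untrained apprentice adds on top of the AI, falling in $a$ (the substitution effect). If the master can extract a share $\beta$ of the trained expert’s value, training is offered roughly when

$$\beta\, E(a) + m(a) \;\ge\; c,$$

where $c$ is the cost of carrying the apprentice. AI raises $E(a)$ and erodes $m(a)$ at the same time, so the ratio between those two movements decides whether the training ladder survives.]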
In the absence of that, the training ladder disappears, and we have a big societal market failure. Imagine: all of this tacit knowledge, a lot of this training that happens on the job, is not in any manual, right? If it were in a manual, it would be taught in law school. It’s about how you deal with the client. It’s about how you are really precise with the contract. It’s hundreds of things that are hard to describe. Tacit knowledge is the idea that there is a lot that we know but can’t describe. And if the worker is not acquiring this tacit knowledge, because this direct transfer of knowledge from the master, who is the one who has it, is not taking place, then the economy has a problem in the longer run.
To the extent that the AI is not perfect, and we don’t have those experts who can supervise the AI in ten or fifteen years, then we have a hole in our growth model. Growth depends on human capital, and suddenly this pipeline of intermediate people acquiring skills has disappeared. And that’s actually a potentially big consequence of AI: a problem that AI could eliminate those lower rungs on the ladder.
And as I was arguing before with the Canaries in the Coal Mine and the Seniority-Biased Technological Change papers, there is a lot of anecdotal evidence from these companies that these very junior employees are not really being hired. But in these two papers, from August and from September, there starts to be systematic evidence that this could be happening.
Anson
What do we know about the value of this ratio? Like, do we have any empirical evidence?
Luis
No, it’s a theory paper, and we are suggesting that people should look into this empirically; we are inviting people to analyze it. I think we’re seeing both effects. We are seeing senior people really complemented and more productive. Look at the $100 million checks that we were referring to: senior AI experts, AI engineers, are getting big paychecks, which would be unimaginable without AI. So they’re being complemented. I think that in our own jobs we can already see that productivity is increasing with AI. We are also seeing substitution. So the question is how big that ratio is in different professions. And the larger the ratio, the more the training ladders will remain.
Anson
One thing I’m a little worried about with trying to estimate this is that, you know, if we had tried to do this exercise of estimating the ratio three years ago, the models were so different and so much worse that the ratio might have been pretty different. And I’m worried that if we try to do it today, three years in the future, it’s going to be also similarly irrelevant.
Luis
I think you’re right. But this is true for all of AI, right? It’s also true for all the micro models that are trying to estimate how compute translates into advances. We have some general patterns and some general scaling laws, but we don’t really know how much we can extrapolate. We are in a period of massive technological change. The good news is that it’s massive; the bad news is that we have to peek into the future in the dark, with just a little bit of light. You guys are trying to help people see further into the future, and we are all trying to use the best tools that we have. But the truth of the matter is, if this is as revolutionary as we expect, the future could give us big surprises. Yes, I do agree with that.
Anson
How much does this model depend on the tasks that are hard for humans also being the tasks that are hard for the AIs, as opposed to some kind of different skill distribution for the AIs, which seems to be the case? It’s kind of like Moravec’s paradox in AI: the things that are easy for humans are hard for the AIs.
Luis
So I think the paradox is a huge discovery for all of us. We discover it every day, right? Things that we find impossible to do, the computer does perfectly, and then we end up spending time fixing some stupid mistake the AI is unable to fix. So it goes the opposite way in some sense, as you’re suggesting. We are indeed studying a situation where the AI is little by little replacing things that the lower skilled worker can do. I think the reason why it makes sense in this context is that the AI makes mistakes. And I like to refer to this cutoff as the supervision threshold: you need to be smarter than the AI in order to be able to correct the AI.
Think of a kid who is going to school now, and they can use ChatGPT. They can make ChatGPT write the essay much better than they could, so they just have ChatGPT do the essay and hand it in. They can’t see where the mistakes are, or the things that are actually not perfect, so they are never going to arrive at the supervision threshold. They’re never going to arrive at the point where they are able to read the essay and see the mistakes, because they basically spent all their years like this. And you have a young kid — my kids are already out of this, but you have a young kid, and this is going to be an issue, right?
I have a friend who’s a high school teacher of English, and he tells me, you know, how do I make these kids want to write and read? They read Hamlet quickly in the morning with the LLM. They take the key questions that have to be answered in the class, and they kind of BS their way through their answer and they don’t read anything. So I think that the reason that we are thinking of this is we are in a context where in a law firm, in a consulting firm, etc., as you are acquiring this seniority, you are acquiring the ability to add value and be above what the AI can do.
To the extent that it’s the opposite, that AI is doing all the difficult tasks and anybody can do the correction, this would be a different world. Indeed, I think companies will have to think of training in different ways. Maybe they hire fewer workers, but the ones they have, they train by going over the AI output and reviewing it. So there is actually a way that you’re still improving, but you’re not going through all these routine tasks that, at the end of the day, don’t have any value anymore.
Andrei
So in response to this AI Becker problem, could there be more equity-type arrangements involving human capital, where firms have some sort of exposure to the human capital they help create?
Luis
I mean, the human capital is inherently with the person, and you have a big moral hazard problem, right? Once somebody has invested in you, you are going to decide how much you work. You could decide not to work, because you are not getting the upside; the company is getting the upside. So it has been historically very hard to find market solutions to this.
It’s similar with loans. I mean, there are loans for MBAs, for certain high-end things. But with loans, again, it’s hard to see how you secure the loan against the human capital. You cannot secure it with the human being, because slavery is forbidden! You cannot pledge yourself as collateral. So human capital transactions: I don’t say they are impossible, because they exist, but often these loans are government programs. In the US there are a lot of government guarantees; in the UK, government guarantees. I think that equity has proven really hard.
Maybe with football players, right? With football players you get the upside: you train a football player, you sell him to another team, etc. That has an equity-like arrangement, but it’s the only context where the firm that trained the football player (I don’t know if it happens in US professional sports) is able to get a fee, a transfer fee, for having trained that person. But it’s a very unusual context, I would say. Equity is hard. Debt is more promising, but even debt is tricky, because of moral hazard and repossession and all that.
Andrei
Going back to the bigger picture a bit on AI and training. Do we have a sense at the moment of whether AI is making the training of humans (of humans, I should note) easier or harder? Because on some level, you were mentioning that there are all these AI-powered learning tools that you could tailor to the student, assuming regulation allows you, and that could be helpful. But on the other hand, you know, I’m an instructor myself and I can’t get my students to read anything. I can get them to read the AI summary of the AI summary of something. That seems bad. Is there any evidence you know of?
Luis
I haven’t seen evidence. I think we are all observing exactly what you’re observing: students have been using AI for cheating. Let me tell you what I do with AI. My view on education is summarized by what I do with AI in my two classes. I teach the microeconomics class in the first year of a master’s program, and my view is that if you want to be thinking in the future, you need some basic models, some basic facts, and some basic tools, and that is not going to change. Otherwise you cannot think, right? We are trying to triangulate: is $400 billion big or small? Is that a big valuation or not? You need to have something in your brain to work with.
So at a basic level, I want them to use the old blue books and write the problem sets, and the exam is going to be written. I tell the students: these are the basics you need in order to operate in life. So there I think AI is our enemy, because with AI I can do the problem set automatically. Why would I go through the problem set? And then you get to June, you have the exam, and you’re like, “oh, what is this exam about?”. So there it is an enemy. But there are tools, too, and I try to tell the students: you can ask for help. You can ask Claude to explain. If you don’t understand something, you do it two ways, you do it three ways, until you learn it. Okay.
On the other side, let me tell you what I do with my second-year class. My second-year class could be called “what I learned in politics that I didn’t know before as an economist”. So I start from the policy. What is the policy that you’re looking at? One group is looking at Tegucigalpa: they have a huge water problem; the water runs out all the time and there are only a few hours of water a week. Okay, that’s the economics, the economic policy. But now you want to look at the politics. What is the political economy? Who are the interest groups? Who is in favor? Who is against? You then want to talk about the narratives. How do you discuss this in public? How do you give a speech? What is the message that you give? What do you want people to hear? People don’t hear what you say; they hear something else. What are the preconceptions?
And then you want to talk about implementation: how are you going to implement your solution? Well, in this class I tell students AI use is obligatory. They need to use it for all of these things: the analysis, the politics, the narrative. They need to build models. They need to understand the data. They need to actually figure out stuff that three years ago would have been unthinkable, that they couldn’t have done. So my view on how AI is working in education is that we need to make sure they are learning the basics, and that is going to be a struggle; I agree with you. But at the same time, we need to be able to get our students to do enormously more than they could have done before.
So if you’re teaching a macro international class like you do, the students can actually do a trade model of the Ukraine sanctions; they could actually change the substitution elasticities. I mean, they can do things that before would have required an amount of computing and programming that was the task of a PhD! So I think the way that training works has to radically change: using the AI tools to learn, and using the AI tools to get much further. But at some basic level, we need to be able to persuade the students — that’s the difficulty — that they need to learn the basics. Maybe your papers will be written by an AI, but if you don’t learn to write, you’re not going to learn to think. I know that argument is difficult to make, but if I had a child, a seven-year-old like you have, I would try to hammer home that argument somehow.
Maybe for this part, what we are going to have to do is homework in the classroom, right?
Andrei
Just notebooks and adults in rooms.
Luis
Maybe we have two hours, you know, in the library of the school from 2 to 4, which is homework time: no phones, no computers, and you guys have to do homework for this basics part. And then we also need to use the AI. I mean, I believe in both. I don’t think it’s either/or.
Europe’s AI regulation problem [00:39:31]
Andrei
So this is fascinating. Should this make us a little bit pessimistic? My sense is that there was this more optimistic line of thinking that I would associate with Daron Acemoglu, which is that we have options: there is directed technical change, we can choose to develop technologies that remain complements to human labor, and then we won’t have so many problems. Whereas here it sounds like something almost inherent is happening, where as the AI gets more advanced, it becomes a substitute. So we don’t have a choice: we either accept advanced AI and accept substitution, or we don’t accept advanced AI. “Advanced AI with no substitution” might not be on the menu.
Luis
I think that’s my view. Indeed. I think Daron Acemoglu has excessive optimism about two aspects of this. One is how much can we control this runaway train? We are in a game theoretical situation between China and the US. And I mean, there is a strategic interaction between them. If the US decides not to develop, then China is going to develop anyway. So I don’t think the possibility of slowing things down exists.
Second, and actually I said two parts, but I want to make it three. So one is the interaction [between the US and China]. The second is the “we can direct technology” argument: who is “we”? Is “we” China? Is it the US? Is it firms? Is it workers? Is it lawyers? Is it truck drivers? Who is “we”?
All of those people have very different interests. Is it the people in the AI industry, which is now generating a big part of the growth in the US? Is the US not going to want to have this growth? So the “we” is always hidden away a little bit. Though, as somebody who is super sophisticated about political economy, he knows better than me; he’s written a whole book and lots of papers about institutions and how they mediate this “we”.
The other thing is that I think the risk of trying to interfere is many unintended consequences. So I want to tell you about Europe, because that’s what I know well. Apart from being an economist, I spent a few years as a politician; I was in the European Parliament. And Europe has made a very Acemoglu-style effort. In fact, let me tell you about this letter that Acemoglu and Elon Musk and many others signed — the Future of…
Anson
Future of Life Institute.
Luis
The Future of Life Institute letter was February or March 2023, something along those lines. This letter actually came in the middle of the elaboration of the EU AI Act. The drafts were finished in November of 2022 (the two drafts), but then the drafts had to be reconciled, and the Act was finally passed in the spring of 2024. In between, just as they were finishing, there was ChatGPT. The introduction of ChatGPT, if you remember, was in November of 2022, and that was the moment of this existential risk panic. Everybody was like, oh, we’re going to get turned into paperclips and humans won’t exist anymore. And so they wrote this letter, and there was a moment of panic in Europe.
And the person who actually wrote the law at the Commission has given an interview to a Swiss newspaper. I wrote about it on my blog, Silicon Continent; the post is called “Why is the EU AI Act So Difficult to Kill?”. And basically he argues that the letter came at a bad moment, because Europe then decided, “this is too risky, let’s put all these guardrails all over the place.” And the consequence for Europe is that, as you were hinting, a lot of the productivity gains that we could be getting from AI are not possible to get.
So let me give you an example. The AI Act is built on four risk categories. First, there are forbidden uses, which include detecting emotions (that’s not allowed) and government-controlled surveillance and point-based social scoring by the public sector. That’s forbidden.
Second, high-risk uses, which involve things like energy infrastructure: decisions that the legislator says shouldn’t be taken by AI without a lot of steps. Those high-risk categories include education and health. Now, in education you would very much want, for example, your students in Edinburgh to take an AI quiz to help you see how they’re doing. Probably, eventually, it’s going to be possible for them to do the problem sets in a customized way, so they can jump a step, etc. But the AI Act says these things are high risk, and being high risk means that when you train a system, you have to make sure that all the data are correct, free of errors to the extent possible, and unbiased, and that you have the relevant data.
Now error-free training data doesn’t exist. The training corpus right now is the internet. Errors must be all over the place. Somehow, for some bizarre reason that I don’t know if anybody understands, after all of this is aggregated, all the errors get washed out like in a law of large numbers kind of effect. So it kind of works. But the training data has to be unbiased and free of errors. Now you need to keep detailed logs on these high risk applications. You need to keep your records. You need to keep documentation of everything of the system for ten years. You need to prove accuracy and security. You need a conformity assessment and you need to register with the EU authorities.
Now, there are 55 EU authorities that will do that, and these authorities are supposed to have personnel who are highly qualified in AI, highly qualified in data protection, etc. So if you’re an entrepreneur starting your little education startup, you have to do all this, plus the GDPR! The General Data Protection Regulation: businesses know it because of the cookies. It’s a pain, right? I mean, you know, in economics people often disagree about things. I can tell you there have been 15 papers on the GDPR, and all of them find less venture capital investment, fewer startups, higher compliance costs. Every single one tells you the GDPR has been bad for EU business, and now we’re adding the EU AI Act on top for startups.
So part of the risk is that you try to control the technology and you end up without the technology, which is the world Europe risks finding itself in. We don’t have foundation models. We have great researchers, we have a huge savings pool, we have the ideas. But for many reasons, which we could go into if you guys care to, businesses don’t scale in Europe. I think there are something like two foundation models in Europe compared to 15 in the US; the numbers are really very disproportionate. And we have very little AI implementation. So that’s a problem.
Anson
I’m curious what you would say to a person who says, “oh no, but I actually think that these risks are really serious.” Even if they don’t think we’ll immediately turn all humans into paperclips, they think, “actually, if you have a ton of AI systems deployed throughout the economy, and they’re not optimizing for the things that humans care most about, then you could slowly, gradually shift things off the rails.” And so maybe they would say, “well, the Act’s most serious tier, the systemic risk category for general-purpose AI systems: these are the models trained with over 10 to the 25 FLOP, maybe with a bunch of other requirements. Surely this applies only to the most capital-intensive or most capital-rich companies. So for most other people this particular thing doesn’t matter so much; it’s just this particular group of actors that need to be subject to additional scrutiny.” What would you say to that?
Luis
The systemic risk category is maybe a different story. I was talking about systems more broadly. There is a systemic risk category indeed. As you’re pointing out, I think it’s 10 to the 24 FLOP, but it’s… well it’s 24 or 25.
Anson
I think it’s 25. It’s based on GPT-4.
Luis
So indeed GPT-4 is above. Llama is above. So we have already—
Andrei
The previous generation systems are already above.
Luis
The previous generation is already above. And yes, they have to be subject to adversarial tests; you have to prove and make sure that they are safe. To me, these kinds of existential-risk issues in these very large systems do deserve additional scrutiny.
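[For scale, using the standard $C \approx 6ND$ training-compute approximation (parameters $N$, training tokens $D$) and Meta’s reported figures for Llama 3.1 405B, $N \approx 4.05\times10^{11}$ and $D \approx 1.5\times10^{13}$:

$$C \approx 6 \times 4.05\times10^{11} \times 1.5\times10^{13} \approx 3.6\times10^{25}\ \text{FLOP} > 10^{25},$$

comfortably above the Act’s threshold. Epoch AI’s estimate for GPT-4’s training compute is around $2\times10^{25}$ FLOP.]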
Andrei
So in that context, are you more or less optimistic about the EU when you think about AI? Do you become more or less optimistic about the European Union?
Luis
I am desperately worried. Think of the effects we were discussing before: big productivity growth, big welfare gains for many citizens who could get AI to do (for example) their driving, so they’re not going to get killed in a car crash; who could do their contracts; who could get legal advice and negotiate smarter with their landlord. Many, many of these things are potentially not going to happen, because they will not be allowed. I think productivity growth will suffer. Growth will suffer. Welfare gains will not happen. And Europe has a demographic problem and a high-debt problem, so Europe needs growth more than many other places to pay its bills.
Europe is in a very tricky situation. Look at what’s happening in France: on the one hand it’s not growing, and on the other hand it has this big debt, both explicit debt and the implicit debt of the pension liabilities. So it desperately needs growth. And I fear that the European Union has overregulated itself and is not going to get that growth.
Anson
I’m curious how much of what you say also applies to the UK, because, for example, the UK does have a frontier lab, which is Google DeepMind. So how much of what you say also applies to the UK?
Luis
Let me just start by saying that I don’t like Brexit and I don’t think Brexit was a good idea. Brexit was bad for the UK, but it was also bad for Europe, because the UK was the force pushing Europe in a more free-market, open-minded direction. The UK was the motor of the single market project, which was about making sure that Europe had an integrated market. And so once the UK left, we had this divergence.
The UK has a very pro-AI posture. It hasn’t diverged in some other areas (environmental rules, etc., where it’s still applying the EU rules), but on AI it’s taking a more positive posture. I’m a professor here, at the London School of Economics, because I actually believe in the UK. I do think that the UK has a very bright future; it’s just that the governments are not making decisions at the speed necessary to profit from it. If you think of AI: you have capital, you have nuclei of talent, Oxford and Cambridge, which are at the cutting edge, you have DeepMind, you have all of these other labs around it. I think the UK could be Silicon Valley. I don’t see why that would be impossible. Maybe the risk-taking mentality is the one thing that is missing; it’s not quite there.
Who captures AI value? [00:52:46]
Andrei
So thinking a bit about the AI value chain, you were saying that there was this infrastructure layer, this lab layer and this implementation layer. How do you think about where the value will go? How will the value be distributed across those layers, and how do the prospects of different parts of the world depend on which layer gets the value?
Luis
That’s a great question. I’ve been arguing that Europe could try to capture the value of AI. We’re not going to get it from the lower layers; we could get it from implementation. So the idea for Europe could be: if we manage to keep the other layers competitive and interoperable, then we could capture value at the implementation layer. Let me go through it layer by layer.
On the hardware layer, it’s clear that China and the US are capturing the value. So if the hardware layer is where the value is, that’s clearly going to benefit the US. And it looks like the learning curves are very steep. Look at Intel: it had a competitive advantage in PC hardware for four or five decades, until just this last generation, when it got hammered. But it had decades. The learning curves are very steep. You need to keep the fabs clean, you need to print carefully, you need to design very complicated things. It’s very hard to enter; I don’t think there is really an entry possibility. So a lot of the value capture will happen in the hardware layer. I think the evidence is pretty strong that, regardless of what happens upstream, profits will go there.
For cloud computing, I think there could be big switching costs in moving your data from one cloud to another. Europe is really trying to avoid that; it’s trying to make sure that the data is yours and you can move it. But the cloud players can add features to make sure that you want to stay, so that if you move you lose some value. So there could be quite a bit of switching costs. I think we need to make sure that in cloud computing the data is encrypted and remains on servers that are located geographically here in Europe, so that not all the value goes back to the US. But both geopolitically and economically, the risk is clear that on the cloud layer too, the US will have extraterritorial reach, because those are American companies.
On the LLM layer, the foundation model layer, it seems to me that what we are observing is very strong competition; it’s very hard to obtain a competitive advantage. What we see is that all the time one company gets some feature, we all love it for three months, and then we suddenly start trying another one because it just got a slightly better feature. I am basically switching between Gemini, Claude and OpenAI. It seems very hard to get an advantage.
Also, there is a big open-architecture possibility. Llama was trying; Mistral is building on that. All those weights are out in the open. So we’re going to have at least some applications that are more energy efficient or smaller; they can run on older systems, and they can enjoy the fact that these are open architectures. So I would think that that layer remains quite competitive, with one caveat, which is the introduction of switching costs through memory: if the system starts to remember you and starts to know who you are, then switching systems is going to be costly.
I think that with all of that data, we should do our best to make sure that the data is yours and that it’s very easy to port. I think portability is crucial. Think of the example of social media. In social media there’s no portability. The data, my graph, everything about me, belongs to—
Andrei
Meta.
Luis
Or Meta, or Twitter. I will never say X; this is my one principled objection to Twitter. And if you are in disagreement with them, you start again from zero. Okay, I have, what, a hundred-and-some thousand followers on Twitter. If I want to abandon them and start somewhere else, that’s my problem. But if not, then I stay there. So imagine a world where I send a message and everybody who likes me can follow me from any platform; it’s completely interoperable. Market power would change radically, right?
And I think the regulation you were talking about, optimal regulation, would do its best to make sure that interoperability exists, that we don’t fall into the same trap we fell into with social media on some of those verticals, so that Europe can appropriate quite a bit of the value. Now, how do you do that and avoid extraction by all those upstream players that we have been talking about, from hardware to infrastructure to the LLMs? Well, we have to move fast, which we’re not doing, and we have to do our best to keep those markets competitive, through interoperability and all these other demands: that the data can be moved, that the clouds are not proprietary, etc. I think it’s possible, but it’s tricky, because the truth of the matter is, if you don’t have the hardware, everything else flows downstream.
Andrei
So one of the key points here seems to be that you think the EU should be using the leverage it has to move as much of the value as possible to the implementation layer, because that’s the layer where Europe is strong.
Luis
Yes. I’ve been pushing a “smart second mover” strategy for Europe, a strategy in which Europe basically free-rides on this gigantic investment boom in model development and data centers that is already taking place. We take it as given. We’re not going to try to imitate, because we’re too far behind. Let’s use all our scarce resources on securing autonomy, encrypting the data, having the data centers locally based, but mainly on developing a strong implementation layer.
Andrei
And in that context, would you worry that Europe would have some of the same problems it has had with regulating the tech giants in the past? Because I’m guessing this becomes a geopolitical game pretty quickly.
Luis
It does become a geopolitical game quickly; that’s the problem. The US government is really throwing its weight behind these big giants, and it’s going to be very hard for Europe to insist on level playing fields, interoperability, etc. We are seeing it now. There was a digital tax; there was the OECD Pillar Two, under which we were going to harmonize aspects of corporate taxation. Trump said that’s off the table. It’s going to be very difficult to do certain things that rely on mutual acceptance. The US is going to throw its power around, and we’re going to have to basically swallow it.
And consider the Turnberry “agreement” between Donald Trump and Ursula von der Leyen this summer. That was a trade dispute, and in every trade dispute until now, the way it worked was: okay, you put on tariffs, I reply with the same here. Instead, Europe comes out of the room saying, “huge victory. He’s putting all these tariffs on and we’re not doing anything.”
“Sorry. What’s the victory?”
“They’re not going to do any other things.”
“Where does it say that Donald Trump is not going to do any other thing?”
“No, no. They’ve promised not to do any other thing with our cars.”
And of course, there’s no promise. So we accept the tariffs, we don’t do anything in retaliation, and on top of that, we didn’t really get any commitment from the US not to take further actions. The truth of the matter is that, geopolitically, we are very dependent. And the Ukraine war, which would take us in other directions, is part of the reason: we need the US defensive umbrella, and we are going to struggle a lot to get that defensive umbrella to continue.
Anson
Given that, I’m curious how you think about economic security, because I think a lot of the reason for this smart second mover strategy is that it’s a lot harder to build out huge amounts of energy infrastructure and data centers. But very common in these discussions about data centers is the idea of sovereign compute: if this is so important to the economy, then we want to make sure that we have our own data centers in the EU, and if people need to use AI, then we need the data centers to be there. How do you think about that?
Luis
I don’t think public investment in this is going to be the big solution. The EU has two sets of programs: one is the AI factories, and then the gigafactories. The gigafactories are five big AI data centers. But compare that to the level of investment now being put into these one-gigawatt-plus centers, which are extremely costly. We are going to have one of those in Portugal, and it’s private sector investment. So that will be one data center that is local.
We’re going to have more local infrastructure. Portugal, Spain, and the northern countries, because of energy issues, are getting some big data center investments. In Spain, by the Ebro (the big river that runs through the north below the Pyrenees, taking all the Pyrenees’ water; I don’t know what the Ebro is called in English), there are going to be two big investments. So we will have kind of “sovereign data centers”. But these are not truly sovereign, because the ones in Spain are basically the big US clouds: Azure and Amazon Web Services investments. Still, if they are local, we get some control.
I mean, ideally, eventually there will be some local European companies doing this. But I don’t think public investment is the solution, because the numbers we’re talking about are hundreds of billions of build-out per year, up to a trillion by 2030. These are really very large numbers, and public investment is not at that level. Each of these companies (Amazon, Microsoft, Apple, etc.) is spending more on R&D than any government in Europe; just one company. So it’s not going to be possible to keep up through public investment. The private sector has to want to do it, and for the private sector to want to do it, regulation is crucial, both in terms of permitting and in terms of all these regulatory obstacles that we seem to be throwing all over the place.
Andrei
So, on net, from this sort of game… I think a lot of people in Europe are upset by the relatively aggressive stance the US government is taking on a number of these issues. On net, is this good or bad for Europe? On one level, aggressive US government action means we are less likely to be able to move value to the layer that is convenient for Europe. But maybe this sort of aggressive action also makes it less likely that we will be too risk averse, right? Because our instinct is to stop a lot of things that the tech giants don’t want us to stop, and that the US government might not want to stop. So will the US government save us from ourselves?
Luis
So that’s — I mean, that consequence would be welcome, or at least to some extent welcome. We’ve had a year of wake-up calls. Wake-up call number one: Trump gets elected. Wake-up call number two: the sofa scene where Vance and Trump ambushed Zelensky, the Ukrainian president. Every time it’s “this is a wake-up call for Europe. We cannot trust our old ally, the US. We need to act together.” And then we go back to sleep. We keep having these wake-up calls, and every time we go back to sleep; they don’t seem to be waking us up at all.
To some extent, what has happened this year should have unleashed a wave of “we’re going to invest in AI and destroy all this legislation”. One post I wrote on the Silicon Continent blog, which I mentioned before, asked exactly that: why is it so difficult to undo this thing? Europe doesn’t have an easy error-correction mechanism. The same European Commission that produced the explosion of legislation, the Green Deal and the digital legislation, over the five years between 2019 and 2024, under the same president, Ursula von der Leyen, is now tasked with undoing it. “Oh, we went too far. Let’s undo it.”
Well, you know, the rapporteurs, the people who wrote the legislation in Parliament and in the Council, and the people in the Commission who pushed it: all three institutions (the governments, the Parliament and the European executive, which is the Commission) are now tasked with undoing a lot of rules that they themselves pushed, and that they celebrated as big victories when they passed them. So now we’re saying, “oh, you know what? We thought the Act was great. But now that we realize it’s going to slow us down and Trump is going to be a risk, let’s undo it.” That is very hard to make happen.
The coalition that runs Europe involves the center right, the center, the center left and the Greens. Those are basically the same parties that passed the original legislation, and they’re the same ones that now have to undo it. And there are many differences inside that coalition as to what can happen. The very first piece of legislation that should have been removed, the one on excessive corporate reporting and paperwork, was guaranteed to pass. Everybody thought it would pass, and then Parliament turned it down, because a lot of people were invested in the existence of that legislation. So I hope Trump, or the US, partly saves us from ourselves in some of these respects. But I am not very hopeful.
AI, interest rates & fiscal future [01:08:17]
Andrei
So one direction I was hoping to bring back into this discussion is the macro-finance angle. There’s been quite a bit of discussion about the potential impact of AI on things like interest rates. How do we think about that in the context of fiscal sustainability and macro-financial stability? These are hot topics in general, and in the European Union in particular. Any thoughts on that?
Luis
Yes. So I wrote a post that I titled “R Without G”, about how the European Union could get the high interest rates without the growth. Let me unpack this a little, first in general and then applied to Europe. There was a very recent paper by Auclert and co-authors, presented at the NBER this summer; we can post links to the papers I mention. The paper provides a very simple demand-and-supply framework for assets and applies it to AI.
So basically they treat the price of assets as the outcome of a demand and supply equation: when there is a lot of demand, asset prices go up. The tricky thing, which everybody in our audience has to remember, is that when asset prices go up, interest rates go down; the two move in opposite directions. They argue that over the last 40 years asset demand has greatly outstripped asset supply, so prices went up and interest rates went down. That is the long secular decline in interest rates.
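(A minimal illustration of that inversion, in my notation rather than the paper’s: for a perpetuity paying a fixed coupon $C$, price and rate are tied by

$$P = \frac{C}{r} \quad\Longleftrightarrow\quad r = \frac{C}{P},$$

so a demand-driven rise in the price $P$ is the same thing as a fall in the rate $r$.)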
They calculate that asset demand has multiplied by four: a big increase driven by slow growth and by demographic change, since people need assets for when they retire, and when they are old they need safe assets in particular. All of that has led to a very big drop in interest rates. It has been a godsend for everybody who was in debt, particularly countries that were in trouble: they could issue debt for free!
But they argue that AI is going to change this and push interest rates up. First, on the supply side: the higher productivity growth we have been discussing means asset supply will increase, because firms will have to issue equity to pay for the AI investments, for all the AI labs, for all of this. At the same time, demand might go down, because younger workers think, “well, the economy is going to grow a lot, so I don’t really need to accumulate assets.” Together that means a drop in asset prices and, as a result, an increase in interest rates. That is their argument.
So their view is that we will get a bigger G, higher growth rates, and a bigger R, but that G will rise by more than R, so no problem for fiscal sustainability. Remember that sustainability depends on R minus G: R is what you have to pay when you issue debt, and G is how fast the pie you pay it with is growing. If R rises a lot but growth does not, you have to pay more and more each period and you get squeezed. If your growth rate rises by more than what you pay on the debt, you can afford it.
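(The standard debt-dynamics identity behind this, in my notation: with $b$ the debt-to-GDP ratio and $pb$ the primary balance,

$$\Delta b \approx (r - g)\,b - pb,$$

so the debt ratio stays stable only if the primary balance covers $(r-g)\,b$.)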
So they say the growth rate probably rises a lot, and rises by more than R, so overall it is sustainable. What I worry about for Europe is that we get the bad part, paying higher rates, without the good part, the higher growth. If you put obstacles in the way of adopting AI (the taxi drivers oppose self-driving cars, the legal profession opposes AI in the law, the doctors don’t want AI) you get these human bottlenecks everywhere. Then you are not going to get the increase in growth rates, but you will still have to pay the higher global interest rates that everybody faces because of the AI revolution, the higher productivity of capital that comes with it, and the investment boom.
As a result, you could make much worse the debt sustainability problems that already plague the European welfare states. We have countries with high explicit debt, almost 120% of GDP in France, and on top of that high implicit pension debt of three or four times GDP, probably more in some countries. All of this has to be financed out of G, while paying a rising R on it. If you don’t get the G and you do get the higher R, you are going to be in big trouble in terms of sustainability.
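(To put rough numbers on it: with explicit debt of $b = 1.2$, France’s roughly 120% of GDP, every percentage point by which $r$ exceeds $g$ requires an extra primary surplus of about $0.01 \times 1.2 = 1.2\%$ of GDP per year just to keep the ratio from rising, before even counting the implicit pension liabilities.)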
So you asked before whether Trump would be somebody who would wake Europe up. This is another reason to wake up. We have a demographic problem, and this is not just continental Europe, it is also the UK. We need more growth, and we need to take a much more aggressive pro-growth stance.
Andrei
So on this overall problem, and that piece in particular, I am a little surprised that in the economics profession there seems to be, if not a consensus, a very strong majority view that AI will lead to an increase in interest rates. But couldn’t you make an equally strong case that it could lead to a decline in interest rates? I mean, precautionary saving. You know, I get this AI thing—
Luis
I’m a bit scared.
Andrei
I’m very scared. Right? And I think in the Valley, people are having this discussion about, you know, I need to do well in the next five years. Otherwise I’ll be a serf forever or something along those lines. Couldn’t people just save because they want some exposure to the companies that will own the economy? And that’s kind of a first order thing. So they really want to buy assets now because their human capital will depreciate.
Luis
So I think it is not impossible that we get that; you’re right that the world now seems very uncertain. It is also possible that inequality grows a lot, and that pushes the same way, because rich people save more and don’t consume as much. I mean, at some point, Elon Musk is not going to consume his $1 trillion package if that happens. So yes, there are a couple of forces, precautionary saving and the inequality increase, that could push rates in the other direction. I would side with the consensus of the economics profession, but you’re right that there is a question mark over it.
As a first-order approximation, the slowdown in growth over all these years led to the drop in R. And an acceleration in growth, if that is what we think is coming, will as a first-order effect increase the return on capital and lead to an increase in R.
Andrei
So empirically, those things go together as opposed to—
Luis
I would expect that. But as we said, we are peering into the unknown, and we all have to be modest and humble.
Andrei
And the other thing that surprised me a little bit was the fact that you were tying this increase in R with problematic implications for Europe in particular, and the reason why I was a bit surprised by that is that I think of Europe as a continent of creditors. We run huge net surpluses with the rest of the world. So it would seem to me, okay, we’re creditors. R will go up.
Luis
So we are exposed to these gains.
Andrei
We will get richer. So in some sense, we will get richer. Our governments will have more problems, but we’ll get richer. So as long as the government finds ways to tax—
Luis
So let me unpack that. That’s a great point. So it’s true that we are net savers. And that means that as a continent we should get some exposure to the good side of the AI. So it’s true that—
Andrei
Even if it happens in the rest of the world, right?
Luis
Enrico Letta wrote a report about how Europe is doing badly, and he came up with this expression: European savers are exporting their savings into American companies that employ the European workers and entrepreneurs who cannot make their start-ups here. And a lot of that is happening: you go to the West Coast and you see all these Europeans, Indians, all these other nationalities.
So it is true that those savings should capture some of these additional returns; we should be benefiting, we should be on the good side. Now, the distributional impact is a bit tricky, because who is going to benefit from those returns? Holland has big pension funds with big exposure to financial assets. But in places like Spain and France, the state essentially runs the whole pension system on a pay-as-you-go basis, so the overwhelming majority of the population has zero financial wealth. They have housing, which could also go up; if you are long housing, you will probably benefit from the run-up. But they are not exposed to financial assets. Only the very top 3, 4 or 5% of the population has significant exposure to these financial assets. So the distributional issues are not obvious. But you’re right that there is a positive net-saver income effect: Europeans as a whole are wealthier, as you put it, when R goes up.
Andrei
One thing you were hinting at when discussing the macrofinance question was demographics. And I must say that I worry about demographics quite a lot in the European context and in the global context as well. So should demographic change change our view of the trade off between the benefits and the risks of AI? I think a lot of people are in this mindset that, at least in the rich West, things are pretty good as they are. So we can kind of afford continuing with things as they are and be quite risk averse when we write down AI regulation and do things like that. But aren’t we actually on a burning platform? Aren’t things going to get worse unless something turns up like AI?
Luis
So you’re very right. My colleague, co-author and friend Jesús Fernández-Villaverde has been sounding the alarm that demography is turning. Total fertility rates are plummeting, not just in the developed world, as we expected, but in the developing world: Colombia, Tunisia, Turkey are seeing collapsing rates. That is very strange; as people say, these countries are going to grow old before they get rich, when normally the opposite happens: countries get rich and then they start aging. So the demographic collapse is really problematic. And it is true that we will need a care economy. In the positive scenario you were describing, where AI does many of the tasks that we think of as human, care is something AI can maybe help with, replacing some humans in those professions, which is exactly where the enormous share of the population that will be old will need help.
And I remember discussing this with Joshua Gans. I said, “oh, people are not going to want a robot to care for them.” He said, “are you kidding? I would much rather have a robot take care of my needs, cleaning or showering me or whatever it is, than a human.” And I thought, “actually, maybe that makes sense. The robot is gentle. Maybe you can do it.” So he was arguing that robots will have big value as carers, and that people may actually want them. I don’t know, we will have to see. And if not in caring, then in many other things: there is already evidence that people are doing therapy with AIs, so the range keeps expanding. The point is that we need growth, and we need the labor that we are not going to get given this lack of fertility. So yes, let’s have a more AI-positive posture. Absolutely.
Of course, it could be that AI leads people to want to have AI companions. And I don’t know if—
Andrei
That makes the fertility crisis worse!
Luis
It makes the crisis worse! But okay, that is a consumption choice, and we cannot predict how it will play out. It does seem like people like having AI friends; I think that is happening.
Andrei
So one of the things I had in mind when I asked the question was R&D. We think of R&D as being done by relatively young people, although I appreciate that’s changing. So it’s not so much AI helping with the care economy, important as that is. In semi-endogenous growth models we need population growth to get any long-run growth, and with fertility declining, that’s a problem. So at the very least you need to be able to shift more humans into R&D.
Luis
And not only humans, but AIs as well. Some of the work by Ben Jones, and the paper by Philippe Aghion, the recent Nobel Prize winner, with Ben Jones and Chad Jones, argues that to the extent that AI is just capital, it is not going to make a big difference. Where it really makes a big impact is in R&D. To the extent that AI can accelerate the production of ideas, it can really accelerate growth. That, I think, is the scenario where you would see the big growth acceleration, having taken into account all the caveats I raised about regulatory approval and so on.
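(A minimal sketch of the semi-endogenous logic Andrei invokes, in the spirit of Jones’s model and in my notation, not the papers’: if ideas accumulate as

$$\dot{A} = \lambda A^{\phi} L_A, \qquad \phi < 1,$$

then on a balanced path the growth rate of ideas is $g_A = n/(1-\phi)$, where $n$ is the growth rate of the research workforce $L_A$. As $n \to 0$, long-run growth dies out, unless AI makes the effective research input keep growing even while the human population stalls.)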
But I agree with you that in terms of generating ideas, which is really the driver of growth, if we don’t have the scientists we had better have AI generating ideas, or we need to move more people into scientific production. I am pretty optimistic about how AI will help in the production of ideas. Terence Tao, for example, was arguing that AI was helping him work with many more collaborators. Basically he says: you have always had small teams of mathematicians, because you need to trust each other; you don’t know whether one key step in the proof was done right. But now, with AI, we use Lean to check the little bits of the proofs. We decentralize it, we can check each other’s work, and somehow we get bigger teams.
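A toy illustration of the mechanism Tao describes, with hypothetical lemma names and a deliberately trivial statement: once a step is certified by Lean’s kernel, collaborators can build on it without re-reading or trusting its proof.

-- One collaborator proves a small step; Lean's kernel certifies it.
theorem step1 (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Another collaborator builds on step1 without re-verifying it by hand;
-- the rewrite goes through because step1 has already been checked.
theorem step2 (a b c : Nat) : (a + b) + c = (b + a) + c := by
  rw [step1 a b]

That is what lets proof teams scale: trust moves from the collaborators to the checker.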
Other mathematicians are saying that AI is already helping them prove propositions, or not quite yet; I don’t think there is an AI theorem, but there are some results already. Combinatorics, the protein-folding Nobel Prize: we do see some impact of AI in accelerating research, which could be crucial. Indeed, given our demographics, we need the research sector to keep producing somehow.
Anson
All right. I think that’s a great place to end. Thank you, Luis and Andrei, for coming onto the podcast.
Luis
It was a lot of fun. I appreciate it.
References
I. Silicon Continent Posts
On EU Digital Regulation:
On Macroeconomics & Interest Rates:
On AI and Labour Markets:
On Europe’s Second Mover Strategy:
II. Academic Papers & Research Mentioned
Labor, Productivity, and the “Training Ladder”:
The “Chatbot” Paper (Customer Support):
Brynjolfsson, E., Li, D., & Raymond, L. R. (2023/2025). “Generative AI at Work”. NBER Working Paper / QJE.
Software Developers Field Experiment:
Cui, A., et al. (2025). “The Effects of Generative AI on High-Skilled Work: Evidence from Three Field Experiments with Software Developers”. MIT, February 2025.
Aggregate evidence of substitution:
Brynjolfsson, E., Chandar, B., & Chen, R. (2025). “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence”. Stanford Digital Economy Lab.
Lichtinger, G., & Hosseini Maasoum, S. M. (2025). “Generative AI as Seniority-Biased Technological Change: Evidence from US Résumé and Job Posting Data”. SSRN.
Autonomous vs. Non-Autonomous AI:
Ide, E., & Talamas, E. (2025). “Artificial Intelligence in the Knowledge Economy”. Journal of Political Economy.
Superstar Effects:
Rosen, S. (1981). “The Economics of Superstars”. American Economic Review.
The AI-Becker Problem:
Apprenticeships: Garicano, L., & Rayo, L. (2017). “Relational Knowledge Transfers”. American Economic Review.
Garicano, L., & Rayo, L. (2025). “Training in the Age of AI: A Theory of Apprenticeship Viability”. CEPR; also here.
Macroeconomics of AI & Growth:
Asset Pricing and Interest Rates:
Auclert, A., Malmberg, H., Rognlie, M., & Straub, L. (2025). “The Race Between Asset Supply and Asset Demand”. SSRN.
AI and Economic Growth (R&D):
Aghion, P., Jones, B. F., & Jones, C. I. (2017/2024). “Artificial Intelligence and Economic Growth”. NBER.
Bottlenecks/O-Rings: Jones, B. (2025). “Artificial Intelligence in Research and Development”. NBER.
Acemoglu, D. (2025). “The Simple Macroeconomics of AI”. Economic Policy.
Acemoglu, D., & Johnson, S. (2023). Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. Hachette UK.
III. Reports & Open Letters
Future of Life Institute Letter:
Title: Pause Giant AI Experiments: An Open Letter.
Date: March 2023.
The Letta Report:
Title: Much More Than a Market.
Author: Enrico Letta (April 2024).
The Draghi Report:
Draghi Report on European Competitiveness (September 2024).
Visit the Epoch AI podcast hub to see all episodes, subscribe, and more.