Transcript: Rethinking the AI boom, with Daron Acemoğlu

This is an audio transcript of The Economics Show with Soumaya Keynes podcast episode: ‘Rethinking the AI boom, with Daron Acemoğlu’.
Soumaya Keynes
In the 2010s, I remember reading a lot of chin-stroking about whether the robots were going to take our jobs. Now that it’s the mid-2020s, I feel the better question is how the robots are going to reshape our jobs. Now, I’m using the term robots a little loosely here. I really mean technology, automation and, of course, artificial intelligence. In this week’s episode, we are going to be talking about the economics of AI. How transformative is it going to be?
[MUSIC PLAYING]
This is The Economics Show with Soumaya Keynes. I’m joined by Daron Acemoğlu, Professor of Economics at MIT and author of the book Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. Daron, hello!
Daron Acemoğlu
Thank you, Soumaya. It’s great to be here with you.
Soumaya Keynes
I am so excited that you are here for this conversation. OK, so let’s start off with a stupid question. So imagine a scale from one to 10. So one is you really don’t think AI is going to have much practical effect at all. Where 10 is you think it’s going to radically transform our lives in pretty much every dimension. So where are you on that scale?
Daron Acemoğlu
I think there are many possible futures, and it’s ours to choose. One is possible, because I think the capabilities of these AI systems are not as great as their proponents claim. Minus eight or nine is possible, because we can really misuse these systems, both in the production process and in communication, manipulating people with them, creating much more inequality and much more dominance by a few tech companies.
And I think perhaps seven or eight positive is possible if we use these in a way that is actually going to help workers, help better communication, help people control their own data and their own sort of ecosystem much better. But if you ask for one single number, which is where we’re likely to end up on the current policy trajectory and the current emphasis and market structure of the tech sector, I would say about minus six.
Soumaya Keynes
OK. So I think what you’ve done there is you’ve made an augmented spectrum where kind of minus 10 is very transformative and terrible. And 10 is very transformative and awesome.
Daron Acemoğlu
Exactly.
Soumaya Keynes
And zero is nothing’s happened.
OK, so I guess my follow-up question to that is, have you moved at all on that scale over the past, you know, two to three years? You know, over which time we’ve seen surprising advances in large language models, that sort of thing?
Daron Acemoğlu
Well, I’m very obstinate.
Soumaya Keynes
So no.
Daron Acemoğlu
Well, you know, yes. So look, I would be absolutely lying if I said that I wasn’t surprised by some of the demonstrations that others and I performed with ChatGPT when it came out, in terms of its ability to give seemingly intelligent, human-sounding, and somewhat sophisticated answers to a few queries.
So yes, I was surprised. But then I reverted back to my default position that it’s just a few-trick pony, so it can do a few things very well, which you would expect from a single architecture based on a very simple structure of generating knowledge. And no, it’s not going to be able to expand that to doing things that are much more sophisticated, for example, like many of the tasks that we need to perform in the process of production.
Soumaya Keynes
OK, well, let’s try and make that a bit more specific now, because earlier this year you had a working paper out. It was a bit of a reality check on some of the more outlandish numbers that were being thrown around. Your number was very pessimistic in terms of the effect that you thought that AI would have on productivity growth, GDP, those sorts of things. So could you just start by walking us through the process by which you got to that quite conservative number?
Daron Acemoğlu
Oh, I would say my number wasn’t pessimistic, it was realistic. So look, I think at the end, the impact of any technology, including AI, on the macroeconomy is going to depend on two things. What fraction of the things that we do in the production process, which economists call tasks, are impacted, and how much productivity gain or cost savings we’re going to get out of that impact. In the case of AI, unless there is an amazing breakthrough, which is quite unlikely within the horizon of about 10 years, AI is not going to impact things that involve a major physical component all that much, because it’s not going to get integrated with robots.
So it’s mostly solo information processing tasks that you can do in the office and in front of a computer that are going to get impacted. And, those are not a huge fraction of the tasks that the production process involves. And the way that I do that more formally is I base my approach on the numbers that other people have derived on the basis of detailed analysis of what are the capabilities of the current large language models, computer vision technology, with some allowance for how it’s going to change over time.
And I take those numbers, project them onto occupations, and then I calculate how important these occupations are in the economy. And then I take another set of numbers, based on relatively careful, randomised-control-trial-type assessments of how much productivity gain there will be from AI. There are some that are done almost like online lab experiments. And there are some that use the fact that some companies shared their data as they were adopting these chatbot-type tools, so you can see how, for example, their customer service agents improved their productivity in serving customers, et cetera.
And then on the basis of that, I calculate that about 4.6 per cent of economic activities are going to be impacted, and it’s going to lead to something like a 15 per cent cost saving. And if you combine these two numbers, you get something like just over 0.6 per cent increase in total factor productivity, the economist’s favourite measure of productivity, or if you want to translate that into GDP, about a 1 per cent increase in GDP. So that’s over 10 years, which means you’re getting something like 0.1 per cent GDP growth a year, which is, you know, I’ll take it. It’s not bad, but it’s not transformative.
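The back-of-envelope arithmetic here can be checked directly. Below is a minimal Python sketch assuming the simple multiplication described in the transcript (task share times cost savings); the working paper’s actual model is more involved than this. Multiplying the two headline numbers gives just under 0.7 per cent over the decade, consistent with roughly 0.1 per cent of GDP growth a year.

```python
# Back-of-envelope reconstruction of the arithmetic described above.
# This sketch only multiplies the two headline numbers from the
# transcript; Acemoglu's working paper uses a more detailed model.

task_share = 0.046    # fraction of economic activities impacted by AI
cost_savings = 0.15   # average cost saving on those impacted tasks

# Total factor productivity gain over roughly 10 years
tfp_gain_10yr = task_share * cost_savings
print(f"TFP gain over 10 years: {tfp_gain_10yr:.2%}")  # 0.69%

# The transcript translates this into about a 1% GDP boost over the decade
gdp_gain_10yr = 0.01
annual_gdp_gain = (1 + gdp_gain_10yr) ** (1 / 10) - 1
print(f"Implied annual GDP growth contribution: {annual_gdp_gain:.2%}")  # 0.10%
```

This also makes clear why the figure must be on the order of 0.6 to 0.7 per cent rather than anything larger: a 6 per cent TFP gain could not be consistent with only 0.1 per cent GDP growth per year.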
Soumaya Keynes
So others have found much bigger numbers, right? I mean, there’s the likes of Goldman Sachs, McKinsey. Why do you think they’re more optimistic?
Daron Acemoğlu
So there are three ways you can get much bigger numbers. One is you conjecture or estimate that a much bigger fraction of tasks are going to be impacted. You can feed in much bigger productivity gains or cost savings. Or third, you can say that this way of approaching it misses other major things, for example, suddenly the whole process of scientific discovery, new material, new goods, new services, is going to be revolutionised, so we’re going to get systemic effects everywhere, because everything we do is going to become more productive.
So there are people who claim that last one. Even the McKinsey Global Institute or Goldman Sachs numbers are not the most extreme ones, because there are people who think we’re going to reach singularity within 10 years, reaching boundless output. We can have many, many more new ideas, more than exponential growth, or AI could reach a stage where it can consistently, persistently, and sustainably self-improve, so it doesn’t need us. Or, you know, in some scenarios, it becomes completely integrated into our brains, expanding our capabilities. Science fiction is great, but it also leads to some weird ideas. So Goldman Sachs, for example, and the IMF use numbers that imply a significantly larger fraction of the economy is going to be impacted by AI.
Soumaya Keynes
Yeah, so I was looking into this with Goldman Sachs. I’m not actually sure that the inconsistency is as great as it might seem at first, because essentially in your analysis, you’re looking at what might happen over the next 10 years. Goldman Sachs’ number describes what could happen over a 10-year period, but that 10 years doesn’t necessarily start now, right? So they’re saying, look, when this thing hits, it could have a really big effect. I think they’ve got a 15 per cent increase in productivity. But they are saying that that decade could start after 2030, right? We’re not there yet. So in a sense, their optimism could be consistent with your, you know, you’d call it realism, for what’s going to happen over the next 10 years.
Daron Acemoğlu
That’s one thing. Another one is that I think Goldman Sachs is a very sophisticated, very large, and very heterogeneous organisation. There are people within the Goldman Sachs ecosystem who are as pessimistic or as realistic as I am, and there are others who are more optimistic. So there’s that complication as well.
Soumaya Keynes
What do you make of the historical examples that Goldman Sachs draws out? Because they’re essentially looking at history and saying, well, look, there was the electric motor, there was personal computing, and over the 10 years where they were adopted, you did actually get these fairly dramatic productivity gains. Why do you think that AI would be so different to those two?
Daron Acemoğlu
That’s an excellent question, but there are also many differences. First of all, in all of these cases, you have to be very careful: when they are really adopted becomes relevant. The first 20 years of the computer revolution didn’t bring much of a productivity gain. It was a very, very slow process. The same is true of electric machinery, which started being adopted en masse only about 15 or 20 years after the first prototypes were being used. So it’s a complicated thing. Second, not every technology that people claim to be general purpose really is general purpose. What made the internet, in my mind, so special is that it really affected many different sectors and many different services while also offering the possibility to do many new things.
I don’t think AI will measure up to the internet. AI has some great capabilities, but it does not yet have the same breadth of impacting pretty much everything we do and creating lots of new things. It might, but when it does, perhaps we’ll call that a new technology, and perhaps that will be another 10 years, and so on.
Soumaya Keynes
Can I ask about whether there are any use cases that you think are more exciting? I mean, there was an article recently in the FT about coding, right? So the use of AI to make coders more productive, is that an area where they’re . . .
Daron Acemoğlu
Wonderful. Yes, that’s one case which is already happening, and AI is already improving productivity by more than my median estimate, because that’s just one task that is very apt for AI. Essentially, not all of coding, though. In fact, there are some very holistic, judgment-based parts of coding, especially the ones that require different processes to come together: more sort of what’s the objective, how you’re going to achieve it, et cetera. But there are some routine parts of coding that can be very time-consuming, and those are very easy for AI, because essentially you take something from a library that’s very well curated, and you know how to modify it given some basic prompts on the objective of the coder. So that works very well.
Then there are many other areas where I think AI could have an appreciable, positive effect if it were implemented right. So, for instance, I think we can use AI to make government services much more efficient and rapidly delivered, especially in the developing world, where there is a shortage of people who can take notes, a shortage of information about how the court system works, and a shortage of information about how healthcare information can be transferred. So, if you do it in a well-targeted way, you can do it.
I think there’s a bigger, sort of project where you can use AI in the education sector the right way. For example, for helping teachers to understand which types of students are having difficulty with which type of material, and how you can change the curriculum or the pedagogical approaches in real time. That’s not something we can do right now because no AI company is investing in that.
Soumaya Keynes
Can I just push back on that? I thought there were companies, like, you know, edtech companies trying to use, say, AI-powered textbooks, to try . . .
Daron Acemoğlu
Yeah, that’s very different, though. There is a big push from companies like Khan Academy and others in the edtech space. And they have an approach which I think is not going to work. Their objective is to substitute for teachers. They want to do what teachers do more cost-effectively and, of course, monetise it.
I am a big believer that you do need the teachers. Learning is about human contact, human experience, so you want to empower teachers. So my vision, in many different areas, not just in education, is that we want more of the human, and we want AI to enable more and better human contact. Whereas the tech industry’s vision is: get rid of the teachers, we’ll give you LLMs, we’ll give you automated textbooks, we’ll give you automated grading. So that’s substituting for teachers to create a direct contact between technology and the student, or sometimes between parents, technology, and students. And that has not worked so far. And I don’t think it’s going to work.
Soumaya Keynes
OK, can I take us back to history again and ask what you think the best historical parallel is to the moment that we’re currently experiencing in terms of, you know, expectations about AI and what effects it could have?
Daron Acemoğlu
Well, I think there is no perfect analogy, but there are many historical episodes from which we can learn. I think the most relevant one is the British Industrial Revolution, in part because it’s a very interesting story, and in part because it’s a story that’s often mistold. It is absolutely true that we are incredibly fortunate to be living 250 years after the Industrial Revolution, which started the process of better machinery, scientific information, and other things being applied to the production process.
But it’s also true that the first 80 years of the Industrial Revolution were horrible for working people. They brought huge inequality, little in the way of shared productivity gains, and very, very difficult living conditions and hard working conditions. And there was nothing automatic about that very long period, three generations, coming to an end, with real wages starting to increase for most workers and better outcomes in terms of health, education, and so on appearing. It was a very conflictual process, one in which fundamental institutional change, fundamental changes in the labour market, and fundamental changes in the intention and direction of technology needed to take place for that better outcome to be realised.
So I think the right reading of the Industrial Revolution is you have a disruptive technology, you can misuse it, and if you misuse it, very bad things can happen for a significant fraction of the people. And then, you really need to get your act together in terms of institutions, democracy, labour rights, and a better focus on technology for it to get better.
Soumaya Keynes
All right. Well, look, I want to ask what that means in practice specifically now, but I’m going to do that after the break.
[BEHIND THE MONEY PODCAST TRAILER PLAYING]
Soumaya Keynes
We are back from the break. So Daron, you’ve written a lot about the importance of making technological innovation work for workers, right? There are different ways of doing it, and you just gave this historical example of the Industrial Revolution, where I think you argued that the way it worked was too slow, that workers benefited too slowly. What’s the policy lesson now? What should regulators specifically be doing right now to make sure that those lessons from history have been learned?
Daron Acemoğlu
Thank you, Soumaya. You summarised it extremely well, but I would go one step further. I would say it wasn’t just slow, it was not necessarily going to take place automatically unless we made policy and institutional adjustments.
I think those policy and institutional adjustments ended up taking power away from the most powerful elements of society: the new industrialists, for example, who definitely did not want to share political power with workers, or even with the middle and lower middle classes.
So, what we need to do today, I feel, is to take the same kind of steps to take power away from tech companies. I don’t think humanity has ever seen any other corporations as powerful as the tech companies today, and they are extremely influential because they have a huge amount of soft power. They have captured both politics and media in the United States. Journalists continue to be mesmerised by them even when they write articles that are critical. They have a very coherent, uniform view that does not allow much diversity when you look at the very upper echelons of the tech companies, and they have a multitude of ways of shaping technology: communication technology, production technology, and many other aspects.
So that power is not conducive to shared prosperity. It’s not conducive to the right type of experimentation with new technologies. It’s not conducive to competition. So I think that needs to be broken.
Soumaya Keynes
So I suppose, you know, one can accept that technology companies have huge power, right? But then the next question of, well, what do you do about it? It just feels like the really tricky one, right? I mean, is this a question of just taxing the companies or is it, I mean, don’t you need to get very fine-grained in terms of the specific regulatory actions you want to take? What would you do?
Daron Acemoğlu
Well, there isn’t a silver bullet. I think you have to do a multitude of things, and I would put them in three buckets. First, you have to reduce their power very broadly. Second, you have to discourage, via taxation and regulation, the most harmful things that they do. And third, you have to encourage more productive directions for research. On reducing their power, one thing is to break up the tech companies. That sounds very radical, and I would generally not favour very radical policy action, but I would say that in this case it’s not as radical as it sounds, because part of the reason the tech companies are so big is that they have acquired a lot of their competitors. And US lawmakers and US courts have allowed that. That was a silly, misguided policy stance, and it just needs to be reversed.
So all of this is just a reversal, and it would be good both for economic power, meaning it will allow a more decentralised way for new technologies to come in, and for political power, in that it would actually reduce their political clout.
Soumaya Keynes
I guess another pushback would be that, you know, some of the advancements that have been made in AI have been enabled by huge amounts of data, huge amounts of processing power, right? So there are huge economies of scale, right? And it’s just very challenging for the smaller players to catch up. And I guess breaking up one of the big AI companies into two smaller ones could actually more than halve their output. So don’t you want a certain amount of scale there to really make sure that you’re on the cutting edge and can push that forward?
Daron Acemoğlu
So, I do believe that the beneficial effects of economies of scale have been much exaggerated. There are economies of scale, there are network economies, but they cut both ways. They are often a very powerful barrier against entry. And with the right market structure and the right way of sharing data, for example, or ensuring portability of data, some of the benefits can be exploited without the harms.
So in particular, for instance, social media networks have a huge scale advantage because, you know, they are a closed system. If there were a way of turning them into an open system, that would encourage more competition. For example, media ecosystems offering fewer digital ads or less manipulative content would have a better chance of becoming bigger players, and people could take their data and their social network from one to another. So those are potentially ways of benefiting socially from some of the network economies without creating the same entry barriers.
Soumaya Keynes
I guess it feels like there’s a bit of a disconnect. You know, one of the things that you seem most worried about is the impact of AI on social media and our information sphere. And people who are more optimistic about it just see that as quite a small share of the stuff that it’s going to influence.
Daron Acemoğlu
I don’t think social media is the only place I worry about. But I’m just using that as an example because it’s very clear to see how AI can be manipulative and to see some of the negative uses of the tech.
My interest in this area started from the production sector. My research focuses a lot on inequality and productivity effects from various types of automation technologies. So yes, my focus is very much on that as well. And that’s where I think we have an even bigger need for new business models and new players. Because as I was trying to explain in the context of the education example, for instance, we do need new technologies that are going to be more complementary, and I think if we do that, we’re going to be able to increase the quality of education and the productivity of teachers, and we’re going to have better outcomes in terms of job creation and inequality. And we’re not doing that.
And I think the way to do that is to give better market structure opportunities for new entrants, and at the same time, also give a helping hand to technologies that are socially more beneficial. I draw an analogy to, for example, the energy sector. We’re not in a great place today in terms of combating climate change, but we are in a far better place than we were 20 years ago.
I never believed that we would be able to do an energy transition or reduce carbon emissions without a realistic alternative to fossil fuels. Twenty years ago, we did not have that. Today we do. How did we get there? First, we reduced the power of the oil companies a little bit. Second, we used, you know, what economists would call corrective taxation or Pigouvian taxation. So we used a combination of regulation and carbon taxes in Europe. More regulation in the United States, especially in California. And we also generously gave subsidies to more socially beneficial technologies such as renewables.
So I think it’s exactly the same thing when it comes to production technologies. Breaking up the tech is reducing the power of Big Oil, that’s the analogy.
We need to do at least minimal corrective taxation, I would claim, which means getting rid of the fairly large subsidies that we’re giving for excessive automation in the United States and other industrialised nations. For example, our tax system subsidises automation because we tax workers and worker income much more than capital income. And we need to generously provide funding, opportunities, and research in more human-complementary technology.
Soumaya Keynes
Yeah, I guess on that last point, you know, when you hear the idea, in the abstract, that we need to push technology in the direction where it enhances workers rather than replaces them or makes their lives worse, then the next question is, OK, well, how do you do that practically, right? And, from what you’re saying, you change funding for research and development. I suppose I’m slightly sceptical that that’s going to have a first-order effect.
Daron Acemoğlu
Well, let’s go one step at a time. You know, I’ve been on this for a long time, and if there is today slightly more openness to at least consider subsidising, supporting, giving a helping hand to human-complementary technologies, I would take that as a big victory. Then comes, of course, the hard part of turning that intellectual victory into a real practical policy agenda. And I don’t have a magic bullet there. I think there are many ways in which you’re going to get things wrong, in exactly the same way that we sometimes subsidise the wrong renewables.
So some of that is going to happen, but it’s not the end of the world. If you have an AI application that’s truly automation, and you pretend that it’s really going to be human-complementary, and you get a few million dollars from the federal government, OK, fine. I’m sorry about that, but it’s not the end of the world. If some of that money really triggers new ideas and new technologies, I’ll take that. And then there are other things we can do, which are about changing the agenda. I think if you want to change the tech sector, you have to change the people in the tech sector.
I think having this conversation, and really making it a central part of the public debate that there is a technically feasible and socially beneficial different direction for technology, would have a transformative effect on the tech sector. So I think it’s a process. I definitely don’t have a magic bullet.
Soumaya Keynes
Just to finish, could you give me an example of, you know, a technology where it’s going in the wrong direction you feel, but, you know, a policy change could push it in the right direction?
Daron Acemoğlu
Yeah, well, I’ll give you two. One of them I already discussed: education. We’re going more and more in the direction of sidelining teachers, and we could do much better by providing better tools to teachers to improve and better personalise education. Another one is in the production process. Today, we have a great shortage of what you would call craft skills in the industrialised world. We need better electricians, especially as the grid gets electrified. We need better plumbers, better carpenters. And many of the things that these workers need to do involve problem solving. Better information, which can be provided via AI tools, could hugely help, but we’re not developing those sorts of tools. Instead, what are we trying to do with AI? We’re trying to automate the tasks that they are performing, in a way that is actually what I’ve called so-so automation. Yeah, you can automate it a little bit, but you’re not going to get big productivity benefits. Sometimes you’re actually going to lose some of the skills and judgment of the workers.
Soumaya Keynes
But what would a policymaker do to help that?
Daron Acemoğlu
I think what policy can do here is get rid of the bias that exists in favour of using automated machinery. I estimate that bias is about 25 per cent, so that’s a huge subsidy to firms when they automate rather than hire workers or train their workers. And we can also provide better incentives for firms to actually do the kind of AI that will help workers, rather than more LLMs, more human-sounding bots, et cetera. For instance, we can balance the scales against the AGI type of research, where a lot of tech companies are interested in general-purpose human-sounding bots, by giving a pot of money to provide more targeted, small-scale AI tools that are going to be useful for workers.
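The mechanism behind this bias can be made concrete with a small, purely illustrative calculation. The tax rates below are hypothetical assumptions for the sketch, not Acemoğlu’s estimates; the point is only how taxing labour more heavily than capital makes automating a task look artificially cheap relative to hiring a worker to do it.

```python
# Hypothetical illustration of the labour-vs-capital tax wedge
# described above. These tax rates are assumptions chosen for the
# sketch, not estimates from the discussion or the underlying research.

labour_tax = 0.30    # assumed effective tax on worker income
capital_tax = 0.05   # assumed effective tax on equipment/software capital

pre_tax_cost = 100.0                              # cost of doing a task either way
human_cost = pre_tax_cost * (1 + labour_tax)      # 130.0 after labour taxes
machine_cost = pre_tax_cost * (1 + capital_tax)   # 105.0 after capital taxes

# The wedge acts like an implicit subsidy to automation relative to hiring,
# even though the pre-tax costs of the two options are identical.
implicit_subsidy = (human_cost - machine_cost) / human_cost
print(f"Implicit automation subsidy: {implicit_subsidy:.1%}")  # 19.2%
```

Under these assumed rates the wedge comes out near 20 per cent; the exact size depends entirely on the rates plugged in, which is why estimates like the 25 per cent figure require careful measurement of effective tax rates on labour and capital.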
Soumaya Keynes
OK. So essentially directed government subsidies?
Daron Acemoğlu
Directed, but general, meaning the government doesn’t select which technology works, but makes available a pot of money or competition or other tools to support any type of attempt that goes in a broad direction.
Soumaya Keynes
OK, well I’m going to wish a lot of luck to whatever poor official’s job it is to design something like that. Daron, thank you so much for joining me. This has been super interesting.
Daron Acemoğlu
Thank you, Soumaya.
[MUSIC PLAYING]
Soumaya Keynes
That is all for this week. You have been listening to The Economics Show with Soumaya Keynes. If you enjoyed the show, then I would be eternally grateful if you could rate and review us wherever you listen. It really helps spread the word.
This episode was produced by Edith Rousselot with original music from Breen Turner. It is edited by Bryant Urstadt. Our executive producer is Manuela Saragosa. Cheryl Brumley is the FT’s global head of audio. I’m Soumaya Keynes. Thanks for listening.