May 2024
The last 12 months have seen Artificial Intelligence (AI) and the discourse around it continue to rapidly evolve.
Following up on last year’s research, we ran four extensive new nationally representative polls of adults across the US and the UK, asking the public their views on a range of AI issues: their feelings towards it, how they use AI today, how they expect it to evolve, and what they want the government to do in response. This report explores the findings from our US survey work.
We asked them their views on everything from AI agents to misinformation, whether an AI could pass the Turing Test and how important it was for the US to maintain a technological lead ahead of China.
Here are some of the more interesting things that we found:
Since the debut of ChatGPT in late 2022, there has been a surge of interest in artificial intelligence (AI). The possibilities of this technology are immense. Imagine every student having access to an empathetic tutor, every patient being able to instantly contact a medical expert, and every traveler having a real-time translator at their fingertips. As AI becomes more helpful, intuitive, and reliable, the technology will have a profound impact on virtually every area of the economy and society, from how people work and learn to how they play and socialize. In addition to boosting productivity and competitiveness, thereby raising standards of living, AI has the potential to address major global challenges, such as public health, clean energy, and environmental concerns.
Yet despite so many bright spots on the horizon, it is tempting to fear change. Some worry about how AI may affect their lives or livelihoods or agonize over the risk that AI may unleash bias and misinformation. Others see elements once relegated to science fiction, such as self-driving vehicles, virtual companions, and intelligent agents, becoming part of their world and fear that the darkest elements of these tales—machines destroying human civilization—may also come to pass.
It is with this bifurcated vision of the future that policymakers must confront the key question of how the United States should respond to the rise of AI. Should it prioritize accelerating innovation to maximize benefits or slowing down innovation to minimize risks? Indeed, the question is not merely rhetorical as hundreds of experts issued a demand last year for a six-month pause on AI research. In this environment, public opinion on AI is crucial. Elected officials are chosen to represent the will of the voters, and their policies and priorities tend to reflect public sentiment. At the same time, elected officials are chosen to lead and make informed decisions based on their constituents’ long-term interests.
This poll shows Americans have a multitude of views about AI. They are curious, interested, worried, and amazed. They think the technology will help advance science, make jobs less mundane, and improve health care, but they also worry that it could help create misinformation, enable hacking, and cause job losses. Overall, they are divided between AI optimists who believe the technology will make things better, and AI pessimists who think it will make things worse. And many have not yet made up their mind.
To remain a global leader in AI, the United States faces two big challenges. First, it must stay on the frontier of AI development, which requires maintaining a robust ecosystem of AI skills, chips, data centers, and models. Second, it must lead in AI adoption, especially in trade sectors of the economy where it faces global competition, and areas like education and government where there are significant rewards. Both efforts will require extensive coordination and cooperation between the public and private sectors, as well as a regulatory environment that fosters responsible innovation.
Policymakers have their work cut out for them, not only to design the right policies for AI that keep the United States on course in the global AI race, but also to build public support for these initiatives. This task is especially challenging because AI, and the companies that make it, are regularly vilified in the media. As more Americans use and experience the benefits of AI and realize that their worst fears have not come to pass, hopefully public support will follow.
Equally, in a new question, when we asked about Americans’ expectations of the future impact of AI, we saw a mixed picture. On average, across a range of personal and societal categories, Americans were moderately more likely to have positive expectations than negative ones – but this gap never exceeded 10 percentage points, and a significant proportion were unsure, particularly about how AI would affect them personally.
For ChatGPT, we can also compare usage year on year – with the proportion who say they have used it multiple times increasing from 20% to 35%.
0%
of Americans using LLM-based chatbots say they have become an essential tool they use regularly
0%
of Americans using LLM-based chatbots say they find them helpful
0%
of Americans using LLM-based chatbots say they use them from time to time, but would not miss them if they didn’t exist
When we asked what use cases people had tried, the most common was to help explain something – with around two-thirds of users saying they had done this. After that, around half of users said that they used them to help brainstorm ideas or write text.
AI is likely to be one of the most significant economic drivers in the next twenty years. The IMF this year estimated that AI could boost productivity in an advanced economy like the UK by 1.5%,1 similar to predictions last year by Goldman Sachs for the US.2
In our polling, when we asked about the potential benefits from AI we saw an interesting dichotomy: while the most widely recognized benefits were accelerating scientific advancement and increasing productivity across the economy, respondents were much less likely to believe that this would translate into increased wages for workers, with this being the least popular choice.
When it came to personal use cases, however, we saw a widespread interest in at least giving AI a try in a variety of roles: from basic research to giving early warning of a new medical condition.
The current wave of AI hype was largely driven by the arrival of ChatGPT – but to what extent are people actually using LLM-based chatbots like it at work?
In our poll, just over a quarter of American workers told us that they had used a chatbot at work – but about two-thirds of those who had used them said they found them helpful or very helpful.
0%
of American workers say that they have used an LLM chatbot tool at work
0%
of American workers using LLM-based chatbots say that they find them helpful or very helpful
0%
of American workers using LLM-based chatbots say that they have become an essential tool they use regularly
Those workers who are already using AI tools seem to be classic early adopters: around half of them said they had decided to use these tools on their own, had worked out how to use them themselves, and learn best from exploring and experimenting on their own.
0%
of American workers using LLM-based chatbots say that they worked out how to use those AI tools themselves
0%
of American workers using LLM-based chatbots say that they decided to use those AI tools themselves
0%
of American workers using LLM-based chatbots say that they learn best from exploring and experimenting with AI tools themselves
While overall more workers wanted to teach themselves rather than have formal training, this was not true for everyone: workers over 55 years old were significantly more likely to express a desire for AI skills training.
Alongside the economy, one of the most significant other opportunities from AI is to speed up the diagnosis and treatment of health conditions.
Given the many sensitivities in this space, when first asked, Americans are understandably unsure about using AI to diagnose, with just over a third (37%) saying they would support this.
0%
of Americans support using AI to diagnose patients
0%
of Americans say that they oppose using AI to diagnose patients
Ever since science fiction writers first conceived the idea of an artificial intelligence, we have been inundated with stories about the many ways they can go wrong.
Given this, perhaps unsurprisingly, we saw a reasonably high level of self-reported familiarity across a range of risks, with the most common being the potential for unemployment. Interestingly, one potential risk that did not seem to have cut through yet was the potential for AI to significantly increase electricity consumption, with only around a third of Americans saying they were familiar with this.
Whether they were familiar with risks or not, how worried were they about them?
When we changed the question to this, we saw concerns from unemployment fall slightly down the list, while misinformation went to the top – perhaps reflecting its perceived greater urgency.
Many of these worries were relatively nonpartisan. Both Republicans and Democrats were almost equally worried about the potential of AI-driven unemployment or misinformation. Republicans, however, were more likely to be worried about the potential for AI to be biased against people with different political views, or being used by criminals or for government surveillance. Democrats, by contrast, had greater concerns over personal deepfakes, increased electricity consumption and bias against marginalized groups.
While AI may create some risks, how much of that risk is additional to the risks that already exist? After all, some have argued, you can already create fake images of someone if you want to, while new AI-driven technologies may actually help reduce risk in many of these areas.
In our polling, Americans were relatively unconvinced by this argument: across the range of risks we presented them, from embarrassing videos to human extinction, they seemed to believe that AI represented a significant increase in risk.
Perhaps unsurprisingly, voters were more likely to think that the other side would benefit most from misinformation: Biden supporters thought Trump would be helped most, and Trump supporters thought Biden would.
Similarly, the groups that our respondents were more worried about using AI to manipulate people were not domestic politicians but criminals, terrorists and foreign governments.
New technologies have always changed the structure of the economy – but one of the more unusual things about AI is that there is significantly more uncertainty about who it is likely to affect and how.
When we asked people to give a score out of 10 on how likely they thought it was an AI could do their job as well as them in the next 20 years, we saw a widespread range in views – with an average score of 4.7.
Nor did this score vary much by income level or education – although those with a Bachelor’s or Master’s degree were slightly more likely to believe that AI could do their job than those with just a high school education.
When we asked our poll respondents to rate what jobs in general in the economy they thought might get automated, they gave a ranking very similar to many expert views today: with computer programming, routine manufacturing jobs and customer services agents at the top. By contrast, Americans were less convinced that AIs would be able to take on the roles of scientists, musicians, actors or doctors.
Corresponding with this, in general the people in our poll thought that AI was likely to reduce the relative importance of data analysis, coding and graphic design skills – while raising the importance of persuading other humans.
0%
of Americans say that they think it likely AI will increase unemployment
0%
of Americans say that governments should try to prevent human jobs from being taken over by AI or robots
0%
of Americans say that the Government and companies should offer formal retraining and skills programs to people like me to help them to transition to different careers
While the 2030s are not very far away, this would suggest that the public are roughly aligned with prediction markets – which also suggest that a date in the 2030s is most likely.
As in last year’s report, we saw many people did not see intelligence in purely analytical terms – with over 40% believing that an AI would have to be capable of feeling emotions to be as smart as a human. This is only slightly below the proportion who thought an AI would have to feel emotions to be conscious.
If a superintelligent AI was created – an AI significantly more intelligent than any human – what would this mean for the world? Such an AI could potentially develop many new powerful technologies, but could in itself be a significant risk.
In our polling, we saw that Americans were largely more wary than welcoming of the idea of a superintelligence:
0%
of Americans say that trying to create a superintelligence is a good idea
0%
of Americans say that trying to create a superintelligence is a bad idea
0%
of Americans say that trying to create a superintelligence is dangerous
0%
of Americans say that they were not aware many leading AI labs are trying to create a superintelligence
Given both the potential benefits and risks of a superintelligence, only a small minority of Americans thought we should try to accelerate its development – while around a third thought respectively that we should stay at today’s pace or actively slow down.
0%
of Americans say that, given the potential benefits and risks from advanced AI, we should look to accelerate development of this technology
0%
of Americans say that, given the potential benefits and risks from advanced AI, we should develop it at around the same pace as we are now
0%
of Americans say that, given the potential benefits and risks from advanced AI, we should look to slow its development
As part of our poll, we asked our respondents their views on a wide range of policies that other people have suggested: everything from clear labeling to a pause on new research.
To get a better view of how urgent a particular issue might be, we asked not just whether a policy should happen now or was a bad idea, but also allowed respondents to say that, while they didn’t think it was necessary yet, they were open to it later on.
Across the population we saw a majority of respondents supporting a wide range of policies that they believed should happen now:
Despite supporting this range of policies, however, 60% of Americans also agreed that we need to move cautiously before creating new laws and regulations, to avoid unintended consequences.
The only policies that more people saw as a bad idea than thought should happen now were:
While a majority of Americans might initially say they support more regulation overall, how strong is this support? Most pressingly, do they maintain this view even if it would have a material impact on AI progress overall – and risk other countries taking the technological lead?
When we asked people to make a forced decision between the two, we saw much more mixed opinions:
0%
of Americans thought that the US should seek to stay at the technological frontier, developing new AI systems rapidly to ensure it has the world’s most powerful systems
0%
of Americans thought that the US should develop new AI systems responsibly, even if this means slowing down and letting other countries like China take the lead
This divide did not seem to be overwhelmingly driven by any particular demographic or ideology. Older, wealthier, better-educated and more right-wing Americans were slightly more likely to prioritize staying at the technological frontier, but even among those groups we saw a consistent third or so choosing responsible development instead, and a high proportion who didn’t know.
Similarly, when we gave people a list of arguments for both sides – prioritizing staying at the lead, or responsible development – we saw almost equal agreement across all of them.
As part of the poll, we also asked respondents to explain in a sentence or two why they chose one way or the other. In general, those who believed it was important that the US remain at the technological frontier had relatively similar views on why it was dangerous to let other countries such as China get ahead – whereas those who prioritized safety had a broader range of reasons why they feared moving too fast with AI.