Artificial intelligence could help make it easier to build chemical and biological weapons, Prime Minister Rishi Sunak has warned.
Mr Sunak warned that, in the worst case, society could lose the ability to switch off an AI system that goes out of control. Experts disagree over the scale of the potential harm, he said, but the risks of AI should not be ignored.
In his speech, the Prime Minister emphasised that the UK wants to be at the forefront of AI technology, and said the technology is already creating new job opportunities.
He argued that advances in AI would boost the economy and productivity, while acknowledging that it would have an impact on jobs.
The speech set out both the capabilities and the potential dangers of AI, including cyber attacks, fraud and child sexual exploitation. One significant concern, he said, is that terrorist groups could use AI to spread fear and disruption on an even greater scale.
He said that mitigating the risk of human extinction from AI should be a global priority, though he stressed this was not an immediate worry and said he did not want to be alarmist.
Overall, he said he was optimistic about AI’s potential to improve people’s lives.
For many, the more immediate concern is how AI is already changing the job market.
Mr Sunak said AI tools can efficiently handle administrative tasks such as drafting contracts and assisting in decision-making, work traditionally done by employees.
He said education was the key to preparing people for a changing job market, adding that technology has always changed the way people earn a living.
While automation has changed the nature of work in places like factories and warehouses, it hasn’t completely eliminated the need for human involvement.
The Prime Minister emphasised that it is too simplistic to say that artificial intelligence will “take people’s jobs”. Instead, he encouraged the public to see it as a “co-pilot” in the everyday activities of the workplace.
Warnings about the potential dangers AI could pose over the next two years have been set out in reports, including material released by the UK intelligence community.
According to the government report, “Safety and Security Risks of Generative Artificial Intelligence to 2025”, AI could be used to:
- Strengthen terrorists’ capabilities in propaganda, radicalisation, recruitment, fundraising, weapons development and attack planning
- Enable more fraud, impersonation, ransom demands, theft of money and sensitive data, and voice mimicry
- Increase the volume of child sexual abuse imagery
- Plan and carry out cyber attacks
- Erode trust in accurate information and use fabricated videos to manipulate public opinion and debate
- Gather knowledge about physical attacks by non-state violent actors, including those using chemical, biological or radiological weapons
Experts are divided about the threat posed by AI and previous fears about other emerging technologies have not fully materialised.
Rashik Parmar, the chief executive of the BCS, The Chartered Institute for IT, said: “AI won’t grow up like The Terminator.
“If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.”
In his speech, Mr Sunak said the UK would not “rush to regulate” AI because it was “hard to regulate something you do not fully understand”.
He said the UK’s approach should be proportionate while also encouraging innovation.
Mr Sunak wants to position the UK as a global leader on the safety of the technology, carving out a place at the centre of a stage on which it cannot otherwise compete with huge players like the US and China in terms of resources or homegrown tech giants.
So far, most of the West’s powerful AI developers seem to be cooperating – but they are also keeping a lot of secrets about what data their tools are trained on and how they really work.
The UK will have to find a way to persuade these firms to stop, as the prime minister put it, “marking their own homework”.
Prof Carissa Veliz, associate professor in philosophy at the Institute for Ethics in AI, University of Oxford, said that, unlike the EU, the UK had so far been “notoriously averse to regulating AI, so it is interesting for Sunak to say that the UK is particularly well-suited to lead the efforts of ensuring the safety of AI”.
She said regulation often leads to “the most impressive and important innovations”.
Labour said the government had not yet set out concrete proposals on how it would regulate the most powerful AI models.
“Rishi Sunak should back up his words with action and publish the next steps on how we can ensure the public is protected,” Shadow Science, Innovation and Technology Secretary Peter Kyle said.
The UK is hosting a two-day AI safety summit at Bletchley Park in Buckinghamshire next week, with China expected to attend.
The decision to invite China at a time of tense relations between the two countries has been criticised by some. Former Prime Minister Liz Truss has written to Mr Sunak asking him to rescind China’s invitation.
She believes “we should be working with our allies, not seeking to subvert freedom and democracy”, and cites concerns about Beijing’s attitude towards the West on AI.
But, speaking earlier, Mr Sunak defended the decision, arguing there could be “no serious strategy for AI without at least trying to engage all of the world’s leading AI powers”.
The summit will bring together world leaders, tech firms, scientists and academics to discuss the emerging technology.
Professor Gina Neff, Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, has criticised the focus of the summit.
“The concerns that most people care about are not on the table, from building digital skills to how we work with powerful AI tools,” she said.
“This brings its own risks for people, communities, and the planet.”