Rise of the robots
The release of ChatGPT was a landmark step in the advancement of AI – but what do these developments mean for the current workforce, and how are governments addressing the risks as the technology’s influence grows?
In November 2022, artificial intelligence company OpenAI released ChatGPT – a language-processing chatbot that can do everything from coding and building webpages to writing sonnets, raps and dissertations, all in eerily human-like prose. Less than two months later, Microsoft announced plans to invest $10bn in the company. The chatbot sent ripples through almost every industry and became the fastest-growing consumer app in history; according to estimates by UBS, it had racked up 100 million monthly active users by late January – a milestone that took TikTok nine months and Facebook over four years to reach. It is now widely regarded as one of the most advanced AI developments to date. “ChatGPT is scary good,” Elon Musk tweeted in December. “We are not far from dangerously strong AI.”
Researchers and analysts have since been investigating the potential impact on jobs in everything from computer programming to writing and marketing, while some have speculated the technology could be the downfall of search engines; Gmail creator Paul Buchheit tweeted in December that “Google may be only a year or two away from total disruption. AI will eliminate the search engine result page, which is where they make most of their money.” In response, Google released rival AI chatbot Bard – one of at least 20 AI-powered products set to be showcased for its search engine this year (among them an image generation tool and an app-development assistant). Meta, meanwhile, established a new generative AI team, with Mark Zuckerberg declaring that the company’s “single largest investment was in advancing AI and building it into every one of its products.”
Recent advancements
These developments signify a notable step forward in the march of AI, and a clear advance on the likes of Siri, Alexa and other tools that have already become part of our everyday lives. They aren’t the only recent breakthroughs, of course. In the past few years, we’ve seen rapid progress in AI-powered machines, from robots working on Tesla’s assembly lines to Sophia the humanoid – the lifelike robot built by Hong Kong company Hanson Robotics that can hold conversations, mimic human facial expressions and adapt to new situations using machine learning.
Last year, Google’s DeepMind lab meanwhile used its AlphaFold system to predict the structure of nearly every protein known to science (200 million in total). AI-powered self-driving cars from General Motors’ Cruise subsidiary and Alphabet-owned Waymo have been tested on the roads of San Francisco and other US cities, while developments in deep learning and computer vision are creating ever more human-like capabilities; AI can now recognise objects and people, and some systems even claim to detect emotions or tell if someone is lying.
Revolutionising the workforce
The advantages of these developments are already being seen, of course; in healthcare, AI algorithms can create personalised treatment plans and diagnose diseases, while in agriculture, the technology can help reduce waste and optimise farming practices. It could have other environmental benefits, too; research by the Boston Consulting Group found AI could cut global emissions by up to 10 percent by 2030. In the finance sector, algorithmic trading, automated investing and AI anti-fraud defences are already common practice.
The likes of ChatGPT, Bard and DALL-E, OpenAI’s image generation tool, are now bringing AI to the creative industries, speeding up tasks that previously only humans could perform. The obvious perks include boosting efficiency, cutting costs for businesses and enhancing the workforce as a whole (see Fig 1).
A study by MIT economists Shakked Noy and Whitney Zhang, Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence, found that using ChatGPT cut the time professionals spent on writing tasks by almost half.
Another study, The Impact of AI on Developer Productivity: Evidence from GitHub Copilot, examined the effect of the AI coding assistant Copilot on software developers, and found that those using it completed a programming task 55.8 percent faster.
The tech could also help less experienced developers get a foot in the door, according to Sida Peng, co-author of the study and a PhD student in Computer Science at Zhejiang University. “Developers of all levels are experiencing productivity gains,” he says. “But when we looked at percentage increase in productivity, we saw stronger effects with less experienced developers. We see it as lowering barriers and levelling the playing field. This points to a promising future where AI tools help raise the floor on human performance and help more people transition into careers in software development.”
Robot jobs
But while so far these tools are only being used to assist humans – ChatGPT is known to give wrong answers and needs real people to fact-check its output, while Copilot requires a human developer at the keyboard – some believe AI could soon start to eat into the jobs market. A 2020 World Economic Forum report predicted that 85 million human roles would be lost to machines by 2025. Recent advancements might just have sped that up.
Pengcheng Shi, Associate Dean in the Department of Computing and Information Sciences at Rochester Institute of Technology, believes we’ll start to see major changes in the coming years. “Just like those revolutionary technologies of the past, AI will make some job functions obsolete,” he told World Finance.
“Computer programming jobs have already been impacted. In many cases, 80 percent of the code has already been written by Copilot or other AI tools,” he says. “If you’re a programmer for Microsoft or Google or Meta, your job skills will need to be far more than coding. For big tech, I’d foresee that ‘basic programmers’ will play diminishing roles over the next five to seven years.”
Some have linked the wave of tech redundancies (see Fig 2) to firms wanting to invest more in AI. Google CEO Sundar Pichai said the company’s strategy in making its layoffs was to “direct our talent and capital to our highest priorities,” and has since described AI as “the most profound technology in human history.” Zuckerberg’s announcement of Meta’s plans to invest in AI meanwhile came right in the middle of the company’s own wave of redundancies, on the same day OpenAI announced the release of GPT-4.
It’s not only programmers likely to feel the impact, of course. A research paper, How will Language Modelers like ChatGPT Affect Occupations and Industries?, looked at which sectors and roles were most likely to be affected by the new apps; it found that telemarketers and post-secondary teachers were among the jobs most exposed, with legal services, securities, commodities and investments among the key industries highlighted. It’s not hard to imagine how the likes of journalism and copywriting could be affected, too, while OpenAI’s DALL-E – which can create images from text descriptions – along with similar tools such as Craiyon and Midjourney, could expose those in the design industries.
A new economic sector
Michael Osborne, Professor of Machine Learning at the University of Oxford, believes roles requiring “a deep understanding of human beings” won’t be going anywhere just yet, though. “As a broad framework, you can expect tasks that involve routine, repetitive labour and revolve around low-level decision making to be automated very quickly,” he said in a UK government hearing on AI in January.
“For tasks that involve a deep understanding of human beings, such as the ones that are involved in all of your jobs – leadership, mentoring, negotiation or persuasion – AI is unlikely to be a competitor to humans for at least some time to come,” he told the committee. “Timelines are difficult, but I am confident in making that assessment for at least the next five years.”
And rather than spelling the end for the human workforce, optimists say the AI sector will bring about a whole raft of new jobs. The World Economic Forum report predicted it could create 97 million new roles – outweighing the 85 million lost to machines.
“We’re already seeing new specialties like prompt engineering emerge as companies look for people who can effectively engage with AI models,” says Peng. The much-publicised job advert from Google-backed AI firm Anthropic for a prompt engineer, able to elicit the best responses from chatbots for $250,000–$335,000 a year plus equity (no computer science degree required), might just be a sign of things to come.
“I believe imaginative people who can use AI – and other technologies – to solve societal challenges will be in demand,” says Shi. And he believes businesses getting on board now will likely be the ones to win. “I don’t think that every company needs an AI expert, but I do believe that every reasonably-sized business needs to have people who can bridge AI with their core business,” he says. “The fight for such talent will be fierce, but organisations cannot afford not to act quickly. They will need to rethink the strengths and weaknesses of their business models and talent pool, and hire the right people to adopt the technology to maintain competitive advantages.”
Dangers and deepfakes
Of course, as all of this tech develops, so too do the risks – and the need for regulation. Biases and inaccuracies have already been seen in the likes of ChatGPT, while Bard’s reputation was dented by a factual error in its launch demo (Google parent company Alphabet lost $100bn in market value afterwards). It’s easy to imagine the impact if we start to rely too heavily on the new technology.
“One of the key challenges that’s probably the hurdle for AI’s wide adoption in mission-critical applications such as the medical sector and intelligence, is its trustworthiness,” says Shi. Deepfakes, which use AI to synthetically create or alter an image, video or audio recording of someone (often creating fake speech), are already an area of concern. While there are some genuine use cases, including digital effects in films, there are a slew of dangers, too – not least around political propaganda, fake news, video scams and illegally created pornographic videos and images.
After the Russian invasion of Ukraine last year, a deepfake video of Ukrainian president Volodymyr Zelensky telling people to surrender circulated online. A deepfake video of Elon Musk promoting a cryptocurrency scam meanwhile went viral last year. In 2020, fraudsters in the UAE even cloned the voice of a company director to persuade a Hong Kong bank to make $35m in transfers.
While not all deepfakes are advanced enough to go undetected, some are already convincing – and it’s not hard to comprehend the ramifications if the tech becomes more advanced. “What this technology is going to do is, it’s just going to fill our world with imperceptible falsehoods,” Professor Michael Wooldridge, director of foundational AI research at the Alan Turing Institute, told Business Insider. “That makes it very hard to distinguish truth from fiction.”
An existential threat?
It’s not only around deepfakes that AI poses risks, of course. Right now, we’re still in the era of Artificial Narrow Intelligence, or ‘Weak AI’ – where technologies and bots perform pre-defined functions without thinking capabilities. The likes of ChatGPT feel one step closer to ‘Strong AI’, or AGI (Artificial General Intelligence), where machines would be able to think for themselves and make decisions.
It’s easy to foresee the dangers these further developments could present. “I anticipate that AI systems will improve drastically, very fast,” says Shi. “It’s hard to imagine what may happen if, more likely when, the line between human creativity and machine generation is blurred or even indistinguishable,” he says. “We are in the era of AI working for humans, and probably will reasonably soon enter the next era of AI and humans working together. Hopefully humans will never work for AI.”
It isn’t only Shi expressing caution. Elon Musk has repeatedly warned about the dangers of superhuman AI – intelligence that surpasses that of humans – and has called for regulation. “What happens when something vastly smarter than the smartest person comes along in silicon form?” he said in a recent interview with Fox News. “It’s very difficult to predict what will happen in that circumstance,” he added, warning of possible “civilisational destruction.” “I think we should be cautious with AI and I think there should be some government oversight because it is a danger to the public,” he said.
Back in 2014, Stephen Hawking took it a step further, telling a BBC interviewer that “I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
At the UK government hearing in January, University of Oxford researchers voiced a similar warning. “With superhuman AI, there is a particular risk that is of a different sort of class, which is that it could kill everyone,” Michael Cohen, a doctoral candidate in Engineering Science, told the Science and Technology Committee.
Cohen said he believes the appearance of superhuman AI at some point is inevitable “on our current track. There certainly isn’t any reason to think that AI couldn’t get to our level, and there is also no reason to think that we are the pinnacle of intelligence,” he said.
Professor Michael Osborne, also at the hearing, agreed that the bleak, if dramatic, scenario is possible. “AI is attempting to bottle what makes humans special – what has led to humans completely changing the face of the earth,” he said. “If we are able to capture that in a technology, of course it will pose just as much risk to us as we have posed to other species, such as the dodo.”
Proceeding with caution
This might sound hyperbolic, but it’s not just a few voicing concerns; in a recent survey by Stanford University’s Institute for Human-Centered AI, more than a third of the researchers polled said they believed decisions made by AI could lead to ‘nuclear-level catastrophe.’
For these reasons, many have highlighted the need to implement regulation before the machines get too advanced. “The global community must agree how and when we use AI,” Sulabh Soral, chief AI officer at Deloitte, said in a recent statement. “Should we ban AI research into certain areas or ban AI in certain weapons? The danger is a little research leads to one thing and then another and before we know it, it’s out of our hands, either with a bad actor, or, worse, in its own hands,” he wrote. “With a clear global consensus and rigorous regulations, we can sidestep the worst-case scenario.”
Cohen likewise believes it’s crucial to develop laws that prevent “dangerous AI” and “certain algorithms” from developing, “while leaving open an enormous set of economically valuable forms of AI.” Osborne even believes we need regulations comparable to those on nuclear weapons. “If we are all able to gain an understanding of advanced AI as being of comparable danger to nuclear weapons, perhaps we could arrive at similar frameworks for governing it,” he said at the government hearing, emphasising the importance of avoiding an ‘arms race’ between different countries and tech companies – something already being seen between the US and China.
“There seems to be this willingness to throw safety and caution out the window and just race as fast as possible to the most performant and advanced AI,” he said. “I think we should absolutely rule those dynamics out as soon as possible, in that we really need to adopt the precautionary principle and try to play for as much time as we can.”
But tech firms don’t appear to be doing that. In January, Google signalled that it would recalibrate the level of risk it was prepared to take in order to speed up AI development, according to a presentation reviewed by The Times. Chief Executive Sundar Pichai reportedly said the company had created a ‘Green Lane’ fast-track review process to accelerate development and secure approvals more quickly. “What they are saying is that the big tech firms see AI as something that is very, very valuable, and they are willing to throw away some of the safeguards that they have historically assumed and to take a much more ‘move fast and break things’ perspective on AI, which brings with it enormous risks,” said Osborne.
Global regulatory action
AI regulation would help curb some of these risks, and governments are starting to take action. In March, the UK government published a white paper setting out its framework for regulating AI, covering large language models such as ChatGPT and image-generating tools such as Midjourney. The EU has proposed its AI Act but has yet to enact it, while in the US regulation remains nascent.
One country setting a precedent is China; last March, the government introduced regulations governing how tech companies can use recommendation algorithms. Then in January, it implemented rules on deep synthesis technologies aimed at combating malicious deepfakes; these include a ban on deep synthesis services disseminating fake news. In April, the Cyberspace Administration of China (CAC) meanwhile released draft measures for managing generative artificial intelligence services such as ChatGPT.
These types of regulation could set a precedent for other nations to follow – but there’s a fine line to tread between curbing risks and not stifling innovation, according to Professor Robert Seamans, Director of the Center for the Future of Management at New York University’s Stern School of Business. “Any regulation needs to balance two things: one, safeguarding against potential harms, and two, not overly limiting advancement of technology,” he says. “Too often, the discourse on this topic buckets people into one camp or the other. I’d like to see more engagement and discussion around the pros and cons of different types of regulation of AI.”
Experts point to other challenges in creating universal standards for AI. “Ethical principles can be hard to implement consistently, since context matters and there are countless potential scenarios at play,” Jessica Brandt, policy director for the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, told VOA News.
“They can be hard to enforce, too. Who would take on that role? How? And of course, before you can implement or enforce a set of principles, you need broad agreement on what they are.”
Future challenges
Shi believes it’s not only regulation of the technology itself that governments will need to tackle, though. “The disparities in wealth and power generated by an AI-enabled economy would be something we have never seen, or even imagined,” he says. “Alongside the ethical and legal boundaries of AI and what it can and cannot do, we need policy to tackle this, and to address the cultural shock many people may face – what is the worth of our work now that much of it can be done by machines?”
He believes that if these areas can be addressed, AI’s huge positive potential can be harnessed. “As a researcher, I am optimistic by nature, and have great hope that AI will overall make our lives better,” he says. “Even though I do not see that AI will become evil on its own as many people have feared, I do see that human flaws in ourselves may lead us down that path – hence the necessity of these three, ideally universally agreed upon, accords.”
It remains to be seen how exactly things will develop, of course. “We are just at the beginning of the age of AI,” says Seamans. “I suspect there will be some incredibly innovative use cases that emerge that change the way our economy and society work, much in the way that steam engines and electricity changed economies and societies. We are yet to see what those use cases are.”
Indeed, if governments and tech firms can strike the right balance, implementing regulation without stalling innovation, the world stands to gain plenty. If they don’t get it right, only time will tell what the ramifications might be – and whether the scientists’ bleak forecasts ring true. Let’s hope we never get to find out.