From ChatGPT to the AI Safety Summit: The year in AI

Artificial intelligence has become one of the biggest issues in tech in 2023, driven by the rise of generative AI and apps such as ChatGPT.
Since OpenAI rolled out ChatGPT to the public in late 2022, awareness of the technology and its potential has exploded, from being discussed in parliaments around the world to being used to write TV news segments.
The public interest in generative AI models has also pushed many of the world's largest tech companies to introduce their own chatbots, or speak more publicly about how they plan to use AI in the future, while regulators have increased debate around how countries can and should approach the opportunities and potential risks of AI.
In 12 months, conversations around AI have gone from concerns over how it could be exploited by schoolchildren to do their homework for them, to British prime minister Rishi Sunak hosting the first AI safety summit of nations and technology companies to discuss how to prevent AI from surpassing humanity or even posing an existential threat.
In short, 2023 has been the year of AI.
Much like the technology itself, product launches around AI moved quickly over the last 12 months, with Google, Microsoft and Amazon all following OpenAI in announcing generative AI products in the wake of ChatGPT's success.
Google unveiled Bard, a chatbot it said would have the edge over its rivals in the new AI chatbot space because it was powered by data from Google's industry-leading search engine and its established Google Assistant virtual helper, found in its smartphones and smart speakers.
On a similar note, Amazon used its big product launch of the year to talk about how it was using AI to make its virtual assistant Alexa sound and respond in a more human fashion, able to understand context and react to follow-up questions more seamlessly.
And Microsoft began the rollout of its new Copilot, its take on combining generative AI with a virtual assistant on Windows, allowing users to ask for help with any task they were doing, from writing a report to organising the open windows on their screen.
Elsewhere, Elon Musk announced the creation of xAI, a new start-up focused on work in the artificial intelligence space.
The first product from that start-up has already appeared in the form of Grok, a conversational AI available to paying subscribers to Musk-owned X, formerly known as Twitter.
Such large-scale developments in the sector could not be ignored by governments and regulators, and debate around regulation of the AI sector has also intensified during the year.
Earlier this month the EU agreed its own set of rules on AI oversight, which will give regulators the power to scrutinise AI models and require details of how they are trained, although the rules are unlikely to become law before 2025.
In November, Rishi Sunak hosted world leaders and industry figures at Bletchley Park for the world's first AI Safety Summit.
Mr Sunak and Technology Secretary Michelle Donelan used the two-day summit to discuss the threats of so-called "frontier AI", cutting-edge aspects of the technology which, in the wrong hands, could be used for nefarious means.
The summit saw all the international attendees, including the US and China, sign the Bletchley Declaration, which acknowledged the risks of AI and pledged to develop safe and responsible models.
And Mr Sunak announced the launch of the UK's AI Safety Institute, alongside a voluntary agreement with leading firms including OpenAI and Google DeepMind, to allow the institute to test new AI models before they are released.
Although not a binding agreement, it has laid the groundwork for AI safety to become an increasingly prominent part of the debate moving forwards.
Elsewhere, the AI industry ended the year with a major boardroom drama, as ChatGPT maker OpenAI sensationally ousted chief executive Sam Altman in late November.
The move sparked a backlash among staff, nearly all of whom signed a letter pledging to leave the company and join Altman at a proposed new AI research team at Microsoft if he was not reinstated.
Within days Altman was back at the helm of OpenAI and the board had been reconfigured, with the reasoning behind the saga still unclear.
The coming year is likely to see scrutiny of the AI sector continue to intensify.