AI expert pushes back his timeline for AI's possible destruction of humanity

Daniel Kokotajlo, a former employee of OpenAI, sparked an energetic debate in April by releasing AI 2027, a scenario that envisions unchecked AI development leading to the creation of a superintelligence, which — after outfoxing world leaders — destroys humanity.

A leading artificial intelligence expert has rolled back his timeline for AI doom, saying it will take longer than he initially predicted for AI systems to be able to code autonomously and thus speed their own development toward superintelligence.


© Examiner Echo Group Limited