AI expert delays timeline for its possible destruction of humanity


A leading artificial intelligence expert has pushed back his timeline for AI doom, saying it will take longer than he initially predicted for AI systems to code autonomously and thereby accelerate their own development toward superintelligence.

Daniel Kokotajlo, a former employee of OpenAI, sparked an energetic debate in April by releasing AI 2027, a scenario that envisions unchecked AI development leading to the creation of a superintelligence, which — after outfoxing world leaders — destroys humanity.


© Examiner Echo Group Limited