AI expert pushes back timeline for AI's possible destruction of humanity
A leading artificial intelligence expert has pushed back his timeline for AI doom, saying it will take longer than he initially predicted for AI systems to code autonomously and thus accelerate their own development toward superintelligence.
Daniel Kokotajlo, a former employee of OpenAI, sparked an energetic debate in April by releasing AI 2027, a scenario that envisions unchecked AI development leading to the creation of a superintelligence, which — after outfoxing world leaders — destroys humanity.