You've probably heard that Bill Gates, Stephen Hawking, and Stuart Russell have warned of the dangers posed by AI. What are these risks, and what basis do they have in AI practice? We'll describe the more philosophical argument that a superintelligent AI system pursuing the wrong goal would lead to an existential catastrophe. Then we'll ground this argument in current AI practice, arguing that it is plausible both that we will build superintelligent AI in the coming decades and that such a system would pursue an incorrect goal. We're very open to being wrong about this, but believe it's worth talking about!

In the Chancellor's Building, room 3.10, at 6:15 on Friday the 22nd.

Check this out if you'd like an accessible tour of the AI safety field: https://aisafety.dance/#aisafetyforfleshyhumansawhirlwindtour
