John Burden

Programme Co-director: Kinds of Intelligence, Senior Research Fellow,

Leverhulme Centre for the Future of Intelligence

Senior Research Associate,

Centre for the Study of Existential Risk

My work focuses on the challenges of evaluating the capability and generality of AI systems, with a particular emphasis on Artificial General Intelligence.
You can see my published work and blog using the buttons below.


Current Research

Robust Evaluation of Cognitive Capabilities and Generality in Artificial Intelligence

I am a post-doc on the RECOG-AI project at the Leverhulme Centre for the Future of Intelligence. RECOG-AI aims to improve the cognitive evaluation of AI systems, taking inspiration from comparative psychology and psychometrics.

Stable Foundations

This project focuses on identifying and mitigating risks that arise from the ubiquity of foundation models. Understanding how these models interact with each other and with users is key to ensuring a safe AI ecosystem.

Previous Research

Automating Abstraction for Potential-Based Reward Shaping

This was my PhD project, in which I developed methods for agents to learn their own abstractions for Reinforcement Learning. Automating the abstraction process helped agents glean more useful information from their experiences and improve their learning speed.

Paradigms of Artificial General Intelligence and Their Associated Risks

This project seeks to identify the forms that AGI could take and the types of risks each may pose.
Much of this work focuses on the evaluation of AI systems, specifically their capabilities, generality, and safety.
A thorough understanding of these properties of AI systems is essential for ensuring that AI is beneficial to humanity.