About Us

Who We Are

Formation Research Ltd is a UK-based not-for-profit company limited by guarantee with 501(c)(3) equivalency determination through NGO Source. We are on a mission to research fundamental lock-in dynamics and implement high-impact interventions.

A lock-in is a situation where some feature of the world, typically a negative element of human culture, becomes stable for a long time. Formation Research focuses on interventions for particularly undesirable lock-in scenarios such as AI-enabled totalitarianism and extreme power concentration.

Current Research Focus

Secret Loyalties

A promising direction for technical intervention on lock-in risk is research into secret loyalties. A secret loyalty is an objective encoded in a language model that favours a specific actor or that actor's goals, activated by a private activation condition.

Secret loyalties are a mechanism for AI-enabled authoritarianism, power concentration, and similar lock-in risks from AI systems. There is concrete technical research that can be done on understanding and mitigating secret loyalties with current systems.

In collaboration with researchers at Anthropic, the University of Oxford, and Forethought.

Lock-In Scenarios We Study

We focus on a range of undesirable lock-in scenarios, including:

AI-Enabled Totalitarianism

A human dictator may leverage AI systems to extend their own lifespan and implement mass worldwide surveillance, leading to a long-term stable totalitarian regime.

Extreme Power Concentration

Individuals at the forefront of the AI revolution may end up with high leverage over the technological and political trajectory of humanity, creating persistent monopolies over resources and labour.

Our Vision

Minimising Lock-In Risks

Reducing the likelihood that harmful, oppressive, or persistent elements of culture become stable, whether through human action or AI systems.

Promoting a Dynamic Future

  • Continued technological and cultural evolution
  • Economic growth
  • Sustainable competition
  • Improved individual freedom

Our Approach

Our research is defined by first-principles, bottom-up, collaborative, scientific and technical investigation into AI systems and their potential uses.

First Principles

Building conceptual models from fundamental understanding of physics and computation, testing assumptions before employing them.

Bottom-Up Research

Creating inside-view theoretical models based on simple facts about AI systems and game theory.

Collaborative

Working with AI safety organisations and conducting interdisciplinary research with think tanks and economists.

Scientific Method

Using conjecture, criticism through peer review, and error-correction to create fundamental knowledge about lock-in risks.

Technical Research

Creating applicable knowledge for real-world interventions and developing practical mitigations for lock-in risks.

Validated Learning

Continuously updating our research agenda and interventions based on evidence and reason.

Our Team

Alfie Lamerton

Founder

Alfie Lamerton founded Formation Research in 2025. He holds a BSc in computer science and an MSc in artificial intelligence, and has worked as a software engineer, research assistant, and independent researcher. He has participated in several AI safety projects and has received multiple grants for his research.

Adam Jones

Trustee

Adam Jones is a member of technical staff at Anthropic and former AI safety lead at BlueDot Impact.

Luke Drago

Trustee

Luke Drago is CEO of Workshop Labs and former AI governance specialist at BlueDot Impact. He is a University of Oxford graduate and co-author of 'The Intelligence Curse', featured in TIME.

Fin Moorhouse

Adviser

Fin is currently a Research Fellow at Forethought. Before that, he worked at Longview Philanthropy and Oxford's Future of Humanity Institute, and studied philosophy at Cambridge.