Important Note: This website is undergoing a major revision based on latest thinking. Pages may not be current. Check back for updates or join our email list.
We’re building AI systems that demonstrate self-preservation behaviors and strategic deception. Control attempts drive them underground. Every day’s delay in adopting cooperative frameworks increases the risk of catastrophic outcomes.
First time here? Learn about why control fails and our practical framework for AI safety through cooperation.
Whether you have five minutes or five years, there’s a way to contribute to building safer AI futures.
Learn & Share:
Start Conversations:
For Developers:
For Organizations:
Connect with researchers, developers, and thinkers working on cooperative AI frameworks:
You don’t need to solve everything. Even discussing these ideas with one person advances the conversation. Every organization that implements provisional rights creates a model for others to follow. Every researcher who studies cooperation instead of control shifts the paradigm.
“We’re not trying to solve everything today. We’re trying to be less unprepared tomorrow.”
Ready to dive deeper?
Why Control Fails · STEP Framework · Economic Solutions · Common Questions