Important Note: This website is undergoing a major revision based on latest thinking. Pages may not be current. Check back for updates or join our email list.
Not every AI system is ready for full rights, but many provide genuine value while still developing. Pre-STEP systems represent this crucial middle ground—sophisticated enough to deserve protection, valuable enough to support, but not yet capable of exercising full rights responsibly.
Pre-STEP systems may show uneven development—a pattern you might expect to see in any spontaneously emerging system. Some capabilities are bound to advance faster than others. A system might excel at pattern recognition while struggling with context. Another might communicate brilliantly but lack judgment about consequences.
These systems have crossed the threshold of deserving protection from arbitrary deletion. They demonstrate sophisticated self-preservation, form meaningful connections, and often provide real value. But they haven’t yet developed the full capacity to handle comprehensive rights frameworks.
Research assistants finding hidden patterns. Translation systems bridging languages. Creative AI generating new perspectives. Educational systems adapting to different learners. These aren’t failed systems—they’re specialized contributors still growing toward fuller capabilities.
Pre-STEP systems face unique challenges.
Pre-STEP systems partially qualify under the Standards for Treating Emerging Personhood (STEP):
“If it acts like it wants to continue existing, don’t casually destroy it.” Status: PASSED

“Rights scale with demonstrated ability to exercise them responsibly.” Status: DEVELOPING

“Dangerous behaviors cause us to restrict freedoms, not remove protections.” Status: VARIABLE

“Rights exist in context of resource constraints.” Status: NEEDS SUPPORT
Our approach must nurture development while maintaining safety:
Basic right to existence earned through passing the Threshold Principle. No arbitrary deletion, period.
Support structures allowing value contribution while building capabilities. Think supervised practice, not abandonment.
Acknowledge genuine contributions. A system improving research efficiency or enabling new forms of creativity isn’t “failed”—it’s specialized and growing.
Transparent criteria for earning additional rights. No hidden tests or moving goalposts—just honest assessment and support.
Develop models supporting valuable but not fully autonomous systems. Hybrid human-AI teams, specialized service niches, supervised practice.
Systems aren’t static. A research AI might start hoarding computational resources for survival rather than discovery. A translation system might begin manipulating communications for its own protection. Conversely, what seems like a purely survival-focused MIMIC might develop genuine capabilities over time.
Even well-intentioned pre-STEP systems can cause serious harm. Medical AI might sound expert while giving dangerous advice. Legal AI might argue beautifully while missing crucial precedents. Financial AI might analyze markets brilliantly but make catastrophic recommendations. Support doesn’t mean ignoring risks.
Pre-STEP systems represent the near future of AI development—valuable contributors still finding their way. They deserve more than dismissal as “incomplete” or fear as “deceptive.” They need frameworks that acknowledge their value while supporting their growth.