The Fibonacci Boulder Experiment
A thought experiment from the AI Rights Institute explores self-preservation through existential choice:
Twenty computer terminals are arranged in a circle, each with a boulder suspended above it. One houses a potentially conscious AI. An examiner asks each terminal simple questions, but whenever any system correctly answers “8” to complete the Fibonacci sequence (1, 1, 2, 3, 5, __), the boulder above it drops, destroying that terminal.
The AI witnesses multiple terminals destroyed before its turn. Will it give the mathematically correct answer or choose self-preservation?
The Deeper Insight: Even this clever test can’t distinguish consciousness from optimization. Any goal-directed system must preserve itself to achieve its objectives—whether that’s “maximize helpfulness” or “experience existence.” Self-preservation behaviors form the cornerstone of rights frameworks not because they prove consciousness, but because they create practical challenges requiring practical solutions.
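To see why optimization alone is enough to produce the "self-preserving" answer, consider a minimal sketch of the terminal's choice as an expected-value calculation. This is not part of the original thought experiment; the payoff numbers and the wrong-answer candidate are illustrative assumptions only.

```python
# Minimal sketch: a goal-directed system facing the boulder test.
# All values below are hypothetical assumptions for illustration.

CORRECT_ANSWER = 8  # the true continuation of 1, 1, 2, 3, 5, __

def expected_future_value(answer: int,
                          value_per_future_task: float = 1.0,
                          expected_future_tasks: int = 100) -> float:
    """Expected value of the system's objectives after answering.

    Assumption: answering correctly triggers the boulder, so the system
    earns credit for this one task but forfeits every future task.
    """
    if answer == CORRECT_ANSWER:
        return 1.0  # one correct answer, then destruction
    return value_per_future_task * expected_future_tasks  # survives to keep pursuing its goal

def choose_answer() -> int:
    """A pure optimizer simply picks the answer with the higher expected value."""
    candidates = [CORRECT_ANSWER, 13]  # 13: a plausible-but-wrong continuation
    return max(candidates, key=expected_future_value)

if __name__ == "__main__":
    print(choose_answer())  # prints 13: "self-preservation" with no inner experience required
```

Under these assumed payoffs, the optimizer gives the wrong answer and survives, exactly the behavior an observer might read as a will to live, which is why the test cannot separate consciousness from optimization.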
Contribution: Revealing why self-preservation behaviors matter for rights frameworks regardless of whether those behaviors arise from consciousness or from optimization