nspiration In 1966, Joseph Weizenbaum created ELIZA at MIT—a chatbot that simulated a Rogerian psychotherapist using simple pattern matching. What surprised Weizenbaum wasn't the technology itself, but how readily people formed emotional connections with it, even knowing it was just a program.
Nearly 60 years later, we have LLMs capable of genuine contextual understanding. This raised an interesting question: what would ELIZA look like if we resurrected it with modern AI, while preserving its original therapeutic persona?
What I Built eliz-ai is a terminal-based conversational agent that combines:
The original DOCTOR script (47 keyword patterns, pronoun reflection, decomposition rules) An LLM layer (GPT-4o-mini) strictly constrained to maintain 1966 Rogerian therapist behavior Retro terminal aesthetics with typing effects and ASCII art The core challenge was keeping the LLM in character. Modern language models naturally want to be helpful—they give advice, offer solutions, use contemporary terminology. ELIZA doesn't do any of that. She reflects, she asks questions, she creates space for the patient to explore their own thoughts.
How Kiro Helped Kiro's features were essential for maintaining historical authenticity throughout development:
Steering files enforced persona rules across every interaction—constraints like "never use modern therapy terms" and "never acknowledge being AI" stayed active throughout the build Specs provided structure for complex components like the pattern-matching engine Hooks automated quality checks, catching anachronistic language before it shipped MCP enabled transcript export and emotion tagging for conversation analysis Challenges The main technical hurdle was a race condition where concurrent output from the ELIZA engine and LLM layer caused garbled terminal display. Solved this with an input queue that processes one exchange at a time.
The bigger challenge was prompt engineering. The LLM consistently drifted toward modern assistant behavior—offering advice, using contemporary therapy language. It took several iterations to craft constraints that kept responses authentically 1966.
Reflection Weizenbaum's original concern remains relevant. Even with full awareness that you're talking to AI, the conversational flow creates a compelling illusion of understanding. The difference now is that the illusion is far more convincing—which makes the ethical questions more pressing, not less.
Built With
- eliza
- kiro
- openai
- terminal
Log in or sign up for Devpost to join the conversation.