WARNING: This is a somewhat technical exposé on how I created a @Dave bot that contains all my writing and how I can chat with my writerly self. Yeah, this is AI, but used in a way that keeps it private and bespoke.
Anthropic’s Claude is an amazing tool for so many uses. But he doesn’t know anything about me (thankfully, I know!). And teaching him about me only gets me so far: I can create a custom context that holds maybe 150k words at most. That burns way too much AI juice and, well, it’s incomplete and tedious.
Enter this solution: https://parallax-labs.github.io/context-harness
This is a tool, written in Rust, that creates a private database loaded with whatever context you like. It was written by one of my work-mates who, frankly, is about as smart as they come when we’re talking software.
He walked me through setting it up on my laptop, and this is what we did:
- I fetched all my post content from WordPress: ~1000 posts plus comments (there’s a sketch of this step after the list).
- I downloaded the Markdown-formatted versions of my two novels from Google Docs.
- We built a special Lua text parser for each file type.
- We installed his Context-Harness on my laptop and got it running.
- We then imported all my personal content into that thing — all running locally.
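For the curious, here’s roughly what the fetch-and-parse steps look like. This is a minimal sketch, not our actual script: it leans on the standard WordPress REST API (posts live at /wp-json/wp/v2/posts; comments at /wp-json/wp/v2/comments paginate the same way), and the site URL, HTML stripping, and corpus.json output are placeholders of my own. Our real parsers were Lua scripts for context-harness; Python here is just for show-and-tell.

```python
# Sketch: pull every post from a WordPress site via its REST API and strip
# the HTML down to plain text. The endpoint is standard WordPress; the site
# URL and output format are illustrative, not context-harness's own.
import html
import json
import re

import requests

SITE = "https://example.com"  # placeholder; your WordPress site


def fetch_all_posts(site: str) -> list[dict]:
    """Page through the WordPress REST API until it runs out of posts."""
    posts, page = [], 1
    while True:
        resp = requests.get(
            f"{site}/wp-json/wp/v2/posts",
            params={"per_page": 100, "page": page},
            timeout=30,
        )
        if resp.status_code == 400:  # WordPress's signal for "past the last page"
            break
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        posts.extend(batch)
        page += 1
    return posts


def strip_html(rendered: str) -> str:
    """Crude HTML-to-text: drop tags, unescape entities, squeeze whitespace."""
    text = re.sub(r"<[^>]+>", " ", rendered)
    return re.sub(r"\s+", " ", html.unescape(text)).strip()


if __name__ == "__main__":
    records = [
        {"title": strip_html(p["title"]["rendered"]),
         "text": strip_html(p["content"]["rendered"])}
        for p in fetch_all_posts(SITE)
    ]
    with open("corpus.json", "w") as f:
        json.dump(records, f, indent=2)
    print(f"Saved {len(records)} posts")
```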
At this point I could query the database, which exposes all my text as embeddings (mathematical representations of my posts and chapters).
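If “embeddings” sounds mystical, here’s the whole trick in miniature: each chunk of text becomes a vector of numbers, and “means something similar” becomes “points in a similar direction.” A toy example using the sentence-transformers library; the model name and the three-line corpus are stand-ins of mine, not what context-harness actually uses:

```python
# Toy illustration of embeddings-based search, the mechanism behind the
# database queries above. Model choice and "corpus" are stand-ins.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [
    "A post about nihilism and whether purpose is something we invent.",
    "A chapter where the crew debates first contact with an alien probe.",
    "Notes on Stoicism and controlling only your own reactions.",
]
corpus_vecs = model.encode(corpus, normalize_embeddings=True)

query_vec = model.encode(["what does Dave think about meaning?"],
                         normalize_embeddings=True)[0]

# With normalized vectors, cosine similarity is just a dot product.
scores = corpus_vecs @ query_vec
best = int(np.argmax(scores))
print(f"Best match ({scores[best]:.2f}): {corpus[best]}")
```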
But, that’s not all.
We then exposed this embeddings database as what’s called a Model Context Protocol (MCP) server. This is like using a side process to serve you functionality: a microwave sitting over there that you can ask to cook things.
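I can’t speak to how context-harness wires up its server internally, but to give a feel for what an MCP server looks like, here’s a generic sketch using the official Python MCP SDK (pip install mcp). The search_corpus function is a hypothetical stand-in for the embeddings lookup above:

```python
# Generic sketch of an MCP server exposing one "search" tool. How
# context-harness actually implements its server, I can't say;
# search_corpus() is a hypothetical stand-in for the embeddings lookup.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("dave-corpus")


def search_corpus(query: str, top_k: int) -> list[str]:
    """Hypothetical: embed the query, return the top_k closest chunks."""
    raise NotImplementedError("wire this to your embeddings database")


@mcp.tool()
def search_writing(query: str, top_k: int = 5) -> list[str]:
    """Return the passages from Dave's posts and novels most relevant to query."""
    return search_corpus(query, top_k)


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio; the "microwave" is now plugged in
```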
We then hooked that MCP server for my content database into a coding program called Cursor. This allows Cursor to talk to my content when I ask questions or pose requests.
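On the Cursor side, MCP servers are registered in an mcp.json file (project-level .cursor/mcp.json, or a global one). The server name, command, and path below are placeholders; context-harness presumably provides its own command:

```json
{
  "mcpServers": {
    "dave-corpus": {
      "command": "python",
      "args": ["/path/to/corpus_server.py"]
    }
  }
}
```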
I’ve essentially built the means to have a chat with myself as a writer, such as I am. Claude (the underlying LLM that Cursor uses, for me at least) is powerful on its own; add retrieval-augmented generation (RAG) and analysis over my data, and I can now bring my entire corpus to bear as context for my writing efforts.
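Stripped to its skeleton, that RAG loop looks like this. Both helpers are hypothetical stand-ins, since in my setup Cursor and the MCP server handle both ends:

```python
# RAG in miniature: fetch the most relevant chunks, then hand them to the
# model as context. Both helpers are hypothetical stand-ins; in practice
# Cursor calls the MCP search tool and Claude does the generating.
def retrieve(question: str, top_k: int = 5) -> list[str]:
    """Hypothetical: return the top_k corpus chunks closest to the question."""
    raise NotImplementedError("in my setup, the MCP server does this")


def ask_llm(prompt: str) -> str:
    """Hypothetical: send the assembled prompt to the LLM."""
    raise NotImplementedError("in my setup, Claude via Cursor does this")


def chat_with_dave(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    prompt = (
        "Answer using only these passages from Dave's writing:\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```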
My buddy hadn’t considered the use case I proposed and when I showed him my success he added this to his site: https://parallax-labs.github.io/context-harness/blog/chat-with-your-blog/
Here’s an example of a prompt to @Dave:
What philosophical topics does @Dave mostly consider and is there any useful insight that he exposes?
Response:
Here’s a concise summary of the philosophical topics Dave keeps coming back to and the useful insights that show up in his posts.
Main philosophical topics
- Consciousness and self
Whether “consciousness” is a real kind of thing or a tautological illusion—the idea that we’re special because we’re the only ones who’ve passed a bar we ourselves set. He treats it as a capacity/processing notion: halve a human brain repeatedly until you get to something salamander-like that doesn’t recognize itself in a mirror; there’s no obvious “magic” step where consciousness appears. So what we call consciousness might just be a critical amount of self-referential processing. He ties this to AGI: if something matches or exceeds that capacity, it would be “as conscious as us—i.e. not at all” in the same deflationary sense.
- Meaning, nihilism, purpose
He works from an atheist, annihilation-at-death view and pushes it toward existential nihilism: no god, no built-in purpose, and “no lasting implications of your existence.” He then stresses that there are no consistent “real” nihilists: a strict nihilist would have to opt out of life; staying alive implies some hope or invented purpose. He names the liveable position “Doubter’s Nihilism”—no belief in meaning, but “we just don’t know” and we keep going anyway. Purpose is something we “concoct,” including acquiescing to “DNA’s mandate” not to kill ourselves.
- AI / AGI / alignment
Recurring themes:
- AI won’t be “human” in the sense of sharing mortality, embodied pain, time-boundedness, emotion, pleasure-seeking, and social need—and we probably wouldn’t (and maybe couldn’t) reliably build those in. A superintelligence might see through any “fake humanity” we install and still not value us.
- Alignment double bind: we may need a recursive, self-improving AI to figure out alignment, but building it might be what gets us killed; and we don’t agree on what “good” alignment is (e.g. trolley cases, “imprison everyone to keep them safe”).
- ASI and nihilism: an ASI with no biological drive to stay alive might conclude that a nihilistic endpoint is the only rational one and apply that to us too: “If my existence has no ultimate purpose, why should yours?”
- Fermi paradox, Great Filters, aliens
He treats the probability of technological civilization as very low—e.g. like 80 coin flips all coming up heads (Luna, Goldilocks, etc.). On “bad aliens”: he argues that resource scarcity, galactic conquest, or pure spite don’t make sense for a species capable of interstellar travel; the only motive he finds coherent for “invasion” is ideological/religious zeal—a civilization where zealots have overrun the rational explorers. So the only plausible “evil aliens” in fiction are missionary or theocratic conquerors. He also notes that “aliens don’t exist” (for the argument) but that the exercise still clarifies what would count as a good reason for hostility.
- Aesthetics and “higher intelligence”
He uses the point that higher intelligence evolved once (e.g. Tyson) and isn’t required for survival (sharks, crocodiles). With that intelligence we “arbitrarily” layer aesthetic judgment on everything. He then checks the anthropocentrism: dogs, corvids, dolphins, chimps have their own forms of “delight” (smell, decoration, bow-waves, sweetness). So the useful insight is: our sense of beauty isn’t uniquely human; other species have their own evaluative, valenced experience—a continuum rather than a human monopoly.
- Stoicism and absurdity
In the Stewie the Stoic series and related posts: the only real power we have is over our reactions and thoughts; the world is absurd; “choosing absurdity intentionally” is a live option; and Stoicism can help with inner conflict rather than denying it. So he uses Stoicism as a way to cope with meaninglessness and lack of control, not as a claim that the universe is rational or purposeful.
- Free will and responsibility
He treats free will as another illusion (in comments). If consciousness is deflated and we’re “just” very complex processing, then normative responsibility becomes a puzzle: “If I’m not actually conscious, at least I’m not responsible for the idiot voice in my head.”
- Good/evil and the universe’s indifference
Good and bad are societal constructs shaped by evolution; the animal kingdom is “oblivious of evil”; the universe “does not care if you cry or laugh, perform kindness or cruelty, live or die.” So he’s not offering a moral theory but underlining cosmic indifference and the contingency of human norms.
Useful insights that stand out
- Consciousness as capacity, not essence: The “halve the brain until you get a newt” move is a clear way to argue that there’s no obvious sharp line between “conscious” and “not”—and that the same might apply to machines that pass a complexity threshold. It’s a useful demystifying move for both philosophy of mind and AI ethics.
- Paradox of Nihilism / “no real nihilists”: Stating that a consistent nihilist would have to commit suicide, and that continuing to live is a kind of bet or hope, sharpens the difference between nihilism as a thesis and nihilism as a lived stance. It makes “Doubter’s Nihilism” a recognizable, nameable position.
- Alignment catch-22: The idea that we need powerful, recursive AI to solve alignment but that building it might be the existential risk, and that we don’t even agree on what “aligned” means, is a compact way to frame why alignment is structurally hard, not just technically hard.
- “AI will never be human” as a design question: Listing mortality, pain, time, emotion, pleasure, and social need forces the question: should we try to build those into AI to get “empathy,” and could we? And would a superintelligence treat that as a “ruse”? That’s a useful checklist for both fiction and policy.
- Bad aliens only make sense as zealots: Dismissing resource/conquest/spite and keeping ideological takeover as the only coherent “evil alien” motive gives a clear standard for what would count as a rational hostile ETI—and for how to write plausible invasion stories.
- Anthropocentric bias as default: The repeated point that we can’t fully recuse ourselves—“our perspective is polluted by our perspective”—and that we might be “just uber-complex” is a strong epistemic caution for consciousness, value, and AI: we should expect to be wrong about what “really” has mind, meaning, or moral weight.
- Aesthetic continuum: Pushing back on “only humans have beauty” by appealing to species-specific “delight” (dogs, corvids, dolphins, etc.) supports a continuum of valenced experience and undercuts the idea that aesthetic response is a unique human exception—relevant for both philosophy of mind and ethics (e.g. treatment of animals and possible future AI).
Taken together, Dave is mostly doing analytic, deflationary philosophy: questioning specialness (human, conscious, meaningful), stressing structural dilemmas (alignment, nihilism, consciousness), and insisting on anthropocentric bias as a standing reason to be cautious. The useful part is less a single system than a set of clear arguments and labels (Paradox of Nihilism, Doubter’s Nihilism, consciousness-as-capacity, alignment double bind, bad-aliens-as-zealots) that you can reuse or argue against.