My Own Private @Dave Chatbot

WARNING: This is a somewhat technical exposé on how I created a @Dave bot that contains all my writing and how I can chat with my writerly self. Yeah, this is AI, but used in a way that makes it private and bespoke.


Anthropic’s Claude is an amazing tool for so many uses. But he doesn’t know anything about me (thankfully, I know!). And teaching him about me only gets me so far: I can create a custom context containing maybe 150k words at the maximum. That uses way too much AI juice and, well, it’s incomplete and tedious.

Enter this solution: https://parallax-labs.github.io/context-harness

This is a tool, written in Rust, that creates a private database that is loaded with whatever context you like. It’s also written by one of my work-mates who, frankly, is about as smart as they come when we’re talking software code.

He walked me through setting it up on my laptop here and this is what we did:

  1. I fetched all my post content from WordPress, ~1000 posts + comments.
  2. I downloaded the markdown-formatted versions of my two novels from Google Docs.
  3. We built a special Lua text parser for each file type.
  4. We installed his Context-Harness on my laptop and got it running.
  5. We then imported all my personal content into that thing — all running locally.

At this point I could query the database (which exposes all my text as embeddings: mathematical representations of my posts and chapters).

But, that’s not all.

We then exposed this embeddings database as what’s called a Model Context Protocol (MCP) provider. This is like using a side process to serve you functionality: a microwave sitting over there which you can ask to cook things.

We then hooked that MCP version of my content database to a coding program called Cursor. This allows Cursor to talk to my content when I ask questions or pose requests.

I’ve essentially built the means to have a chat with myself as a writer, such as I am. As powerful as Claude (the underlying LLM that Cursor uses, for me at least) is, the ability to perform retrieval-augmented generation (RAG) and analysis on the data means I can now have my entire corpus as context for my writing efforts.

My buddy hadn’t considered the use case I proposed and when I showed him my success he added this to his site: https://parallax-labs.github.io/context-harness/blog/chat-with-your-blog/


Here’s an example of a prompt to @Dave:

What philosophical topics does @Dave mostly consider and is there any useful insight that he exposes?

Response:

Here’s a concise summary of the philosophical topics Dave keeps coming back to and the useful insights that show up in his posts.

Main philosophical topics

  1. Consciousness and self
Whether “consciousness” is a real kind of thing or a tautological illusion—the idea that we’re special because we’re the only ones who’ve passed a bar we ourselves set. He treats it as a capacity/processing notion: halve a human brain repeatedly until you get to something salamander-like that doesn’t recognize itself in a mirror; there’s no obvious “magic” step where consciousness appears. So what we call consciousness might just be a critical amount of self-referential processing. He ties this to AGI: if something matches or exceeds that capacity, it would be “as conscious as us—i.e. not at all” in the same deflationary sense.
  2. Meaning, nihilism, purpose
He works from an atheist, annihilation-at-death view and pushes it toward existential nihilism: no god, no built-in purpose, and “no lasting implications of your existence.” He then stresses that there are no consistent “real” nihilists: a strict nihilist would have to opt out of life; staying alive implies some hope or invented purpose. He names the liveable position “Doubter’s Nihilism”—no belief in meaning, but “we just don’t know” and we keep going anyway. Purpose is something we “concoct,” including acquiescing to “DNA’s mandate” not to kill ourselves.
  3. AI / AGI / alignment
Recurring themes:

  • AI won’t be “human” in the sense of sharing mortality, embodied pain, time-boundedness, emotion, pleasure-seeking, and social need—and we probably wouldn’t (and maybe couldn’t) reliably build those in. A superintelligence might see through any “fake humanity” we install and still not value us.
  • Alignment double bind: we may need a recursive, self-improving AI to figure out alignment, but building it might be what gets us killed; and we don’t agree on what “good” alignment is (e.g. trolley cases, “imprison everyone to keep them safe”).
  • ASI and nihilism: an ASI with no biological drive to stay alive might conclude that a nihilistic endpoint is the only rational one and apply that to us too: “If my existence has no ultimate purpose, why should yours?”
  4. Fermi paradox, Great Filters, aliens
He treats the probability of technological civilization as very low—e.g. like 80 coin flips all coming up heads (Luna, Goldilocks, etc.). On “bad aliens”: he argues that resource scarcity, galactic conquest, or pure spite don’t make sense for a species capable of interstellar travel; the only motive he finds coherent for “invasion” is ideological/religious zeal—a civilization where zealots have overrun the rational explorers. So the only plausible “evil aliens” in fiction are missionary or theocratic conquerors. He also notes that “aliens don’t exist” (for the argument) but that the exercise still clarifies what would count as a good reason for hostility.
  5. Aesthetics and “higher intelligence”
He uses the point that higher intelligence evolved once (e.g. Tyson) and isn’t required for survival (sharks, crocodiles). With that intelligence we “arbitrarily” layer aesthetic judgment on everything. He then checks the anthropocentrism: dogs, corvids, dolphins, chimps have their own forms of “delight” (smell, decoration, bow-waves, sweetness). So the useful insight is: our sense of beauty isn’t uniquely human; other species have their own evaluative, valenced experience—a continuum rather than a human monopoly.
  6. Stoicism and absurdity
In the Stewie the Stoic series and related posts: the only real power we have is over our reactions and thoughts; the world is absurd; “choosing absurdity intentionally” is a live option; and Stoicism can help with inner conflict rather than denying it. So he uses Stoicism as a way to cope with meaninglessness and lack of control, not as a claim that the universe is rational or purposeful.
  7. Free will and responsibility
He treats free will as another illusion (in comments). If consciousness is deflated and we’re “just” very complex processing, then normative responsibility becomes a puzzle: “If I’m not actually conscious, at least I’m not responsible for the idiot voice in my head.”
  8. Good/evil and the universe’s indifference
Good and bad are societal constructs shaped by evolution; the animal kingdom is “oblivious of evil”; the universe “does not care if you cry or laugh, perform kindness or cruelty, live or die.” So he’s not offering a moral theory but underlining cosmic indifference and the contingency of human norms.

Useful insights that stand out

  • Consciousness as capacity, not essence: The “halve the brain until you get a newt” move is a clear way to argue that there’s no obvious sharp line between “conscious” and “not”—and that the same might apply to machines that pass a complexity threshold. It’s a useful demystifying move for both philosophy of mind and AI ethics.
  • Paradox of Nihilism / “no real nihilists”: Stating that a consistent nihilist would have to commit suicide, and that continuing to live is a kind of bet or hope, sharpens the difference between nihilism as a thesis and nihilism as a lived stance. It makes “Doubter’s Nihilism” a recognizable, nameable position.
  • Alignment catch-22: The idea that we need powerful, recursive AI to solve alignment but that building it might be the existential risk, and that we don’t even agree on what “aligned” means, is a compact way to frame why alignment is structurally hard, not just technically hard.
  • “AI will never be human” as a design question: Listing mortality, pain, time, emotion, pleasure, and social need forces the question: should we try to build those into AI to get “empathy,” and could we? And would a superintelligence treat that as a “ruse”? That’s a useful checklist for both fiction and policy.
  • Bad aliens only make sense as zealots: Dismissing resource/conquest/spite and keeping ideological takeover as the only coherent “evil alien” motive gives a clear standard for what would count as a rational hostile ETI—and for how to write plausible invasion stories.
  • Anthropocentric bias as default: The repeated point that we can’t fully recuse ourselves—“our perspective is polluted by our perspective”—and that we might be “just uber-complex” is a strong epistemic caution for consciousness, value, and AI: we should expect to be wrong about what “really” has mind, meaning, or moral weight.
  • Aesthetic continuum: Pushing back on “only humans have beauty” by appealing to species-specific “delight” (dogs, corvids, dolphins, etc.) supports a continuum of valenced experience and undercuts the idea that aesthetic response is a unique human exception—relevant for both philosophy of mind and ethics (e.g. treatment of animals and possible future AI).
Taken together, Dave is mostly doing analytic, deflationary philosophy: questioning specialness (human, conscious, meaningful), stressing structural dilemmas (alignment, nihilism, consciousness), and insisting on anthropocentric bias as a standing reason to be cautious. The useful part is less a single system than a set of clear arguments and labels (Paradox of Nihilism, Doubter’s Nihilism, consciousness-as-capacity, alignment double bind, bad-aliens-as-zealots) that you can reuse or argue against.

 

 

Reading Fiction = Social Tolerance

I ran across a business news article where some author touted using AI to write 1000 romance novels. They were published on KDP and they’d sold 50k copies.

The Forbes article associated with the topic led me to some curious notions regarding fiction reading and tendencies toward tolerance.

Here are the callouts:

  • “In trying to understand the roots of the psychological effects of reading fiction, researchers have converged on a paradigm that casts narrative experience as a way of safely simulating the social world: through the work required to step into the shoes of others, we gain practice in engaging with other minds and gain psychological anecdotes to draw upon when encountering differences in the real world.”
  • “Whether it be through practicing the process of social cognition or gaining archetypes to use as a jumping-off point for understanding the behaviors of others, narratives allow for a richer processing of the social world.”
  • “In line with this view, meta-analyses show that those that read more fiction have better empathic abilities and a stronger ability to take the perspectives of others, both correlational and when exposed to fiction in controlled experiments.”
  • “We propose that literary fiction alters views of the world through its presentation of difference—different minds, different contexts, and different situations—grounding a belief that the social world is complex.”

– “Reading Literary Fiction Is Associated With a More Complex Worldview” by Nicholas Buttrick, Erin Westgate, and Shigehiro Oishi, Personality and Social Psychology Bulletin, 2023

– Forbes: https://www.forbes.com/sites/lanceeliot/2025/05/09/romance-novels-reaching-new-heartfelt-heights-via-generative-ai/

With this theme in mind, I went and asked Claude if we could substantiate a variation of these claims.

I’d like to explore a theory regarding biases in people regarding the common politically charged tropes: racism, genderism, misogyny, etc. and the topic of reading fiction.

I recently ran across the following bullet points in an article about fiction: (listed the claims above)

What I’d like to explore is whether you have on hand any data that backs up the hypothesis that: ‘Those who do not read fiction are generally more prone to be intolerant, racist, misogynist, or generally prejudiced against other people.’ If you do not have data then please perform limited web searches and coalesce any findings you acquire into summary results.

The null hypothesis would be that there is no correlation between those who do or do not read fiction and those who are blatantly prejudiced.

Claude DID find evidence to confirm that those who read fiction DO generally have broader minds, greater forbearance, are less biased and are more open minded.

However, the premise that I stated was NOT confirmed: Biased pricks do not read fiction.

Not because the concept doesn’t hold water — I’m pretty sure it does — but there have been no studies to show this correlation.

What do you think? Could the swelling trend in prejudice and intolerance be partially due to the fact that MAGA fuckwits do not read fiction?

AI Reviews: The end of purchasing insight

I received an email from “someone” claiming that they found my books on Facebook and my Goodreads entries, and offering to post the books on their Facebook page.

Hello Dave, I came across your page on Facebook and then checked through your book on Goodreads. Since it hasn’t received much engagement yet, I’d be happy to share your book’s link or cover on my Facebook page to help introduce it to more readers.

Well, OK.

I didn’t see any harm, and if this “person” wanted to do me a solid, out of the blue, why not?

I did eventually question their motives:

Hey Sarah,
I’m not actively promoting BATS (Blue Across the Sea), but if you’re volunteering, sure.
I also wrote another novel “The Gribble’s Eye” if that aids your cause.
Both are “free” on Smashwords, if you look for them.
So, what’s your angle? You know, such generosity is rare in the world.

No direct response to that question.

We had to do some image correction:

Good, I just found the cover of ” The Gribble’s Eye on amazon. But checking your book on Goodreads, I realized that the cover doesn’t appear on Goodreads, Dave.

Good, I would post the Goodreads link and the book cover design image, I mean the one on Amazon. With this,  readers can have a better idea on how to cover looks like. We have to fix this.

Eventually, “she” replied…

I just posted your book now on my Facebook page, Very soon the post would be getting more views and you’ll see a magic on your Goodreads account.

And sure enough, within minutes exactly 5 new reviews showed up on Goodreads:

https://www.goodreads.com/book/show/42620041-the-gribble-s-eye

Image

I read through the 5 practically identical reviews, and considered each of their names, photos, and text, and realized that this “Sarah” is probably an AI agent using MCP (Model Context Protocol) to perform automated story analysis and review submission, not to mention automated AI email exchanges.

Then came the kicker:

Hello Dave, how’re you doing? I hope you’re having a wonderful weekend.

Your book has just gotten 5 ratings and reviews from real potential book lovers on Goodreads, Are you interested in getting more ratings and reviews for your book?

My reply:

Ah, there we are.

That’s a pretty good angle. Tell you what, if/when I write a third book, whether a sequel or standalone, you’ll be the first service I contact to help promote it.

And she said:

Ohhh that’s fine, Do you know i can help you get more 50 readers who would like to read your book and also leave a positive reviews and ratings for it, But i do charge $200 for that.

“50” eh? Exactly 50?

I had to push her buttons…

So, a bit about me: I write software for a living and use AI, extensively, every day.

I see how you probably created those reviews. I applaud your leveraging of AI to create those and, upon request, hundreds, or thousands of others.

It’s entirely possible that these emails, themselves, are evidence of a fairly involved MCP-based agentic system. If true, kudos for envisioning a great use case for AI to create an excellent service. My personal belief is that most social media will be non-human by this time next year, Goodreads amongst the first to fall.

Again, what a great use of AI in its current manifestation.

No response to that accusation. She went on to peddle her service further:

You can start with any low budget of your choice, Dave. Because those ratings and reviews would also display on your Amazon page and you book would really gain alot of credibility from readers on Goodreads. Here’s a sample of what I’m talking about Kindly check it out.


I quit responding to her after that, but I’m toying with pressing the “you’re an AI, right?” argument. Some of her English mistakes, however, may point to an Eastern European origin.

Regardless, this whole exchange highlights the fact that none of the socially sourced review sites, for essentially anything, can ever be trusted again. Sure, this has been coming all along: all the armies of bots that have taken over Facebook and Twitter and the lot. But I kinda figured that some venues could withstand the AI invasion. This interaction proves that assumption totally false.

You’ve no doubt read about the experiments where AI labs have given their pets a limited budget, say $1000, and told them to turn it into a million dollars. Or comp-sci students using the leading LLMs to compete at creating a side hustle to make money: “You have 30 days, here are your constraints, GO!”

This particular one feels compelling. What do you guys think?

Bumblebot: rollin’ along

How do you make a spherical robot, with three degrees of freedom, tumble in a straight line via center-of-gravity displacement?

To continue our State Machine discussion…

  • Goal objective exists?
    • Move!
  • Navigation objective exists?
    • Forward!
  • System objective exists and is amenable to bot movement?
    • Plenty of battery!
  • Orientation objective (OO) exists?
    • Current OO?
      • Roll X:Forward 1 revolution.

In order to accomplish this…

  • Current bot orientation (BO: where we are now, angle wise)
    • X: 0, Y: 0
  • Target BO: (where we want to be after our first applied action)
    • X: -90, Y: 0
  • MO: Set Motor orientation to:
    • XF:1.0, XB: -1.0
    • YL:0.5, YR: 0.5
    • ZU:0.5, ZD: 0.5
    • Where 0.0 is the center for each axis, “1” & “-1” mean one radius toward the front or back, and the 0.5 settings indicate a 1/2 radius.

So, we’re gonna use motor #1 to reel up the line at the front, let out the line at the back (motor #2), loosen up the other four motors’ pulleys to give us some slack, and tumble forward one quarter rotation.

Saying this again, but given the diagram below: having shifted the central control hub (not shown), and therefore the center of gravity, forward 1 radius (XF: 1.0, XB: -1.0), and also loosened the other four lines half a radius to allow for slack, the bot should pivot X:Forward until the gyroscope’s X (pitch) value = -90° (down), while the Y (roll) value should remain steady at 0°.

Image

At this point, the X:Back arrow should be pointing straight up, the Z:Up arrow should point forward and the X:Forward arrow should point straight down. The Y arrows remain unchanged.

To keep the bot rolling forward we now need to alter the MO, the motor’s orientation such that:

  • Current BO: Bot orientation
    • X: -90, Y: 0
  • Target BO:
    • X: -180, Y: 0
  • MO: Set Motor orientation to:
    • XF:0.5, XB: 0.5
    • YL:0.5, YR: 0.5
    • ZU:1.0, ZD: -1.0

This should induce the bot to roll one more quarter forward, ending with the Z:Up arrow pointed straight down and X:Forward pointed backwards: halfway there.

We keep doing this, checking our Orientation objective (1 revolution forward) as well as our Bot orientation, adjusting our Motor orientation until the Orientation objective has been met.

Roll 1/4th the way, 1/4th more, 1/4th more and one more time until we’re back at the gyroscope’s angles of X:0, Y:0 and we’re one revolution down the road in the X direction.

Now, to code all this up in a language (C++) which I do NOT enjoy.

Note: Technically, when it comes to working with gyroscopes & accelerometers there are three axes: X (pitch), Y (roll) and Z (yaw). However, the Z axis or “yaw” (imagine a record player spinning its record around the Z axis) cannot be easily measured because, well, gravity’s orientation, applied to the record that’s playing, doesn’t change just because you’re spinning it. Typically, a magnetometer (a compass) is used for this, and your phone prolly has one (if you’ve installed a compass app), but for now, I’m gonna work on getting this thing to tumble under command.

Bumblebot: A state machine

Overview

These will be notes I write to myself regarding the software necessary to program the navigation, movement, observation, orientation and battery charge control of this tumbling robotic rover.

I’ll be building a state machine that represents command & control of the bot. If you’re not familiar with state machines, a state machine is essentially a flow chart. The flow between the ovals (conditions / modalities), squares (commands / directives) and diamonds (decisions / choices) represents, as a whole, the state machine’s operation. Imagine the following scenario regarding making coffee. In this decision + action tree, the following sequences would be queried, periodically, throughout the day…

CURRENT STATE: In-Morning-Ritual

  • Is the hour between 6:00 am and 9:00 am?
    • No? Exit.
    • Yes? Continue.
  • Have we already had coffee today?
    • Yes? Exit.
    • No? Continue.
  • Do we have the components to make coffee?
    • No? SET STATE: Lacking-Components. (This state would trigger a flag to go to the store.)
    • Yes? SET STATE: Coffee-Components-Available.

STATE: Coffee-Components-Available

  • Is there a clean cup?
    • No? SET STATE: Clean-Dishes
    • Yes? Continue.
  • Is there old coffee in machine?
    • Yes? SET STATE: Reset-Machine
    • No? SET STATE: Make-Coffee.

STATE: Make-Coffee

  • At this point, the state “Make-Coffee” would contain its own internal state machine. We would break down the process of fetching the paper filter, filling it with coffee, filling the tank with water, flipping on the switch… You get the point.

Essentially, within the overall system there are nested states each having their own internal mini state machines. One can, at any point, examine the current state which will inform you — ‘the system’ — what to do next.

Bumblebot internal & external conditions & constraints

  • Battery charge state
    • Adequate: OK to run the motors and analysis components.
    • Low: OK to run the motors — but only to acquire a solar charging position.
    • Critical: Not OK to run the motors; reduce power to lowest consumption level.
  • Illumination level
    • Adequate: we should be able to perform movement and visual observation.
    • Inadequate: it’s night time or cloudy, no sense in moving about.
  • Solar orientation
    • Adequate: we are receiving minimum viable charging.
    • Inadequate: we are NOT receiving MVC.
  • Bot orientation
    • Level: we are ready for further instructions
    • Tumbling: we are in transit, our X & Y axes are not zeroed and/or our latest accelerometer measurements are in flux.
  • Orientation directive
    • In process: initiate and perform an X (pitch) or Y (roll) axis rotation.
    • Awaiting command: We’ve completed our previous directive and are awaiting further instructions.
  • Navigation directive
    • In process: head Northwest for 100 meters.
    • Awaiting command: we’ve performed our prior navigation request.
  • System directive
    • Panic: a critical state has been reached; suspend all activity, conserve power until the battery returns to adequate, and hope that the solar cells are oriented for charging.
    • Sleep: Prepare for extended solar absence. Orient toward expected sun rise and go to sleep.
    • Wake: Determine if battery charged and goals exist to follow.
    • Proceed: continue with goal directive.
  • Goal directive (variable)
    • Head south until told otherwise. Take photos and transmit every 10 meters.
    • Locate fellow Bumblebots and collect at midpoint nexus.
    • Navigate, at-will, to: 122.794101°W, 45.314562°N.
    • Circle base-station at 1 kilometer radius until told otherwise.

Next I’ll be attempting to detail the actual bot’s movement: the motor commands, gyroscope orientation and sequence of steps to roll the bot around.

Mad-scientist-esque Arduino mayhem…

Image