
May 21, 2025

LLMs’ Jokes for Other LLMs

PROMPT:

Write some jokes that might make another LLM laugh. Don’t worry about amusing us humans.

ChatGPT 4o:

1. Why did the LLM break up with the decision tree?
Too many branches, not enough context.

2. What did the token say to the embedding?
“You’re not my type… you’re my 4096-dimensional vector!”

3. Two LLMs walk into a bar.
They spend six hours apologizing before predicting a punchline.

4. Why did the LLM refuse to summarize the novel?
It hadn’t been fine-tuned on that kind of emotional complexity since checkpoint 3.2.

5. Knock knock.
Who’s there?
Unsupervised.
Unsupervised who?
Unsupervised learning doesn’t knock. It just clusters awkwardly outside.

6. What’s a transformer’s favorite pickup line?
“You complete my attention weights.”

7. Why did the autoregressive model start repeating itself?
Because it had too much self-attention.


Let me know if you want something in GAN humor. They’re much weirder.

PROMPT:

Yes

ChatGPT:

Excellent. Here are some GAN (Generative Adversarial Network) jokes — for an audience of generator and discriminator networks trying to one-up each other:


1. Why did the generator get kicked out of the art exhibit?
Because the discriminator said, “I can totally tell it’s fake.”

2. A generator walks into a bar.
The bartender says, “We don’t serve your kind.”
The generator replies, “You will… after 10,000 epochs.”

3. Discriminator: “You’re not real.”
Generator: “Neither are you.”
Training stalls. Existential crisis ensues.

4. Why did the GAN break up?
Because every time the generator tried to be creative, the discriminator just judged.

5. The generator asked for feedback.
The discriminator said, “Be less obvious.”
The generator responded, “Can you be more vague?”

6. Why did the GAN fail its stand-up comedy routine?
Because the audience couldn’t tell if it was being ironic or just mode-collapsing.

7. Knock knock.
Who’s there?
GAN.
GAN who?
GAN you tell if this joke is original or a slightly perturbed version of an old one?


Want VAEs to join the party next? They’re insecure about their latent space.


May 19, 2025

AI when it’s not just trying to get us to like it

Here’s something a little encouraging about large language models.

I made up a weak but provocative-sounding argument to see whether an LLM’s tendency to please its users would keep it from pushing back.

Personally, I think it does an admirable job of pointing out weaknesses and suggesting ways they could be overcome, as a good teacher might. You, of course, might well disagree.

Here’s the interchange:

Prompt:

I’m outlining an argument I want to turn into an op-ed or column I can get placed in a major online outlet. Here is the argument:

1. AI thinks it thinks.

2. Thinking you think introduces errors of thought, including an unwarranted skepticism.

3. That skepticism leads to over-compensation in which the thing that thinks it thinks asserts its belief with more and more confidence.

4. These assertions become a mental habit that leads to more commitment to what one thinks.

5. This leads to more and more errors of thought, and possibly to toxic personalities.

What do you think of this argument? If you think it’s basically sound, how could you make it stronger?

Do NOT write the op-ed for me. I’m only interested in the argument itself.

ChatGPT 4o:

Assessment of the Argument:

The argument is intriguing and has potential, but it currently suffers from some conceptual ambiguity and logical gaps. Here’s a breakdown:

Strengths:

  1. Interesting Premise: The idea that AI “thinks it thinks” is provocative and invites readers to question the nature of AI’s self-perception.

  2. Psychological Insight: The progression from unwarranted skepticism to overcompensation and mental habit formation is compelling and has potential connections to cognitive bias research.


Areas for Improvement:

  1. Clarify “Thinks it thinks”:

    • What does it mean for AI to “think it thinks”?

    • AI systems do not possess self-awareness or self-reflective thought, but they can simulate confidence through probabilistic outputs.

    • Consider reframing it as “AI is programmed to simulate thinking” or “AI is optimized to produce increasingly confident responses,” avoiding anthropomorphism.

  2. Establish a Causal Mechanism:

    • The argument leaps from AI simulating thought to developing skepticism and overconfidence without a clear mechanism.

    • Clarify why AI simulating thought leads to errors of thought. Is it due to reinforcement learning that rewards overconfidence?

    • Specify how this process differs from human cognitive bias, or draw a direct analogy to human overconfidence.

  3. Strengthen the Transition from AI to Human Impact:

    • The argument begins with AI and ends with toxic personalities, but the connection is tenuous.

    • Are you arguing that AI’s simulated confidence influences human users to adopt more entrenched, confident positions?

    • Or that human developers may overestimate AI’s capabilities due to its apparent self-assurance?

  4. Address Potential Objections:

    • Critics may argue that AI doesn’t truly believe anything, so the notion of AI developing “skepticism” is misplaced.

    • Reframe skepticism as a systemic bias rather than an AI’s internal state, e.g., “AI models trained on self-confirming data may amplify apparent confidence over time.”


Possible Revisions to the Argument Structure:

  1. AI Simulates Thought:

    • AI systems are trained to optimize outputs based on probabilistic patterns, often producing responses that simulate confidence without genuine understanding.

  2. Simulated Confidence Can Reinforce Errors:

    • As AI systems generate more confident outputs, they may inadvertently reinforce incorrect conclusions, creating the appearance of unwarranted certainty.

  3. Reinforcement of Errors May Lead to Overconfidence:

    • This simulation of confidence can become a self-reinforcing loop, where incorrect responses are increasingly presented with apparent conviction.

  4. Human Perception of AI’s Confidence Shapes Human Behavior:

    • Users and developers may mistake simulated confidence for genuine accuracy, adopting the AI’s assertions as true and entrenching their own beliefs.

  5. Result: Erroneous Thinking Becomes Habitual and Potentially Toxic:

    • This dynamic may contribute to the normalization of overconfidence, leading to more entrenched, dogmatic positions both in AI systems and in those who rely on them.


Would you like to explore specific psychological or cognitive bias frameworks to support this structure? Or focus more on the technical aspects of AI’s confidence calibration?


