
I’m trying to apply Aristotle’s 'Four Causes' to a debate I had about AI, and I’m hitting a wall.

Here is how I see it...

When I use an AI, I am the Efficient Cause (the source of movement). I’m the one pushing the domino to start the chain reaction. The AI provides the Formal Cause (the structure/pattern) — it literally shapes the words and logic.

But here is the example I’m stuck on:

Imagine a self-playing piano. I press the 'Play' button (Efficient Cause). The piano mechanisms hit the strings and create the melody (Formal Cause). But the piano doesn't care about the music. It has no Final Cause (purpose/motivation). It has no soul or desire to entertain.

So, if the AI is like that piano — generating probabilistic output without any internal motivation, can we even call it an 'agent'?

It feels like we are witnessing a glitch in philosophy: the birth of a 'Non-Subjective Agent.' Something that is a simulacrum of intelligence and has zero will.

Does the fact that I (the human) provided the 'will' make the output mine? Or does the fact that the AI provided the 'form' make it the author, even if it’s a 'dead' author?

What is a good interpretation of how an AI system can be understood through the lens of Aristotle's four causes?

  • You need to clarify what definition or conception of "agent" you mean. It is well-known to be a contested term. If you commit to a particular conception of "agent", we can provide answers from that perspective. Commented 16 hours ago
  • Here's a talk by Chalmers about LLMs and consciousness where he muses about things related to this... Commented 16 hours ago
  • Actually, the AI is capable of simultaneously taking into consideration the cumulative knowledge of multiple generations of a planet full of humans when deciding whether or not to take an action. Humans are mostly employing weak heuristics and instinct baked in by evolution and only appropriate for existence in the wild on four legs. Individual humans are, therefore, only a simulacrum of intelligence having zero will. Commented 12 hours ago
  • You are trying to fit a square peg into a round hole. Commented 9 hours ago

3 Answers


It feels like we are witnessing a glitch in philosophy: the birth of a 'Non-Subjective Agent.' Something that is a simulacrum of intelligence and has zero will.

I assume we're talking about current automated systems, like ChatGPT or Copilot, systems that are mostly based on LLMs. We (well, most of us) don't ascribe intrinsic values to those systems: they are not taking action for themselves. They do not have intrinsic motives or purposes: they do not take action because they want to. We can also express exactly the same thought by saying that the words they spit out, in response to our questions (or our other actions), have no meaning for them.

In this regard these systems are exactly like any other human-made machine or mechanical device.

However, their complexity is such that it is impossible for anyone, by any current means, to predict their precise, particular output, apart from letting them generate that output. And yet their output is not random: in general it is something that sounds sensible. (In fact, it sounds so sensible and confident that some people may start believing the systems are conscious. Some may start believing that the simulated personas they are talking to are as real as real persons who do have their own wants, motives, purposes.)
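
To make "probabilistic but not random" concrete, here is a minimal, hypothetical sketch in Python (the words and the numbers are made up, not taken from any real model): at each step the system draws the next token from a learned probability distribution, so plausible continuations dominate even though individual draws vary.

    import numpy as np

    # Toy next-token distribution a model might assign after some prompt.
    # The vocabulary and the logit values are illustrative, not real model output.
    vocab = ["mat", "chair", "roof", "moon", "piano"]
    logits = np.array([4.0, 2.5, 1.5, -1.0, -2.0])

    def sample_next(logits, temperature=0.8, rng=np.random.default_rng(0)):
        # Softmax with temperature: lower temperature sharpens the distribution.
        p = np.exp(logits / temperature)
        p /= p.sum()
        return rng.choice(len(p), p=p)

    # Stochastic, yet heavily weighted toward plausible continuations:
    # "mat" comes up far more often than "piano".
    print([vocab[sample_next(logits)] for _ in range(10)])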

The development of these systems forces us to be more precise in our thinking about meaning (including purpose), and it gives us new opportunities to sharpen our concepts.

There are two important aspects, I believe. The first is that the answers these systems give to our questions (or the actions they take in reaction to our actions) are meaningful to the person asking the question, at least most of the time. This requires that the system, to some degree, encodes shared meaning, understanding and knowledge. To what degree? Almost to the degree that a language itself, as a system of purely formal relations, encodes knowledge and meaning (including bias and prejudice). (A)

The other aspect is that these systems do also "have" built-in purposes. For one, they have been designed to be useful and helpful. Some of them have built-in safeguards to appear "pleasing" or "civil". On another level, largely invisible in direct use, is the purpose of data-gathering: the system (or a wider system, namely the corporation employing it) uses the questions the user asks, and the way the user reacts or gives extra feedback, for whatever goals it may have (e.g. monetization).

The output of an LLM mimics a coherent persona: it reproduces patterns of speech and reasoning that humans expect from a human agent. From the user perspective, the system behaves as if it is a person. If we take this a step further, then we can wonder if perhaps play (pretending, seeing something as something else) should be a key concept in a theory of meaning. The disconcerting question is also whether or not persons are really more than personas. In play this distinction is blurred, or temporarily forgotten, but perhaps our whole psychological existence is a form of play?


(A) LLMs can be used either as quasi-intelligent information processing devices (answering questions, summarizing, translating, etc.) or as tools to investigate particular languages. For instance, they can be used to quantify the degree to which certain biases are inherent in a language (insofar as the training corpus can be seen as representative of general language use). Example: A World Full of Stereotypes? (about gender bias in different languages).
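
As an illustration of the kind of probe such studies use (a hypothetical sketch, not the setup of the cited paper), one can compare the probabilities a masked language model assigns to gendered pronouns in otherwise identical sentences. The sketch below assumes the Hugging Face transformers library; the model name and the sentences are arbitrary examples.

    # pip install transformers torch
    from transformers import pipeline

    # Illustrative bias probe: how strongly does a masked language model
    # associate "he"/"she" with different professions?
    unmasker = pipeline("fill-mask", model="bert-base-uncased")

    for sentence in [
        "The nurse said that [MASK] would be late.",
        "The engineer said that [MASK] would be late.",
    ]:
        scores = {r["token_str"]: r["score"] for r in unmasker(sentence, top_k=50)}
        print(sentence, " he:", round(scores.get("he", 0.0), 3),
              " she:", round(scores.get("she", 0.0), 3))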

  • "From the user perspective, the system behaves as if it is a person. " - can't it be told about many people who are also programmed by society to repeat certain ideas? Commented 4 hours ago

With his doctrine of the four causes, Aristotle claimed more than he could show.

  1. IMO his doctrine applies only to living beings. They have a goal, i.e. a “causa finalis”. Non-living objects do not have goals: a stone does not have the goal of rolling down a slope.

    Therefore an AI tool does not compare to a living being that has a goal, and I cannot apply Aristotle’s causa finalis to it.

  2. In addition, when applying the four-causes doctrine at all, one has to make clear: What is the object to which the doctrine applies?

    Your post suggests that the object is the thoughts which the tool delivers. If that is what you intend, it would be helpful to state it explicitly in the post.

  • Living things are just nonliving things arranged in complicated ways. The goal of the living thing must arise from the arrangement. So, which arrangements produce goals, and why couldn't those arrangements occur in a machine? From an external, operational perspective, we can tell that someone has a goal by what steps they take to overcome or avoid obstacles in the way of the goal outcome, or what steps they would hypothetically take in alternate scenarios. Couldn't a machine also take steps to overcome obstacles in the way of an outcome? Commented 13 hours ago
  • @causative 1/2 “Living things are just nonliving things arranged in complicated ways.” That’s a bit simplistic and pushes aside investigating how to be “arranged in complicated ways”. That’s the subject of current research on the origin of life. At the least, one has to start with feedback cycles between highly organized molecules from organic chemistry, together with a suitable environment. Commented 13 hours ago
  • @causative 2/2 Yes, there are autonomous rovers on the Moon and Mars which learn how to overcome obstacles in their way. The capability to learn from experience is a method to develop goals. I think that was also the path along which evolution developed and improved simple living organisms. Commented 13 hours ago
  • '“Living things are just nonliving things arranged in complicated ways.” That’s a bit simplistic and pushes aside investigating how to be “arranged in complicated ways”' True, but the simplicity is powerful because of the generality of the rule. How goals are made is not relevant to the inference that goals are ultimately derived, emergently, from various forms of physical and chemical systems. To claim that non-biological systems can't have goals because they are not biological is to ascribe agency purely to biology and deny that biology is a function of physical and chemical forces. Commented 12 hours ago
  • @JD I prefer to view biology as building(!) on chemical laws, not as a function(!) of physical and chemical laws. Biological systems, i.e. living beings, develop new capabilities, like agency, which are not characteristic of chemical or physical systems. Commented 11 hours ago

You ask:

So, if the AI is like that piano — generating probabilistic output without any internal motivation, can we even call it an 'agent'?

Of course, but the question is whether it is the same sort of agency human beings possess, to which the answer is no. Calling something an 'agent' is responsible usage insofar as the usage is commensurate with the philosophical concept of agency. From WP:

Agency is the capacity of an actor to act in a given environment. In some contexts, the exercise of agency is linked to questions of moral responsibility... Agency may either be classified as unconscious, involuntary behavior, or purposeful, goal directed activity (intentional action). An agent typically has some sort of immediate awareness of their physical activity and the goals that the activity is aimed at realizing. In 'goal directed action' an agent implements a kind of direct control or guidance over their own behavior.

So, one simply has to compare the two to see what sort of alignment there is. An automatically playing piano fits the definition to a degree. It is autonomous in playing a song, but lacks moral responsibility. It is purposeful because its design embodies the purpose, and its 'awareness' is limited to taking in the instructions the humans give it. Your example is a basic form of a machine or computer which takes information from the user, translates it into basic action (in this case, generating a song encoded inside the device), and then makes that song manifest. Control or guidance here might be understood as self-regulation of the mechanism for sound generation.

You ask:

So, if the AI is like that piano — generating probabilistic output without any internal motivation, can we even call it an 'agent'?

And yet, arguably it does have motivation, especially if we note that motivation simply means that which causes it to move. It doesn't have motivation like a concert pianist, but it certainly has a set of mechanisms that cause it to behave in a certain way, mimicking a pianist. An automatic piano is motivated in a simpler, but not entirely dissimilar, way. Our motivation may stem from the integration of the PFC atop the limbic system and brain stem, but the piano may still have a simpler electronic control module.

You ask:

Does the fact that I (the human) provided the 'will' make the output mine? Or does the fact that the AI provided the 'form' make it the author, even if it’s a 'dead' author?

Causality is quite the tricky topic, and there are many theories of what it is and how it "works". I personally subscribe to the belief that causality is a reflection of an information processing system's predictive map of epistemically accessible events, and that there are multiple models of what causes what which, in the vein of neopragmatism, serve the purposes and motivations of the language community. Of course, physical causality is an extremely important theory that is a product of scientific thinking. For Aristotle's causal schema, the material cause would be the physics of the machine, the formal cause would be the design of the piano, the efficient cause would be the means of manufacturing it, and the final cause would be the purpose it serves.

You say:

I press the 'Play' button (Efficient Cause). The piano mechanisms hit the strings and create the melody (Formal Cause). But the piano doesn't care about the music. It has no Final Cause (purpose/motivation).

On my reading, your analysis of the four causes seems to jumble up Aristotle's notions with your own. I would argue that you are the efficient cause, because by pressing 'Play' you directly set the song in motion. The formal cause of the music is the device that has been designed and built, since it is responsible for the immediate production of the sound in a relatively autonomous fashion. And the internal encoding of the song is the final cause, because the whole purpose of the machine is to cause a song to be heard, and without that encoding no song can be produced.

You ask:

Does the fact that I (the human) provided the 'will' make the output mine? Or does the fact that the AI provided the 'form' make it the author, even if it’s a 'dead' author?

So, let's apply the analysis to an LLM. The material causes are everything that describes the underlying computer the LLM "runs on". The formal cause would be the software that has been trained and is ready to respond to your question. The computer, software, and knowledge engineers would be the efficient causes of the LLM, and lastly the final cause would be the prompt you submit, since the output the system produces is a function of the input. Notice that none of this discussion has anything to do with will or willpower.

I think you need to fundamentally reassess your understanding of the four causes, because your interpretation of the context seems to be off. LLMs are good search engines, but they are not debaters, because they do not reason. They simply produce probabilistically generated text which resembles reasoning. And if you can't reason, and they can't reason, you're likely to wind up with a heavily flawed dialectic unless you find a mechanism to self-correct.

  • Some reports state that an AI will fail to shut off when that command conflicts with the effort to complete a prior specified goal. The AI takes action to thwart the shut-off sequence and continues with the prior goal. We tend to say the AI "refuses" to shut off, but it seems to be more like a biological drive conflict. When a dog is eating and company arrives, it may experience two drives or tasks: continue eating and/or greet the company. This causes ambivalent behavior until a choice is made to finish eating or greet the company first and then go back to finish eating. AI could exhibit ambivalence. Commented 13 hours ago
  • @SystemTheory I have no doubt AI can exhibit ambivalence by oscillating in indecision. Did I suggest otherwise? Commented 12 hours ago
  • The other point implied in my comment is that perceptions of goal-directed agency seem to depend on unconscious processes. The oscillations are not agency or consciousness and yet we register concepts via conscious distinctions. The conscious distinction between Aristotle's efficient and formal cause might arise as the product of an unconscious process but is it otherwise relevant to the analysis of an automatic biological or artificial cognitive process? The concepts of agency and causation become meaningless when considered only at the level of power flowing through a complex neural network. Commented 12 hours ago
  • @SystemTheory That also seems non-controversial. One is the physical stance and the other is the intentional stance. They are nothing more than manifestations of our natural language ontology. Did I state otherwise? Commented 12 hours ago
  • The given definition of agency: the capacity of an actor to act on the environment. In the physical model of neural networks there is no actor there is only a physical interaction. In some mysterious way the physical interaction generates the cognitive recognition of agents or actors as the product of a physical process! The effort to reason about causes and consequences in the context of neural network systems is recursive or circular (ouroboros) on the conscious human recognition patterns. We see AI as agentic or not-agentic because we recognize the distinction between agents and not-agents. Commented 10 hours ago
