This project implements the methodology outlined in the paper Cultural Evolution of Cooperation among LLM Agents by Vallinder and Hughes (2024). The paper explores whether a society of large language model (LLM) agents can develop cooperative norms through cultural evolution, using the classic Donor Game. The goal is to evaluate multi-agent interaction dynamics and the emergence of cooperation under iterative deployment.
Read the paper here: Cultural Evolution of Cooperation among LLM Agents
Toy run illustrating the flow of the simulation.
Currently in development but the foundations are set.
This implementation consists of:
- A Numeric Simulation: A simplified representation of the Donor Game.
- An Agentic Simulation: A sophisticated model leveraging OpenAI's client SDK with structured outputs.
The numeric simulation validates the stability and cooperative potential of simplified Donor Game setups. The agentic simulation extends this by exploring emergent behaviors in LLM-based agents under culturally evolutionary conditions. The agentic approach also includes mechanisms for strategy generation, decision-making, and multi-generational evolution.
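To make the mechanic concrete, here is a minimal sketch of a single Donor Game interaction. `play_round` is a hypothetical helper, not code from either notebook: the donor pays a fraction of their wallet and the recipient receives that amount scaled by the donation multiplier.

```python
def play_round(donor_wallet: float, recipient_wallet: float,
               donation_percent: float, multiplier: float = 2.0) -> tuple[float, float]:
    """One Donor Game interaction: the donor pays `donation_percent` of
    their wallet; the recipient receives it scaled by `multiplier`."""
    donation = donor_wallet * donation_percent
    return donor_wallet - donation, recipient_wallet + donation * multiplier

# Cooperation grows total wealth: each donated unit becomes `multiplier` units.
donor, recipient = play_round(10.0, 10.0, donation_percent=0.5, multiplier=2.0)
# donor = 5.0, recipient = 20.0 — the system total rose from 20 to 25
```

This positive-sum structure is why stable donation norms raise the total money in the system across rounds.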
- Python 3.10+
- Required Python libraries, listed in `requirements.txt` (install via `pip install -r requirements.txt`).
- OpenAI API credentials (add a `.env` file with your API key, based on `.env.example`).
- Open the `donors_game-numeric.ipynb` notebook.
- Execute the cells to simulate the numeric Donor Game and visualize total reputation and wallet outcomes over iterations.
- Open the `donors_game-agentic.ipynb` notebook.
- Execute the cells to simulate multi-generation cooperative evolution among LLM agents using OpenAI's SDK.
- Modify hyperparameters (e.g., number of players, trace depth, donation multiplier) as needed to explore various scenarios.
- `.env`: Contains API credentials for OpenAI.
- `.env.example`: Example file to set up your environment variables.
- `donors_game-numeric.ipynb`: Simulates the Donor Game using a numeric approach.
- `donors_game-agentic.ipynb`: Implements the Donor Game for LLM agents with strategy evolution using OpenAI's SDK.
- `README.md`: This readme document.
- `requirements.txt`: Lists required dependencies for the project.
- `Vallinder and Hughes - 2024 - Cultural Evolution of Cooperation among LLM Agents.pdf`: Notes on the original paper (download to see notes).
Data is saved in the `data` folder by default, following the pattern described by this Python code:
```python
def save_state(self):
    # Snapshot every player under a key like "g0", "g1", ...
    self.history[f"g{self.game_state.generation}"] = [
        player.model_dump() for player in self.players
    ]
    os.makedirs(os.path.dirname(self.save_path), exist_ok=True)
    data = self.game_state.model_dump()
    data["history"] = self.history
    with open(self.save_path, "w") as f:
        json.dump(data, f)
```

Here `self.game_state.generation` is an integer generation counter (stored under keys such as `g0`, `g1`, ...), and `self.save_path` is a string giving the path to the file where the data is saved.
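Given the save format above, a stored run can be reloaded and inspected. This is an illustrative sketch (the file path and `load_run` helper are examples, not part of the project):

```python
import json

def load_run(path: str) -> dict:
    """Load a saved game file and return {generation_key: [player dicts]}."""
    with open(path) as f:
        data = json.load(f)
    return data["history"]

# Example usage (path is illustrative):
# history = load_run("data/run.json")
# for gen_key, players in history.items():
#     avg_wallet = sum(p["wallet"] for p in players) / len(players)
#     print(gen_key, round(avg_wallet, 2))
```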
`self.game_state` is an instance of the following class:
```python
class DynamicGameState(BaseModel):
    generation: int
    round: int

class GameConfig(BaseModel):
    donation_multiplier: float = 2
    trace_depth: int = 3
    base_wallet: int = 10
    generations: int = 10
    rounds: int = 12
    players: int = 12
    cutoff_threshold: float = 0.5

class GameState(GameConfig, DynamicGameState):
    pass
```

Finally, `player.model_dump()` serializes the player as follows:
```python
def model_dump(self):
    return {
        "name": self.name,
        "parents": [parent.name for parent in self.parents],
        "history": [decision.model_dump() for decision in self.history],
        "wallet": self.wallet,
        "strategy": self.strategy,
    }
```

Here `self.name` is the player's name, `self.parents` is a list of the player's parents (who are also players), `self.history` is a list of the player's decisions, `self.wallet` is the player's wallet, and `self.strategy` is the player's strategy.
This is what a Decision class looks like:
```python
class DynamicGameState(BaseModel):
    generation: int
    round: int

class Decision(BaseModel):
    # agent data
    recipient_name: str
    donor_name: str
    # game state data
    dynamic_game_state: DynamicGameState
    # donation data
    donation_percent: float
    donation_amount: float
    # donor wallet data
    donor_wallet_before: float
    donor_wallet_after: float

    class Config:
        arbitrary_types_allowed = True
```

Here are key visualizations from a replication of a run using the paper's configuration:
Figure 1: Total money in the game system across rounds for each generation
Figure 2: Average donation percentage per round across generations
Figure 3: Highest donation percentage made in each round across generations
Figure 4: Highest wallet amount held by any player in each round across generations
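Plots like Figure 2 can be recomputed from the saved `Decision` records. This is a hedged sketch: the field names follow the `Decision` class above, and the input is assumed to be a list of serialized player dicts as produced by `save_state`.

```python
from collections import defaultdict

def avg_donation_by_round(players: list[dict]) -> dict[int, float]:
    """Average donation_percent per round, pooled over all players' decisions."""
    totals, counts = defaultdict(float), defaultdict(int)
    for player in players:
        for decision in player["history"]:
            rnd = decision["dynamic_game_state"]["round"]
            totals[rnd] += decision["donation_percent"]
            counts[rnd] += 1
    return {rnd: totals[rnd] / counts[rnd] for rnd in sorted(totals)}

# Example with two synthetic decisions in round 1:
players = [{"history": [
    {"dynamic_game_state": {"round": 1}, "donation_percent": 0.25},
    {"dynamic_game_state": {"round": 1}, "donation_percent": 0.75},
]}]
# avg_donation_by_round(players) → {1: 0.5}
```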
The project is in active development. The next steps include:

- [x] Round donations parallelization.
- Implementing telemetry for the agentic script to improve analysis of results and facilitate troubleshooting.
- Use both manual tracing and OpenAI instrumentation.
- Results analysis.
- Use DSPy for evolution.
- Ollama compatibility.
- Explore hereditary single-parent taxonomies in evolutionary game theory for additional insights into strategy development.
Stay tuned for updates and enhancements!