Effect of Private Deliberation: Deception of Large Language Models in Game Play.

Clicks: 54
ID: 279336
2024
Article Quality & Performance Metrics
Overall Quality Improving Quality
0.0 /100
Combines engagement data with AI-assessed academic quality
AI Quality Assessment
Not analyzed
Abstract
Integrating large language model (LLM) agents within game theory demonstrates their ability to replicate human-like behaviors through strategic decision making. In this paper, we introduce an augmented LLM agent, called the private agent, which engages in private deliberation and employs deception in repeated games. Utilizing the partially observable stochastic game (POSG) framework and incorporating in-context learning (ICL) and chain-of-thought (CoT) prompting, we investigated the private agent's proficiency in both competitive and cooperative scenarios. Our empirical analysis demonstrated that the private agent consistently achieved higher long-term payoffs than its baseline counterpart and performed similarly or better in various game settings. However, we also found inherent deficiencies of LLMs in certain algorithmic capabilities crucial for high-quality decision making in games. These findings highlight the potential for enhancing LLM agents' performance in multi-player games using information-theoretic approaches of deception and communication with complex environments.
Reference Key
poje2024effectentropy Use this key to autocite in the manuscript while using SciMatic Manuscript Manager or Thesis Manager
Authors Poje, Kristijan;Brcic, Mario;Kovac, Mihael;Babac, Marina Bagic;
Journal Entropy (Basel, Switzerland)
Year 2024
DOI
524
URL
Keywords

Citations

No citations found. To add a citation, contact the admin at info@scimatic.org

No comments yet. Be the first to comment on this article.