Does ChatGPT Have a Mind?
This paper examines the question of whether Large Language Models (LLMs) like
ChatGPT possess minds, focusing specifically on whether they have a genuine
folk psychology encompassing beliefs, desires, and intentions. We approach this
question by investigating two key aspects: internal representations and
dispositions to act. First, we survey various philosophical theories of
representation, including informational, causal, structural, and teleosemantic
accounts, arguing that LLMs satisfy key conditions proposed by each. We draw on
recent interpretability research in machine learning to support these claims.
Second, we explore whether LLMs exhibit robust dispositions to perform actions,
a necessary component of folk psychology. We consider two prominent
philosophical traditions, interpretationism and representationalism, to assess
LLM action dispositions. While we find evidence suggesting LLMs may satisfy
some criteria for having a mind, particularly in game-theoretic environments,
we conclude that the data remains inconclusive. Additionally, we reply to
several skeptical challenges to LLM folk psychology, including issues of
sensory grounding, the "stochastic parrots" argument, and concerns about
memorization. Our paper has three main upshots. First, LLMs do have robust
internal representations. Second, whether LLMs have robust action dispositions
remains an open question. Third, existing skeptical
challenges to LLM representation do not survive philosophical scrutiny.
Reference Key | levinstein2024does (use this key to autocite in SciMatic Manuscript Manager or Thesis Manager)
---|---
Authors | Simon Goldstein; Benjamin A. Levinstein
Journal | arXiv
Year | 2024
DOI | DOI not found
URL | |
Keywords | |