Collaborative Human-AI Risk Annotation: Co-Annotating Online Incivility with CHAIRA
Collaborative human-AI annotation is a promising approach for various tasks
with large-scale and complex data. Tools and methods to support effective
human-AI collaboration for data annotation are an important direction for
research. In this paper, we present CHAIRA: a Collaborative Human-AI Risk
Annotation tool that enables human and AI agents to collaboratively annotate
online incivility. We leveraged Large Language Models (LLMs) to facilitate the
interaction between human and AI annotators and examined four different
prompting strategies. The CHAIRA system combines these prompting
approaches with human-AI collaboration for online incivility data annotation.
We evaluated CHAIRA on 457 user comments with ground truth labels based on the
inter-rater agreement between human and AI coders. We found that the most
collaborative prompt supported a high level of agreement between a human agent
and AI, comparable to that of two human coders. While the AI missed some
implicit incivility that human coders easily identified, it also spotted
politically nuanced incivility that human coders overlooked. Our study reveals
the benefits and challenges of using AI agents for incivility annotation and
provides design implications and best practices for human-AI collaboration in
subjective data annotation.
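For readers unfamiliar with inter-rater agreement, the sketch below illustrates one common way agreement between a human coder and an AI coder could be quantified, using Cohen's kappa. The metric choice, binary label scheme, and variable names are illustrative assumptions; the abstract does not specify the authors' exact agreement measure or implementation.

```python
# Illustrative sketch only: the paper evaluates human-AI agreement on
# incivility labels, but the metric and label scheme here are assumptions,
# not the authors' implementation.
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary incivility labels (1 = uncivil, 0 = civil) for the
# same set of comments, one list per annotator.
human_labels = [1, 0, 0, 1, 1, 0, 1, 0]
ai_labels    = [1, 0, 1, 1, 1, 0, 0, 0]

# Cohen's kappa corrects raw percent agreement for agreement expected by chance.
kappa = cohen_kappa_score(human_labels, ai_labels)
print(f"Cohen's kappa (human vs. AI): {kappa:.2f}")
```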
Reference Key | singh2024collaborative |
Authors | Jinkyung Katie Park; Rahul Dev Ellezhuthil; Pamela Wisniewski; Vivek Singh |
Journal | arXiv |
Year | 2024 |