Learning Latent Parameters without Human Response Patterns: Item Response Theory with Artificial Crowds

2019
Abstract
Incorporating Item Response Theory (IRT) into NLP tasks can provide valuable information about model performance and behavior. Traditionally, IRT models are learned using human response pattern (RP) data, presenting a significant bottleneck for large data sets like those required for training deep neural networks (DNNs). In this work we propose learning IRT models using RPs generated from artificial crowds of DNN models. We demonstrate the effectiveness of learning IRT models using DNN-generated data through quantitative and qualitative analyses for two NLP tasks. Parameters learned from human and machine RPs for natural language inference and sentiment analysis exhibit medium to large positive correlations. We demonstrate a use-case for latent difficulty item parameters, namely training set filtering, and show that using difficulty to sample training data outperforms baseline methods. Finally, we highlight cases where human expectation about item difficulty does not match difficulty as estimated from the machine RPs.
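To make the IRT setup concrete, the sketch below fits a one-parameter logistic (Rasch) model to a binary response matrix by gradient ascent on the log-likelihood: each row is one respondent (which, as in the paper, could be a DNN from an artificial crowd), each column one item, and the model learns a latent ability per respondent and a latent difficulty per item. This is a minimal illustration with simulated data, a hand-picked learning rate, and plain maximum likelihood; it is not the paper's actual estimation procedure.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_rasch(responses, lr=0.1, n_iters=500):
    """Fit a 1PL (Rasch) IRT model to a binary response matrix.

    responses: (n_subjects, n_items) array of 0/1 answers, e.g. whether
    each model in an artificial crowd labeled each item correctly.
    Returns per-subject ability (theta) and per-item difficulty (b),
    where P(correct) = sigmoid(theta - b).
    """
    n_subjects, n_items = responses.shape
    theta = np.zeros(n_subjects)  # latent ability per respondent
    b = np.zeros(n_items)         # latent difficulty per item
    for _ in range(n_iters):
        p = sigmoid(theta[:, None] - b[None, :])  # predicted P(correct)
        resid = responses - p                     # log-likelihood gradient signal
        theta += lr * resid.sum(axis=1) / n_items
        b -= lr * resid.sum(axis=0) / n_subjects
        theta -= theta.mean()  # center abilities to fix location indeterminacy
    return theta, b

# Toy data: 50 simulated respondents, 5 items of increasing true difficulty.
rng = np.random.default_rng(0)
true_b = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
true_theta = rng.normal(size=50)
probs = sigmoid(true_theta[:, None] - true_b[None, :])
data = (rng.random(probs.shape) < probs).astype(float)

theta_hat, b_hat = fit_rasch(data)
print(np.round(b_hat, 2))  # estimated item difficulties
```

The recovered difficulty estimates `b_hat` order the items from easiest to hardest, which is the quantity the abstract's training-set-filtering use case would sample on: rank training items by estimated difficulty and select accordingly.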
Reference Key
lalor2019learningproceedings
Authors Lalor, John P.; Wu, Hao; Yu, Hong
Journal Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)
Year 2019
DOI 10.18653/v1/D19-1434
