STAIR Actions: A Video Dataset of Everyday Home Actions
Abstract
We introduce STAIR Actions, a new large-scale video dataset for human action
recognition. STAIR Actions contains 100 categories of action labels
representing fine-grained everyday home actions, so it can support research on
home-oriented tasks such as nursing, caregiving, and security. Each video in
STAIR Actions has a single action label, and each action category has around
1,000 videos that were obtained from YouTube or produced by crowd workers.
Most videos are five to six seconds long, and the dataset contains 102,462
videos in total. We explain how we constructed STAIR Actions and compare its
characteristics with those of existing datasets for human action recognition.
Experiments with three major action recognition models show that STAIR Actions
is large enough to train them and yields good performance. STAIR Actions can
be downloaded from http://actions.stair.center
| Field | Value |
|---|---|
| Reference Key | takeuchi2018stair |
| Authors | Yuya Yoshikawa; Jiaqing Lin; Akikazu Takeuchi |
| Journal | arXiv |
| Year | 2018 |
| DOI | Not found |
| URL | |
| Keywords | |

Use the reference key to autocite this article in the SciMatic Manuscript Manager or Thesis Manager.
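For manuscripts prepared outside the SciMatic tools, the same metadata can be written as a plain BibTeX entry under the reference key above. The following is a minimal sketch assembled only from the table: the `@article` entry type mirrors the Journal field, and the DOI, URL, and arXiv identifier are left out rather than guessed, since they are not present in the record.

```bibtex
% Minimal entry assembled from the metadata table above.
% DOI, URL, and arXiv identifier are omitted because they are
% not present in the record; add them once known.
@article{takeuchi2018stair,
  author  = {Yoshikawa, Yuya and Lin, Jiaqing and Takeuchi, Akikazu},
  title   = {{STAIR} Actions: A Video Dataset of Everyday Home Actions},
  journal = {arXiv},
  year    = {2018},
}
```

A LaTeX manuscript can then cite the article with \cite{takeuchi2018stair}.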