MASK-RL: Multiagent Video Object Segmentation Framework Through Reinforcement Learning

Abstract
Integrating human-provided location priors into video object segmentation has been shown to be an effective strategy to enhance performance, but their application at large scale is infeasible. Gamification can help reduce the annotation burden, but it still requires user involvement. We propose a video object segmentation framework that combines the advantages of user feedback and gamification by simulating multiple game players through a reinforcement learning (RL) model that reproduces the human ability to pinpoint moving objects, and by using the simulated feedback to drive the decisions of a fully convolutional deep segmentation network. Experimental results on the DAVIS-17 benchmark show that: 1) including a user-provided prior, even if imprecise, yields high performance; 2) our RL agent satisfactorily replicates the variability of humans in identifying spatiotemporally salient objects; and 3) employing artificially generated priors in an unsupervised video object segmentation model reaches state-of-the-art performance.
Reference Key: vecchio2020maskrlieee
Authors: Vecchio, Giuseppe; Palazzo, Simone; Giordano, Daniela; Rundo, Francesco; Spampinato, Concetto
Journal: IEEE Transactions on Neural Networks and Learning Systems
Year: 2020
DOI: 10.1109/TNNLS.2019.2963282