Contrastive language and vision learning of general fashion concepts.
2022
Abstract
The steady rise of online shopping goes hand in hand with the development of increasingly complex ML and NLP models. While most use cases are cast as specialized supervised learning problems, we argue that practitioners would greatly benefit from general and transferable representations of products. In this work, we build on recent developments in contrastive learning to train FashionCLIP, a CLIP-like model adapted for the fashion industry. We demonstrate the effectiveness of the representations learned by FashionCLIP with extensive tests across a variety of tasks, datasets and generalization probes. We argue that adaptations of large pre-trained models such as CLIP offer new perspectives in terms of scalability and sustainability for certain types of players in the industry. Finally, we detail the costs and environmental impact of training, and release the model weights and code as an open-source contribution to the community.
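The contrastive objective behind CLIP-style training pairs each image with its matching text caption and treats every other pairing in the batch as a negative. The sketch below illustrates this symmetric InfoNCE loss on toy embeddings; it is an illustrative, dependency-free reconstruction of the general technique, not the authors' released FashionCLIP code, and the function names and temperature value are assumptions.

```python
import math

def l2_normalize(v):
    # project an embedding onto the unit sphere (cosine similarity = dot product)
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def clip_loss(image_embs, text_embs, temperature=0.07):
    """Symmetric InfoNCE loss as used in CLIP-style contrastive training.

    Matching image/text pairs share an index; all other pairs in the
    batch act as negatives. Returns the mean of the image->text and
    text->image cross-entropies.
    """
    imgs = [l2_normalize(v) for v in image_embs]
    txts = [l2_normalize(v) for v in text_embs]
    n = len(imgs)
    # cosine-similarity logits, scaled by the temperature
    logits = [[sum(a * b for a, b in zip(imgs[i], txts[j])) / temperature
               for j in range(n)] for i in range(n)]

    def xent(rows):
        # cross-entropy where the target for row i is column i (the diagonal)
        total = 0.0
        for i, row in enumerate(rows):
            m = max(row)  # stabilize log-sum-exp
            lse = m + math.log(sum(math.exp(x - m) for x in row))
            total += lse - row[i]
        return total / n

    logits_t = [list(col) for col in zip(*logits)]  # text->image direction
    return 0.5 * (xent(logits) + xent(logits_t))
```

With perfectly aligned embeddings the loss approaches zero, while mismatched pairs drive it up, which is what pushes matching product images and captions together during training.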
| Field | Value |
|---|---|
| Reference Key | chia2022contrastivescientific |
| Authors | Chia, Patrick John; Attanasio, Giuseppe; Bianchi, Federico; Terragni, Silvia; Magalhães, Ana Rita; Goncalves, Diogo; Greco, Ciro; Tagliabue, Jacopo |
| Journal | Scientific Reports |
| Year | 2022 |
| DOI | 18958 |
| URL | |
| Keywords | |

Use this key to autocite in the manuscript when using SciMatic Manuscript Manager or Thesis Manager.