From what data can we learn concepts such as objects, actions, and scenes? Recent studies on zero-shot, one-shot, and few-shot learning have shown the effectiveness of collaboration between computer vision and natural language processing. This workshop promotes deeper and wider collaboration across many research fields to scale up these studies. With the common theme of Learning Concepts, we hope to provide a platform for researchers to exchange knowledge from their respective backgrounds.
We call for papers on the following 5x5 topics: learning concepts (objects, actions, scenes, etc.) from various types of data.
· Few-/Low-/k-Shot Learning
· One-Shot Learning
· Zero-Shot Learning
· Cross-Domain Learning
· Meta Learning

from

· Image Data
· Video Data
· Text Data
· Audio Data
· Sensor Data
The organizers will also host a new, challenging competition, namely Few-Shot Verb Image Classification, featuring images of 1,500 verb concepts. It is part of the Large-Scale Few-Shot Learning Challenge, which aims to create a large-scale platform for benchmarking few-shot, one-shot, and zero-shot learning algorithms. Papers on this challenge are also welcome and will go through the regular review process.
We invite original research papers and extended abstracts. All submissions should be anonymized and formatted according to the ICCV 2019 template.
Research Papers (4-8 pages, excluding references) should contain unpublished original research. Accepted papers will be published in the ICCV workshop proceedings and archived in the IEEE Xplore Digital Library and the CVF Open Access repository.
Extended Abstracts (2 pages, including references) may describe preliminary work or previously published work, and will be archived on this website.
Please submit papers via the submission system (https://cmt3.research.microsoft.com/MDALC2019).
Workshop Paper Submission: July 26th, 2019
Notification of acceptance: August 22nd, 2019
Camera-ready paper: August 29th, 2019
Workshop: the morning of October 27th, 2019, at the ICCV 2019 venue in Seoul, Korea.