Fifth Workshop on Computer Vision for AR/VR

(CV4ARVR)

October 16, 2021

Organized in conjunction with ICCV 2021


@cv4arvr


Call for Extended Abstracts

Extended submission deadline: August 6, 2021, 11:59pm PT

Author notification: September 17, 2021


Submission and Reviews

We invite submissions of 2-3 page extended abstracts (using the ICCV 2021 format) describing work in the domains suggested below or in closely related areas. Accepted submissions will be presented as posters at the workshop, and we encourage submissions that incorporate live demonstrations. Authors may submit a draft of their poster as an optional fourth page; the extended abstract and poster sketch should be submitted together as a single PDF of no more than four pages (excluding references). Reviewing of abstract submissions will be double-blind. Previously published work, including papers accepted to the main ICCV 2021 conference, may also be submitted (see below).

Submission of Previously Published Work

We also invite submissions of relevant work that has been previously published, including papers accepted to the main ICCV 2021 conference. The workshop is intended less as a publication venue than as a place to bring together those in the community who work on, or are interested in, computer vision for AR and VR. For previously published work, authors need not maintain anonymity; instead, please cite the existing publication in the submitted abstract. These submissions will be reviewed single-blind (much as a journal submission is: authors are known to reviewers, reviewers are unknown to authors).


Accepted extended abstracts will be made publicly available as non-archival reports, so the work may subsequently be submitted to other venues. Abstracts may be submitted up to the announced deadline.

Topics* of interest for submissions include:


Perception:

1. Problems that need to be solved for robust AR/VR:

  • 3D Reconstruction: methods that automatically generate 3D virtual environments or objects from images or other data captured from the real environment (e.g., structure from motion).

  • Tracking Techniques: methods for tracking a target object or environment via cameras and sensors, and for estimating viewpoint poses (e.g., SLAM).

  • Calibration and Registration: geometric or photometric calibration methods, and methods to align multiple coordinate frames.

2. Problems related to human interaction in AR/VR:

  • Recognition: recognition and detection technologies that could be used in AR/VR applications to improve the user experience.

  • Interaction Techniques and User Interfaces: any CV technologies that could improve interaction with AR/VR environments (e.g., language and vision: using natural language as input or output to the CV systems deployed in AR/VR applications).

Projection:

  • Display Techniques: research on display hardware to present virtual content in AR/VR, including head-worn, handheld, and projected displays.

  • Image Generation: realistic image synthesis, inpainting, or rendering techniques that could benefit AR/VR projections.

Applications:

  • Visualization: research into methods that use AR/VR to make complex 2D/3D data easier to navigate and understand.

  • AR/VR Applications: research on AR/VR systems in application domains such as medicine, manufacturing, and the military, among others.

  • Efficient CV Systems: research on how to efficiently deploy or train CV models on devices with limited computational power (e.g., mobile and wearable devices).

  • Mobile/Wearable AR/VR: research on AR/VR applications and techniques for wearable and mobile platforms, such as tablets and smartphones.


In general, we welcome any CV solution that could benefit AR/VR applications.

We encourage researchers and students to accompany their poster presentations with live demonstrations of their research results. You can contact the Poster & Demo Session organizers with questions regarding submissions and demo format.

* Several topic definitions are adapted from Zhou et al. and Kim et al.


Workshop Sponsors