Recent years have witnessed significant progress in learning 3D representations from 2D visual data, enabling us to infer our 3D world from just one or a few 2D observations at test time. However, many of these approaches rely heavily on synthetic data and/or extensive manual annotations for supervision during training, and hence struggle to generalize to complex real-world scenarios. Unsupervised 3D learning has therefore been gaining popularity, with the goal of understanding our entire 3D world by learning from unannotated, unconstrained (i.e., "in-the-wild") data. This workshop aims to cover recent advances in unsupervised and weakly supervised 3D learning.
More concretely, this means learning 3D shape, pose, motion, appearance, illumination, and material properties from "in-the-wild" images and videos, such as those collected from the Internet, without explicit ground-truth supervision.
Beyond surveying the current state of the field, the workshop aims to shed light on open challenges as well as the next steps. For example:
- What level of supervision should we aim for when learning 3D in the wild at scale: well-captured real 3D data, synthetic data, weak annotations (e.g., keypoints, masks), category template shapes, category labels only, or nothing at all?
- Inductive biases are often injected into unsupervised methods to replace explicit ground-truth supervision. Can they actually generalize to "in-the-wild" environments?
- What is the right representation for effective 3D learning in the wild?
- How can we evaluate methods trained on "in-the-wild" data, given that ground-truth information is usually unavailable?
- How can we move beyond single-category learning towards more general 3D scene understanding?
- How can we recover complex appearance properties beyond geometry, for which ground-truth labels are even more difficult to obtain?
The workshop will be held online in conjunction with ICCV 2021. The talks will be livestreamed and recorded on YouTube and the conference platform. There are no paper submissions at this workshop.