HANDS

Observing and Understanding Hands in Action
in conjunction with ECCV 2022




Overview

Welcome to our ECCV 2022 workshop!

The sixth edition of this workshop, held at ECCV 2022, aims to gather researchers who work on 2D/3D hand detection, segmentation, pose estimation, and tracking, and on their applications. This edition emphasizes reduced reliance on ground-truth labels and focuses on topics such as semi-supervised and self-supervised learning for training hand pose estimation systems. The development of RGB-D sensors and camera miniaturization (wearable cameras, smartphones, ubiquitous computing) has opened the door to a whole new range of technologies and applications that require detecting hands and recognizing hand poses in a variety of scenarios, including AR/VR, assisted car driving, robot grasping, and health care. However, labelling accurate real-world hand poses is still non-trivial, and most existing hand pose methods fail to generalize well to real-world scenarios, especially those involving hand-object or hand-hand interaction. As new multiview video benchmarks have been proposed for hand-object and hand-hand interaction, our goal is to encourage semi-/self-supervised learning for hand poses that exploits spatio-temporal information and reduces reliance on annotations. We will also cover a breadth of applications, including sign language recognition, desktop interaction, egocentric views, object manipulation, and far-range and over-the-shoulder driver footage.

Topics

We welcome all hand-related topics. Relevant topics include, but are not limited to:
  • 2D/3D hand pose estimation
  • Semi-/self-/weakly-supervised pose estimation
  • Hand-object/hand-hand interaction
  • Robot grasping and object manipulation
  • Imitation learning, reinforcement learning
  • Hand detection/segmentation
  • Gesture recognition/interfaces
  • 3D articulated hand tracking
  • Hand modelling and rendering
  • Hand activity recognition
  • Egocentric vision systems
  • Structured prediction, regression, and other relevant theories/algorithms
  • Applications of hand pose estimation in AR/VR/robotics/haptics
  • Driver hand activity analysis

Schedule (Israel Time)

Sunday afternoon (2:00 pm - 6:00 pm), Oct. 23, 2022
Grand Ballroom E, David Intercontinental, Tel Aviv

14:20 - 14:30 Introduction/opening remarks
14:30 - 15:00 Invited Talk: Robert Wang
Title: Hands for AR/VR
15:00 - 15:30 Invited Talk: Hyung Jin Chang
Title: Understanding Hand-Object Interactions in 3D with Graph-based Network
15:30 - 15:34 Challenge results
15:34 - 15:42 Technical report: Cunlin Wu
Title: How to Lift Multi-View 2D Hand Pose to 3D Counterpart: A Closed Form Solution
15:42 - 15:50 Technical report: Xiaozheng Zheng
Title: Multiview-Consistent Self-Supervised Learning for Hand Pose Reconstruction
15:50 - 16:50 Coffee break & Paper (poster) presentations
16:50 - 17:20 Invited speaker: Yu-Wei Chao
Title: Human-Robot Handover: From Real to Sim
17:20 - 17:50 Invited speaker: Dimitrios Tzionas
Title: Towards Human Avatars in Interaction with Objects
17:50 - 18:00 Conclusion & prizes

Accepted Papers & Extended Abstracts

We are delighted to announce that the following accepted papers and extended abstracts will appear in the workshop! Authors of all extended abstracts and invited posters should prepare a poster for presentation during the workshop.


Poster size: the posters should be portrait (vertical), with a maximum size of 90x180 cm.

Accepted Extended Abstracts

  • Background Mixup Data Augmentation for Hand and Object-in-Contact Detection.
    Koya Tango, Takehiko Ohkawa, Ryosuke Furuta, Yoichi Sato.
    [pdf] [supp]
  • Scalable High-fidelity 3D Hand Shape Reconstruction via Graph Frequency Decomposition.
    Tianyu Luan, Jingjing Meng, Junsong Yuan.
    [pdf]
  • MC-hands-1M: A glove-wearing hand dataset for pose estimation.
    Prodromos Boutis, Zisis Batzos, Konstantinos Konstantoudakis, Anastasios Dimou, Petros Daras.
    [pdf] [arxiv]
  • Controllable Human Grasp Generation.
    Lan Feng, Sammy Christen, Jie Song.
    [pdf] [poster]

Technical Reports

  • Multiview-Consistent Self-Supervised Learning for Hand Pose Reconstruction.
    Xiaozheng Zheng, Chao Wen, Zhou Xue.
    [pdf]
  • How to Lift Multi-View 2D Hand Pose to 3D Counterpart: A Closed Form Solution.
    Cunlin Wu, Yang Xiao, Changlong Jiang, Jinghong Zheng, Zhiguo Cao, Zhiwen Fang, Joey Tianyi Zhou, and Junsong Yuan.
    [pdf]

Invited Posters

  • S2Contact: Graph-based Network for 3D Hand-Object Contact Estimation with Semi-Supervised Learning.
    Tze Ho Elden Tse, Zhongqun Zhang, Kwang In Kim, Ales Leonardis, Feng Zheng, Hyung Jin Chang.
  • AlignSDF: Pose-Aligned Signed Distance Fields for Hand-Object Reconstruction.
    Zerui Chen, Yana Hasson, Cordelia Schmid, Ivan Laptev.
  • Domain Adaptive Hand Keypoint and Pixel Localization in the Wild.
    Takehiko Ohkawa, Yu-Jhe Li, Qichen Fu, Ryosuke Furuta, Kris M. Kitani, Yoichi Sato.
  • PressureVision: Estimating Hand Pressure from a Single RGB Image.
    Patrick Grady, Chengcheng Tang, Samarth Brahmbhatt, Christopher D. Twigg, Chengde Wan, James Hays, Charles C. Kemp.
  • Generative Adversarial Network for Future Hand Segmentation from Egocentric Video.
    Wenqi Jia, Miao Liu, James M. Rehg.
  • TransGrasp: Grasp Pose Estimation of a Category of Objects by Transferring Grasps from Only One Labeled Instance.
    Hongtao Wen*, Jianhang Yan*, Wanli Peng, Yi Sun.
    [poster] [video]

Invited Speakers

Prof. Dr. Hyung Jin Chang is an Associate Professor in the School of Computer Science at the University of Birmingham and a Turing Fellow of the Alan Turing Institute. His research interests are focused on human-centred visual learning, especially in application to human-robot interaction. His areas of expertise are computer vision and machine learning, including deep learning.
Dr. Yu-Wei Chao is a Senior Research Scientist at the NVIDIA Seattle Robotics Lab. His research lies at the intersection of computer vision, machine learning, robotics, and simulation. His recent work focuses on human-robot interaction and robot learning from humans in the context of object manipulation.
Prof. Dr. Dimitrios Tzionas is an Assistant Professor at the University of Amsterdam. Earlier, he was a Research Scientist in the Perceiving Systems (PS) department at the MPI for Intelligent Systems in Tübingen. His goal is to develop human-centered AI that perceives humans, understands their behavior, and helps them achieve their goals. Potential applications include Augmented/Virtual Reality (AR/VR), Human-Computer Interaction (HCI), and Human-Robot Interaction (HRI).
Dr. Robert Wang supports a computer vision and machine perception team at Facebook Reality Labs / Oculus developing and shipping egocentric hand-tracking, body-tracking, and human understanding technology for augmented reality and virtual reality. Prior to that, he co-founded a small company, Nimble VR (acquired by Facebook), that built skeletal hand-tracking software.

Organizers

  • Prof. Antonis Argyros (FORTH)
  • Dr. Anil Armagan (Huawei)
  • Dr. Guillermo Garcia-Hernando (Niantic)
  • Prof. Otmar Hilliges (ETHZ)
  • Prof. Tae-Kyun Kim (KAIST and Imperial College London)
  • Prof. Vincent Lepetit (ENPC ParisTech and TU Graz)
  • Dr. Iason Oikonomidis (FORTH)
  • Prof. Angela Yao (NUS)
Challenge Committee

  • Mr. Sammy Christen (ETH Zürich)
  • Mr. Shreyas Hampali (Graz University of Technology)
  • Mr. Linlin Yang (Uni Bonn and NUS)

Sponsors

Contact

hands2022@googlegroups.com