HANDS

Observing and Understanding Hands in Action
in conjunction with ECCV 2024


Overview

Welcome to HANDS@ECCV24.

The HANDS workshop will gather vision researchers working on perceiving hands in action, including 2D and 3D hand detection, segmentation, pose/shape estimation, and tracking. It will also cover related applications, including gesture recognition, hand-object manipulation analysis, hand activity understanding, and interactive interfaces.

The eighth edition of this workshop will emphasize the use of large foundation models (e.g., CLIP, Point-E, Segment Anything, latent diffusion models) for hand-related tasks. These models have revolutionized perception in AI, making groundbreaking contributions to multimodal understanding, zero-shot learning, and transfer learning. However, their potential for hand-related tasks remains largely untapped.

Schedule

The HANDS workshop will be held on the afternoon of September 30th, 2024, at MiCo Milano.


Further information will be released later.

Accepted Papers & Extended Abstracts

We are delighted to announce that the following accepted papers and extended abstracts will be presented at the workshop!

Full-length Papers

  • AirLetters: An Open Video Dataset of Characters Drawn in the Air
    Rishit Dagli, Guillaume Berger, Joanna Materzynska, Ingo Bax, Roland Memisevic
  • RegionGrasp: A Novel Task for Contact Region Controllable Hand Grasp Generation
    Yilin Wang, Chuan Guo, Li Cheng, Hai Jiang
  • Generative Hierarchical Temporal Transformer for Hand Pose and Action Modeling
    Yilin Wen, Hao Pan, Takehiko Ohkawa, Lei Yang, Jia Pan, Yoichi Sato, Taku Komura, Wenping Wang
  • Adaptive Multi-Modal Control of Digital Human Hand Synthesis using a Region-Aware Cycle Loss
    Qifan Fu, Xiaohang Yang, Muhammad Asad, Changjae Oh, Shanxin Yuan, Gregory Slabaugh
  • Conditional Hand Image Generation using Latent Space Supervision in Random Variable Variational Autoencoders
    Vassilis Nicodemou, Iason Oikonomidis, Giorgos Karvounas, Antonis Argyros
  • ChildPlay-Hand: A Dataset of Hand Manipulations in the Wild
    Arya Farkhondeh, Samy Tafasca, Jean-Marc Odobez
  • EMAG: Ego-motion Aware and Generalizable 2D Hand Forecasting from Egocentric Videos
    Masashi Hatano, Ryo Hachiuma, Hideo Saito

Extended Abstracts

  • Leveraging Affordances and Attention-based models for Short Term Object Interaction Anticipation
    Lorenzo Mur-Labadia, Ruben Martinez-Cantin, Jose J Guerrero, Giovanni Maria Farinella, Antonino Furnari
  • Diffusion-based Interacting Hand Pose Transfer
    Junho Park, Yeieun Hwang, Suk-Ju Kang
  • Are Synthetic Data Useful for Egocentric Hand-Object Interaction Detection?
    Rosario Leonardi, Antonino Furnari, Francesco Ragusa, Giovanni Maria Farinella
  • Parameterized Quasi-Physical Simulators for Dexterous Manipulations Transfer
    Xueyi Liu, Kangbo Lyu, Jieqiong Zhang, Tao Du, Li Yi
  • Pre-Training for 3D Hand Pose Estimation with Contrastive Learning on Large-Scale Hand Images in the Wild
    Nie Lin, Takehiko Ohkawa, Mingfang Zhang, Yifei Huang, Ryosuke Furuta, Yoichi Sato
  • Task-Oriented Human Grasp Synthesis via Context- and Task-Aware Diffusers
    An-Lun Liu, Yu-Wei Chao, Yi-Ting Chen
  • Action Scene Graphs for Long-Form Understanding of Egocentric Videos
    Ivan Rodin*, Antonino Furnari*, Kyle Min*, Subarna Tripathi, Giovanni Maria Farinella
  • Get a Grip: Reconstructing Hand-Object Stable Grasps in Egocentric Videos
    Zhifan Zhu, Dima Damen
  • Self-Supervised Learning of Deviation in Latent Representation for Co-speech Gesture Video Generation
    Huan Yang, Jiahui Chen, Chaofan Ding, Runhua Shi, Siyu Xiong, Qingqi Hong, Xiaoqi Mo, Xinhan Di
  • OCC-MLLM-Alpha: Empowering Multi-modal Large Language Model for the Understanding of Occluded Objects with Self-Supervised Test-Time Learning
    Shuxin Yang, Xinhan Di
  • Dyn-HaMR: Recovering 4D Interacting Hand Motion from a Dynamic Camera
    Zhengdi Yu, Alara Dirik, Stefanos Zafeiriou, Tolga Birdal
  • Learning Dexterous Object Manipulation with a Robotic Hand via Goal-Conditioned Visual Reinforcement Learning Using Limited Demonstrations
    Samyeul Noh, Hyun Myung

Invited Posters

  • AttentionHand: Text-driven Controllable Hand Image Generation for 3D Hand Reconstruction in the Wild
    Junho Park
  • HandDAGT: A Denoising Adaptive Graph Transformer for 3D Hand Pose Estimation
    Wencan Cheng, Eunji Kim, Jong Hwan Ko
  • On the Utility of 3D Hand Poses for Action Recognition
    Md Salman Shamil, Dibyadip Chatterjee, Fadime Sener, Shugao Ma, Angela Yao

Topics

We will cover all hand-related topics. Relevant topics include, but are not limited to:
  • Hand pose and shape estimation
  • Hand & object interactions
  • Hand detection/segmentation
  • Hand gesture/action recognition
  • 4D hand tracking and motion capture
  • Hand motion synthesis
  • Hand modeling, rendering, and generation
  • Camera systems and annotation tools
  • Novel algorithms and network architectures
  • Multi-modal learning
  • Self-/un-/weakly-supervised learning
  • Generalization and adaptation
  • Egocentric vision for AR/VR
  • Robot grasping, object manipulation, and haptics

Invited Speakers

Hanbyul Joo
Seoul National University

Shunsuke Saito
Reality Labs Research

Shubham Tulsiani
Carnegie Mellon University

Qi Ye
Zhejiang University

Organizers

Hyung Jin Chang
University of Birmingham

Rongyu Chen
National University of Singapore

Zicong Fan
ETH Zurich

Otmar Hilliges
ETH Zurich

Kun He
Meta Reality Labs

Take Ohkawa
University of Tokyo

Yoichi Sato
University of Tokyo

Elden Tse
National University of Singapore

Linlin Yang
Communication University of China

Lixin Yang
Shanghai Jiao Tong University

Angela Yao
National University of Singapore

Linguang Zhang
Facebook Reality Labs (Oculus)

Sponsors

Contact

hands2024@googlegroups.com