HANDS

Observing and Understanding Hands in Action
in conjunction with ECCV 2024

Overview

Welcome to HANDS@ECCV24.

The HANDS workshop will gather vision researchers working on perceiving hands in action, including 2D and 3D hand detection, segmentation, pose/shape estimation, and tracking. We will also cover related applications, including gesture recognition, hand-object manipulation analysis, hand activity understanding, and interactive interfaces.

The eighth edition of this workshop will emphasize the use of large foundation models (e.g., CLIP, Point-E, Segment Anything, Latent Diffusion Models) for hand-related tasks. These models have revolutionized perception in AI, with groundbreaking contributions to multimodal understanding, zero-shot learning, and transfer learning. However, their potential for hand-related tasks remains largely untapped.

Schedule

TBD

Call for Papers

We will call for full-length submissions, with published proceedings, via the CMT system. We will also invite submissions of 2-3 page extended abstracts and recently published hand-related papers/posters to be presented at the workshop.

Challenges

We will present the HANDS24 challenge based on AssemblyHands, ARCTIC, OakInk2, and UmeTrack. More details will be released later. We look forward to your participation.

Invited Speakers

TBD

Organizers

Hyung Jin Chang
University of Birmingham

Zicong Fan
ETH Zurich

Otmar Hilliges
ETH Zurich

Kun He
Meta Reality Labs

Takehiko Ohkawa
University of Tokyo

Yoichi Sato
University of Tokyo

Linlin Yang
Communication University of China

Lixin Yang
Shanghai Jiao Tong University

Angela Yao
National University of Singapore

Linguang Zhang
Meta Reality Labs

Contact

lyang@cuc.edu.cn