HANDS

Observing and Understanding Hands in Action
in conjunction with ICCV 2025


Overview

We are very happy to host a series of challenges built on the recently introduced benchmarks UmeTrack, GraspM3, OakInk2, and GigaHand. To participate in a challenge, please fill out the Google Form and accept the terms and conditions.

Winners and prizes will be announced and awarded during the workshop.

Please see General Rules and Participation and the four tracks below for more details.

Timeline

  • July 3, 2025 (opened): challenge data and website released; registration opens
  • Around July 15, 2025 (please refer to each track page for details): challenges start
  • TBD: registration closes
  • October 1 - October 10, 2025 (please refer to each track page for the specific deadline): challenge submission deadlines
  • Around October 10, 2025 (please refer to each track page for details): decisions to participants
  • October 17, 2025: technical report deadline for challenges (invited teams)

General Rules and Participation

All participants must follow the general challenge rules below; additional rules can be found on each official track page.

  • To participate in the challenge, please fill out the Google Form and accept the terms and conditions.

    • Please DO use your institution's email address. Please DO NOT use a personal email address such as gmail.com, qq.com, or 163.com.

    • Each team must register under only one user ID/email address. The list of team members cannot be changed or rearranged throughout the competition.

    • Each team should use the same email address when creating accounts on the evaluation server, and should use the registered team name (verbatim) on the server.

    • Each individual may participate in only one team and must provide an institutional email address at registration.

    • The team name should be formal, and we reserve the right to change a team name after discussion with the team.

    • Each team will receive a confirmation email after registration.

    • Teams found to be registered under multiple IDs will be disqualified.

    • For any special cases, please email the organizers.

  • Please make sure the primary contact email provided at registration is valid and monitored.

    • We will contact participants via email whenever there is an important update.

    • If there are any special circumstances or ambiguities that may lead to disputes, please email the organizers first for clarification or approval; raising such issues only after the fact may result in disqualification.

  • To encourage fair competition, different tracks may impose limits on, e.g., overall model size or training data. Details can be found on each track page.

    • Teams may use any publicly available and appropriately licensed data (where allowed by the track) to train their models, in addition to the data provided by the organizers.

    • The number of daily and overall submissions may be limited, depending on the track.

    • A team's best performance CANNOT be hidden during the competition. Hiding the best performance may result in a warning or even disqualification.

    • Any supervised or unsupervised training on the validation/testing sets is not allowed in this competition.

  • Reproducibility is the responsibility of the winning teams, and we invite all teams to present their methods.

    • Winning teams must provide their source code to reproduce their results, under strict confidentiality rules, if requested by the organizers or other participants. If the organizing committee determines that the submitted code runs with errors or does not yield results comparable to those on the final leaderboard, and the team is unwilling to cooperate, the team will be disqualified and the winning place will go to the next team on the leaderboard.

    • In order for participants to be eligible for competition prizes and be included in the official rankings (to be presented during the workshop and in subsequent publications), information about their submission must be provided to the organizers. This information may include, but is not limited to, details on the method, synthetic and real data usage, architecture, and training details.

    • For each submission, participants must keep the parameters of their method constant across all testing data for a given track.

    • To be considered a valid candidate in the competition, a method has to beat the baseline by a non-trivial margin. A method is invalid if it contains no significant technical changes; for example, simply replacing a ResNet18 backbone with a ResNet101 backbone does not count as a valid method. The organizers reserve all rights to determine the validity of a method. We will invite all valid teams to present their methods via a 2-3 page technical report and/or poster presentation.

    • Winners should provide a 2-3 page technical report, a winner talk, and a poster presentation during the workshop.

MegoTrack

In XR applications, accurately estimating hand poses from egocentric cameras is important for enabling social presence and interactions with the environment. The primary difficulties arise from the integration of multiple calibrated head-mounted cameras and the on-device personalization systems for these cameras. Yet this setting also provides a new opportunity: pose estimation can be conditioned on a pre-calibrated hand shape to improve accuracy. This challenge is designed to address these unique problems. This year the challenge includes two tracks: hand pose estimation and hand shape estimation. For the hand pose estimation track, participants will be provided with calibrated stereo hand-crop videos and MANO shape parameters; the expected results are the MANO pose parameters. For the hand shape estimation track, participants will be provided with calibrated stereo hand-crop videos, with each video assumed to capture a single subject; the expected result is the MANO shape parameters, which will be evaluated using vertex errors in the neutral pose.
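
To make the shape-track metric concrete, below is a minimal sketch of the neutral-pose vertex error, assuming the MANO layer from the smplx package (the official hand_tracking_toolkit may expose its own layer, and its evaluation code is authoritative). The model path and tensor shapes are illustrative assumptions.

    # Minimal sketch of the neutral-pose vertex error for the shape track.
    # Assumes the `smplx` and `torch` packages; the official evaluation in the
    # hand_tracking_toolkit is authoritative and may differ.
    import smplx
    import torch

    def neutral_pose_vertex_error(betas_pred, betas_gt, model_path="models/mano"):
        """Mean per-vertex L2 error between two MANO shapes, both posed with
        zero pose so that only the shape difference is measured."""
        mano = smplx.create(model_path, model_type="mano", use_pca=False)
        zero_orient = torch.zeros(1, 3)  # global orientation (axis-angle)
        zero_pose = torch.zeros(1, 45)   # 15 joints x 3 axis-angle params
        v_pred = mano(betas=betas_pred, global_orient=zero_orient, hand_pose=zero_pose).vertices
        v_gt = mano(betas=betas_gt, global_orient=zero_orient, hand_pose=zero_pose).vertices
        return (v_pred - v_gt).norm(dim=-1).mean().item()

    # Hypothetical usage with 10-dimensional MANO shape vectors:
    err = neutral_pose_vertex_error(torch.randn(1, 10), torch.randn(1, 10))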

Check out the following links to get started

Challenge website: https://eval.ai/web/challenges/challenge-page/2333/overview

Toolkit: https://github.com/facebookresearch/hand_tracking_toolkit/tree/main

Grasp Motion

Grasp motion generation for human-like multi-fingered hands has wide applications in animation, robotic grasping, mixed-reality interaction, and beyond. We therefore design a grasp motion generation challenge that aims to produce physically plausible grasp motion trajectories conditioned on 3D input objects. The challenge is built on the GraspM3 dataset. Baselines for grasp motion generation will be provided prior to the challenge.

Important Dates

  • Submission deadline for results: October 10, 2025 (11:59PM PST)
  • Results will be shared during the HANDS workshop at ICCV 2025

Rules

  • The evaluation process is conducted in Isaac Gym, and the test set objects are not visible to participants.
  • Participants are allowed to adjust simulation parameters, provided they clearly specify all modifications made to the environment. However, please note that altering these parameters may compromise the integrity of the evaluation setup and could result in rollout failures.
  • For fair comparisons, only methods trained on the datasets provided by this challenge are eligible to win.
  • Participants may not use objects outside the dataset for training, fine-tuning, self-supervised pretraining, or any other form of method development.
  • However, participants may use objects from the Objaverse dataset or other datasets to evaluate their algorithms before submission.
  • Participants are required to generate grasping sequences from randomized initial hand poses and lift the object by a certain distance (a simplified post-hoc success check is sketched below).
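
As an illustration of the last rule, here is a minimal post-hoc success check on a rolled-out object trajectory. The 0.10 m threshold and the (T, 3) trajectory layout are our assumptions; the official check runs inside the organizers' Isaac Gym environment.

    # Hypothetical check that a rollout lifted the object far enough.
    # Threshold and trajectory layout are assumptions, not the official criterion.
    import numpy as np

    def lifted(object_positions: np.ndarray, min_lift: float = 0.10) -> bool:
        """object_positions: (T, 3) world-frame object center over the rollout.
        Succeeds if the final height exceeds the initial height by min_lift."""
        rise = object_positions[-1, 2] - object_positions[0, 2]
        return bool(rise >= min_lift)

    traj = np.zeros((100, 3))
    traj[:, 2] = np.linspace(0.0, 0.15, 100)  # object rises 0.15 m
    print(lifted(traj))  # True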

Check out the following links to get started

Challenge instructions: https://github.com/DexGraspMotionChallenge/DexGraspMotionChallenge2025/wiki

Toolkit: https://github.com/DexGraspMotionChallenge/DexGraspMotionChallenge2025

GraspM3 Dataset: https://lihaoming45.github.io/GraspM3/index.html

HO Tracker

Enabling dexterous robotic hands to perform complex operations that align with human actions is of great significance. This challenge seeks algorithms that transfer human hand-object manipulation trajectories (e.g., bimanual pen capping) to dexterous robotic hands in simulation, aiming to reproduce physically plausible interactions. Specifically, given a reference motion-capture sequence, including the 6D poses of the object(s) and hand(s) as well as the angle of each finger joint, the algorithm should generate dexterous-hand actions that fulfill the desired object motion in the simulator (Isaac Gym). Reference motions, estimated via vision-based methods (e.g., MoCap), may contain noise. Participants should develop their algorithms based on the open-source data from OakInk2 and GigaHand. The tasks include manipulations involving (a hypothetical data layout for reference sequences is sketched after the task list):

  • Single-hand manipulation with a single object.
  • Bimanual manipulation with a single object.
  • Bimanual manipulation with two separate objects (e.g., pen cap and body).
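
To make the input format concrete, the sketch below shows one plausible in-memory layout for a reference sequence. Field names, shapes, and conventions (e.g., quaternion ordering) are our assumptions, not the official schema; consult the challenge toolkit for the actual data format.

    # Hypothetical container for a reference manipulation sequence; the actual
    # schema is defined by the challenge toolkit.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class ReferenceMotion:
        object_poses: np.ndarray   # (T, n_objects, 7): xyz + wxyz quaternion per object
        wrist_poses: np.ndarray    # (T, n_hands, 7): 6D pose of each hand's wrist
        finger_angles: np.ndarray  # (T, n_hands, n_dofs): angle of each finger joint
        fps: float = 30.0          # capture frame rate (assumed)

        def __len__(self) -> int:
            return self.object_poses.shape[0]  # number of frames T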

Evaluations are based on trajectories from the private test sets of the OakInk2 and GigaHand datasets, which are currently withheld and will be publicly released around one week before the submission deadline.

Important Notes

  • Submission deadline for results: October 1, 2025 (11:59PM PST)
  • Upon submission, participants are required to provide a set of model weights that can perform rollout within our provided Isaac Gym simulation environment (an illustrative policy interface is sketched below).
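
Because submissions are rolled out by the organizers, a policy will need to map simulator observations to dexterous-hand joint targets at every step. The toy interface below is purely illustrative; the required entry point and the observation/action spaces are defined by the simulation toolkit.

    # Purely illustrative policy interface; obs/action spaces and the entry
    # point are defined by the organizers' Isaac Gym toolkit.
    import numpy as np

    class GraspPolicy:
        """Toy stateless policy: one linear layer from observation to joint
        targets. A real submission would load trained model weights."""
        def __init__(self, obs_dim: int, act_dim: int, seed: int = 0):
            rng = np.random.default_rng(seed)
            self.W = rng.standard_normal((act_dim, obs_dim)) * 0.01
            self.b = np.zeros(act_dim)

        def act(self, observation: np.ndarray) -> np.ndarray:
            return np.tanh(self.W @ observation + self.b)  # bounded joint targets

    # Hypothetical rollout loop driven by the provided environment:
    # obs = env.reset()
    # for _ in range(horizon):
    #     obs = env.step(policy.act(obs))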

Check out the following links to get started

Challenge instructions: https://handsworkshop.github.io/challenges/tracker2025.html

Simulation Toolkit: to be released on July 14

Source Sequences for Training: OakInk2 and GigaHand

Current SOTA model for start: ManipTrans

ARCTIC

Humans interact with various objects daily, making holistic 3D capture of these interactions crucial for modeling human behavior. Most methods for reconstructing hand-object interactions require pre-scanned 3D object templates, which are impractical in real-world scenarios. Recently, HOLD (Fan et al. CVPR’24) has shown promise in category-agnostic hand-object reconstruction but is limited to single-hand interaction.

Since we naturally interact with both hands, we host a bimanual category-agnostic reconstruction task where participants must reconstruct both hands and the object in 3D from a video clip, without relying on pre-scanned templates. This task is more challenging because bimanual manipulation exhibits severe hand-object occlusion and dynamic hand-object contact, leaving room for future development.


To benchmark this challenge, we adapt HOLD to the two-hand manipulation setting and use 9 videos from the ARCTIC dataset's rigid-object collection, one per object (excluding small objects such as scissors and phone), sourced from the ARCTIC test set. You will be provided with HOLD baseline skeleton code for the ARCTIC setting, as well as code to produce submission data for our evaluation server.

Important Notes

  • Submission deadline for results: October 9, 2025 (11:59PM CEST)
  • Results will be shared during the HANDS workshop at ICCV 2025

Rules

  • Participants cannot use groundtruth intrinsics, extrinsics, hand/object annotations, or object templates from ARCTIC.
  • Only use the provided pre-cropped ARCTIC images for the competition.
  • The test set groundtruth is hidden; submit predictions to our evaluation server for assessment (details coming soon).
  • Different hand trackers or methods to estimate object pose can be used if not trained on ARCTIC data.
  • Participants may need to submit code for rule violation checks.
  • The code must be reproducible by the organizers.
  • Reproduced results should match the reported results.
  • Participants may be disqualified if results cannot be reproduced by the organizers.
  • Methods must show non-trivial novelty; minor changes like hyperparameter tuning do not count.
  • Methods must outperform the current SOTA (BIGS) by at least 5% to be considered valid, avoiding small margin improvements due to numerical errors.
  • We reserve the right to determine if a method is valid and eligible for awards.
  • Submit early to avoid server issues. Only the final valid submission before the deadline will be counted.

Metric: We use the hand-relative Chamfer distance CD_h (lower is better) as the main metric for this competition; it is defined in the HOLD paper. For this two-hand setting, we average the left- and right-hand CD_h metrics.
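
For intuition, here is a simplified sketch of a symmetric Chamfer distance averaged over the two hands. The authoritative definition of CD_h, including the exact hand-relative alignment, units, and squaring, is in the HOLD paper; centering each point cloud on a wrist keypoint below is our simplification.

    # Simplified sketch of a symmetric Chamfer distance averaged over hands.
    # The official CD_h is defined in the HOLD paper; wrist-centering here is
    # a stand-in for the true hand-relative alignment.
    import numpy as np
    from scipy.spatial import cKDTree

    def chamfer(a: np.ndarray, b: np.ndarray) -> float:
        """Mean bidirectional nearest-neighbor distance between (N, 3) clouds."""
        d_ab = cKDTree(b).query(a)[0].mean()
        d_ba = cKDTree(a).query(b)[0].mean()
        return float(d_ab + d_ba)

    def cd_h_two_hands(obj_pred, obj_gt, wrists_pred, wrists_gt):
        """Average the hand-relative Chamfer over [left, right] wrist 3-vectors."""
        scores = [chamfer(obj_pred - wp, obj_gt - wg)
                  for wp, wg in zip(wrists_pred, wrists_gt)]
        return float(np.mean(scores))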

Support: For general tips on processing and improving HOLD, see the documentation in the HOLD repository. For other technical questions, raise an issue on the repository. Should you have any questions regarding the ARCTIC challenge (e.g., regarding the rules above), feel free to contact zicong.fan@inf.ethz.ch.

Check out the following links to get started

HOLD project page: https://zc-alexfan.github.io/hold

HOLD code: https://github.com/zc-alexfan/hold

Challenge instructions: https://github.com/zc-alexfan/hold/blob/master/docs/arctic.md

Leaderboard: https://arctic-leaderboard.is.tuebingen.mpg.de/leaderboard

ARCTIC dataset: https://arctic.is.tue.mpg.de/

Contact

hands2025@googlegroups.com