CosHand: Controlling the World
by Sleight of Hand

Columbia University
ECCV 2024 (Oral Presentation)

CosHand synthesizes an image of a future after a specific interaction (dotted blue mask) has occurred.

Abstract

Humans naturally build mental models of object interactions and dynamics, allowing them to imagine how their surroundings will change if they take a certain action. While generative models today have shown impressive results on generating/editing images unconditionally or conditioned on text, current methods do not provide the ability to perform object manipulation conditioned on actions, an important tool for world modeling and action planning.

We propose to learn an action-conditional generative model from unlabeled videos of human hands interacting with objects. The vast quantity of such data on the internet allows for efficient scaling, which can enable high-performing action-conditional models.

Given an image and the shape/location of a desired hand interaction, CosHand synthesizes an image of a future after the interaction has occurred. Experiments show that the resulting model predicts the effects of hand-object interactions well, with strong generalization, particularly to translation, stretching, and squeezing interactions of unseen objects in unseen environments. Further, CosHand can be sampled many times to predict multiple possible effects, modeling the uncertainty of forces in the interaction and environment. Finally, the method generalizes to different embodiments, including non-human hands, i.e., robot hands, suggesting that generative video models can be powerful models for robotics.
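To illustrate the multi-sample usage mentioned above, here is a minimal sketch in PyTorch, assuming a hypothetical coshand_sample wrapper around the sampler (the function name, signature, and placeholder body are illustrative and not part of the released code): drawing several samples with different random seeds yields a set of plausible futures for the same interaction.

      import torch

      def coshand_sample(image, hand_mask, query_mask, seed):
          """Hypothetical wrapper around the CosHand sampler (placeholder body)."""
          torch.manual_seed(seed)
          # ... the (assumed) conditional denoising loop would run here ...
          return torch.rand_like(image)  # placeholder future frame

      # Sampling multiple times with different seeds models the uncertainty
      # of forces in the interaction/environment.
      image = torch.rand(3, 256, 256)
      hand_mask = torch.rand(1, 256, 256)
      query_mask = torch.rand(1, 256, 256)
      futures = [coshand_sample(image, hand_mask, query_mask, seed=s) for s in range(4)]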

Method

We propose a novel approach that uses hands as the control signal for manipulating objects in an image. Given an image, the corresponding hand mask, and a query hand mask specifying the desired interaction, CosHand synthesizes an image with the interaction applied. This visual conditioning enables precise control over object interactions.
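As an illustration of how this conditioning could be wired up, below is a minimal sketch in PyTorch: the encoded input image and the two hand masks are stacked along the channel dimension with the noisy latent and passed to the denoiser. The ToyDenoiser module, tensor shapes, and channel layout are illustrative assumptions for exposition, not the actual CosHand implementation.

      import torch
      import torch.nn as nn

      # Toy stand-in for the denoiser; the real model is a learned diffusion
      # network, not this small convolutional module.
      class ToyDenoiser(nn.Module):
          def __init__(self, in_channels, out_channels=4):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(in_channels, 64, 3, padding=1),
                  nn.SiLU(),
                  nn.Conv2d(64, out_channels, 3, padding=1),
              )

          def forward(self, x, t):
              # Timestep conditioning is omitted in this toy stand-in.
              return self.net(x)

      B, C, H, W = 1, 4, 32, 32
      z_t = torch.randn(B, C, H, W)        # noisy latent of the future frame
      z_input = torch.randn(B, C, H, W)    # encoded input image (assumed latent shape)
      mask_hand = torch.rand(B, 1, H, W)   # hand mask in the input frame
      mask_query = torch.rand(B, 1, H, W)  # query hand mask of the desired interaction

      # Stack all conditioning along the channel dimension and predict the noise.
      denoiser = ToyDenoiser(in_channels=C + C + 1 + 1)
      cond = torch.cat([z_t, z_input, mask_hand, mask_query], dim=1)
      eps_pred = denoiser(cond, t=torch.tensor([500]))
      print(eps_pred.shape)  # torch.Size([1, 4, 32, 32])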

Results on the Something-Something V2 (Training) Dataset

We show that CosHand can perform complex manipulations on a variety of rigid and deformable objects, including squeezing a lemon, closing a drawer, rotating a bottle, and placing items inside cups. These interactions require an understanding of deformable and articulated objects, as well as of occlusion.

Testing In-the-Wild

We test CosHand on challenging in-the-wild images collected in our home and lab environments. CosHand remains robust in these scenarios, showcasing its strong generalization ability.

Testing on Robot Arms

Although CosHand is trained only on human hands, it generalizes to robot arms for simple actions. For example, CosHand reasonably predicts the results of robotic actions including moving objects around, picking up objects, unfolding cloth, and sweeping granular particles.

Comparing Conditioning Methods

We show that text conditioning is insufficient to model interactions, whereas hand conditioning allows for better control. Columns 1 & 2 show the input image, query caption, and output of text-conditional generation. Columns 3 & 4 show the input image, query hand mask, and output of CosHand. Column 5 shows the ground truth. Notice that CosHand achieves precise control (including the exact final location of the knife in row 1 and the precise squeezing motion in rows 2 & 3), resulting in an output that is more consistent with the ground truth.

BibTeX


      @misc{sudhakar2024controllingworldsleighthand,
        title={Controlling the World by Sleight of Hand}, 
        author={Sruthi Sudhakar and Ruoshi Liu and Basile Van Hoorick and Carl Vondrick and Richard Zemel},
        year={2024},
        eprint={2408.07147},
        archivePrefix={arXiv},
        primaryClass={cs.CV},
        url={https://arxiv.org/abs/2408.07147}, 
      }
    
The code for this website is adapted from Nerfies.