EgoPet: Egomotion and Interaction Data from an Animal's Perspective

arXiv 2024

Amir Bar1,2   Arya Bakhtiar2   Danny Tran2   Antonio Loquercio2  
Jathushan Rajasegaran2   Yann LeCun3   Amir Globerson1   Trevor Darrell2  
1Tel-Aviv University  2Berkeley AI Research  3New York University

[Paper]
[Data]
[Code]
[Bibtex]



Abstract

Animals perceive the world to plan their actions and interact with other agents to accomplish complex tasks, demonstrating capabilities that are still unmatched by AI systems. To advance our understanding and reduce the gap between the capabilities of animals and AI systems, we introduce a dataset of pet egomotion imagery with diverse examples of simultaneous egomotion and multi-agent interaction. Current video datasets separately contain egomotion and interaction examples, but rarely both at the same time. In addition, EgoPet offers a radically distinct perspective from existing egocentric datasets of humans or vehicles. We define two in-domain benchmark tasks that capture animal behavior, and a third benchmark to assess the utility of EgoPet as a pretraining resource for robotic quadruped locomotion, showing that models trained on EgoPet outperform those trained on prior datasets.




EgoPet

The EgoPet dataset is a collection of 6,646 video segments distilled from 819 unique videos; high-level statistics are provided in the figure below. The source videos were drawn predominantly from TikTok (482 videos), with the remaining 338 obtained from YouTube. The segments total approximately 84 hours, a substantial volume of data for in-depth analysis. On average, a segment spans 45.55 seconds, with considerable variability (standard deviation of 192.19 seconds), reflecting the range of contexts captured within the dataset, from brief encounters to prolonged interactions.
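
As a quick sanity check, the total duration follows directly from the segment count and mean length; a minimal sketch, with small rounding differences expected against the reported figures:

```python
# Rough consistency check of the reported statistics
# (figures taken from the text above; minor rounding differences are expected).
num_segments = 6646
mean_length_s = 45.55

total_hours = num_segments * mean_length_s / 3600
print(f"~{total_hours:.1f} hours")  # ~84.1 hours, in line with the ~84 hours reported
```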





EgoPet Tasks

To measure progress in modeling and learning from animals, we propose three new tasks that aim to capture perception and action: Visual Interaction Prediction (VIP), Locomotion Prediction (LP), and Vision to Proprioception Prediction (VPP). Together with these tasks, we provide annotated training and validation data used for downstream evaluation, and initial benchmark results.

Visual Interaction Prediction (VIP)

The VIP task aims to detect and classify animal interactions and is inspired by human-object interaction tasks. We temporally annotated a subset of the EgoPet videos with the start and end times of visual interactions and the category of the object of interaction. The categories, which include person, cat, and dog, were chosen based on how commonly they occurred as interaction targets (for the full list of categories, refer to the Supplementary Material of the paper).
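
A minimal sketch of how such a temporal annotation might be represented; the field names below are illustrative only and do not reflect the dataset's actual schema:

```python
from dataclasses import dataclass

@dataclass
class VIPAnnotation:
    """Illustrative record for one visual interaction (hypothetical schema)."""
    segment_id: str       # which EgoPet video segment the annotation belongs to
    start_time: float     # interaction start, in seconds from segment start
    end_time: float       # interaction end, in seconds from segment start
    target_category: str  # object of the interaction, e.g. "person", "cat", "dog"

example = VIPAnnotation(segment_id="tiktok_0001_clip_03",
                        start_time=2.4, end_time=7.1,
                        target_category="dog")
```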


Locomotion Prediction (LP)

The goal of the LP task is to predict the pet's future 4-second trajectory. This is useful for learning basic pet skills such as obstacle avoidance and navigation. We extracted pseudo ground truth trajectories using Deep Patch Visual Odometry (DPVO), the best-performing SLAM system on our dataset, and manually filtered inaccurate trajectories from the validation data to ensure high-quality evaluation.
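
A minimal sketch of one way to score predicted trajectories against the pseudo ground truth, using an average displacement error over the 4-second horizon; this is a common generic metric, not necessarily the exact metric used in the paper:

```python
import numpy as np

def average_displacement_error(pred_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
    """Mean Euclidean distance between predicted and pseudo-ground-truth
    future positions. Both arrays have shape (T, 3): T future steps, (x, y, z)."""
    assert pred_xyz.shape == gt_xyz.shape
    return float(np.linalg.norm(pred_xyz - gt_xyz, axis=-1).mean())

# Toy example: a prediction that drifts slightly from the DPVO trajectory.
gt = np.cumsum(np.tile([0.1, 0.0, 0.0], (16, 1)), axis=0)  # 16 steps straight ahead
pred = gt + np.random.default_rng(0).normal(0, 0.02, gt.shape)
print(average_displacement_error(pred, gt))
```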


Vision to Proprioception Prediction (VPP)

Finally, in the VPP task, we study EgoPet's utility for a downstream robotic task: legged locomotion. Given a video observation from a forward-facing camera mounted on a quadruped robot, the goal is to predict the features of the terrain perceived by the robot's proprioception along its trajectory. Making accurate predictions requires perceiving the landscape and anticipating the robot's controls. This differs from previous work on robot visual prediction, which requires conditioning on the current robot controls and is thus challenging to train at scale. To assess performance on this task, we gathered data with a quadruped robot; the data consists of paired videos and proprioception features, which are used for training and evaluation.
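
A minimal sketch of the kind of model this task calls for: a visual encoder followed by a regression head that maps camera frames to a proprioception feature vector, trained with an MSE loss on the paired data. The backbone, feature dimensions, and loss here are assumptions for illustration, not the paper's exact setup:

```python
import torch
import torch.nn as nn

class VisionToProprioception(nn.Module):
    """Predict proprioception features from forward-facing camera frames.
    The encoder and dimensions are placeholders, not the paper's architecture."""
    def __init__(self, feat_dim: int = 768, proprio_dim: int = 16):
        super().__init__()
        # any image/video encoder producing feat_dim features would do here
        self.encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, proprio_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(frames))

model = VisionToProprioception()
frames = torch.randn(4, 3, 224, 224)    # batch of camera frames
proprio_target = torch.randn(4, 16)     # paired proprioception features
loss = nn.functional.mse_loss(model(frames), proprio_target)
loss.backward()
```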





Paper


EgoPet: Egomotion and Interaction Data from an Animal's Perspective
Amir Bar, Arya Bakhtiar, Danny Tran, Antonio Loquercio
Jathushan Rajasegaran, Yann LeCun, Amir Globerson, Trevor Darrell

Hosted on arXiv




Acknowledgements

We thank Justin Kerr for helpful discussions. Many of the figures use images taken from web videos; for each figure, we include the URL of its source videos in the Supplementary Material of the paper. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant ERC HOLI 819080). Prof. Darrell's group was supported in part by DoD, including DARPA's LwLL and/or SemaFor programs, as well as BAIR's industrial alliance programs.