What we provide
We offer high-quality egocentric video datasets designed for training robot perception, manipulation, and multimodal reasoning models. Our data is captured using fisheye head-mounted and wrist-mounted cameras, providing synchronized first-person views of real-world tasks.

Core concepts
Understand the foundations behind our data and capture pipeline.

Egocentric data
Learn why first-person video is critical for embodied AI and robotics.
Capture hardware
Head-mounted and wrist-mounted fisheye camera systems.
Annotations
Action labels, hand pose, object states, and language grounding.
Dataset overview
Long-horizon real-world datasets for robot learning.
Using the data
Everything you need to integrate datasets into training and evaluation pipelines.

Data format
File structure, video streams, and annotation schemas.
Licensing
Commercial usage, training rights, and enterprise terms.
API & tooling
SDK reference
Programmatic access to datasets with Python tooling.
Quickstart
Load egocentric datasets and start training in minutes.
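As a rough illustration of what working with per-episode annotations can look like, here is a minimal Python sketch that parses action-label records from JSON into typed objects. The record fields (`start_frame`, `end_frame`, `label`) and the inline sample are hypothetical stand-ins, not the actual dataset schema; consult the Data format and SDK reference pages for the real layout.

```python
import json
from dataclasses import dataclass

# Hypothetical annotation record; field names are illustrative only,
# not the actual dataset schema.
@dataclass
class ActionSegment:
    start_frame: int
    end_frame: int
    label: str

def load_segments(raw: str) -> list[ActionSegment]:
    """Parse a JSON list of action-segment records into typed objects."""
    return [ActionSegment(**rec) for rec in json.loads(raw)]

# Inline sample standing in for a per-episode annotation file.
sample = '[{"start_frame": 0, "end_frame": 120, "label": "pick up mug"}]'
segments = load_segments(sample)
print(segments[0].label)  # → pick up mug
```

Typed records like this make downstream training code easier to validate than passing raw dictionaries around.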
