3,670 hours of first-person video from 931 participants in 9 countries.
Ego4D is a massively multi-institutional first-person video dataset led by Meta FAIR. It comprises 3,670 hours of egocentric video from 931 camera wearers in 9 countries, annotated for five benchmark tasks: episodic memory, hands and objects, audio-visual diarization, social interactions, and forecasting. Access requires signing a data use agreement.
LQS is our 7-dimension quality score, computed from the dataset's published statistics. See methodology →
Composite score computed from the 7 dimensions below: completeness, uniqueness, validation health, size adequacy, format compliance, label density, and class balance.
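A composite like this can be sketched as an equal-weight mean over the seven dimensions. This is an illustration only: the dimension names come from the list above, but the equal weighting and the 0–100 scale are assumptions, not the published LQS methodology.

```python
# Hypothetical sketch of a 7-dimension composite quality score.
# ASSUMPTIONS: equal weights and a 0-100 scale per dimension;
# the real LQS weighting is defined in the methodology page, not here.

DIMENSIONS = [
    "completeness",
    "uniqueness",
    "validation_health",
    "size_adequacy",
    "format_compliance",
    "label_density",
    "class_balance",
]

def composite_score(scores: dict[str, float]) -> float:
    """Equal-weight mean of the 7 dimension scores (assumed 0-100 each)."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimension scores: {missing}")
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

example = {d: 80.0 for d in DIMENSIONS}
print(composite_score(example))  # 80.0
```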
Common tasks and benchmarks where Ego4D is the default or competitive choice.
What's actually in the dataset — from the maintainer's published stats.
Ego4D is distributed under Ego4D License (DUA required). This is a third-party public dataset; LabelSets indexes and scores it but does not host or redistribute the data. Always verify current license terms with the maintainer before commercial use.
LabelSets sellers offer paid multimodal datasets with features that public datasets often can't provide:
Other entries in the Multimodal catalog.