[{"content":"","date":null,"permalink":"https://m3ed.io/tags/dataset/","section":"Tags","summary":"","title":"Dataset"},{"content":" Coordinate System #Transformations #The reference sensor in the dataset is the left prophesee camera, with the coordinate frame show in Fig. 1 (TODO).\nTo convert from the right OVC camera to the left prophesee camera, you could use the transformation stored in ovc/right/calib/T_to_prophesee_left, noted \\( {}^{pl} T_{or}\\). Hence, if we have a point \\(p_{or}\\) in the coordinates of ovc right, you can transform them by doing:\n$$ p_{pl} = {}^{pl} T_{or} \\ \\ p_{or} $$\n","date":null,"permalink":"https://m3ed.io/docs/","section":"M3ED","summary":"","title":"Docs"},{"content":"","date":null,"permalink":"https://m3ed.io/tags/docs/","section":"Tags","summary":"","title":"Docs"},{"content":"","date":null,"permalink":"https://m3ed.io/tags/documentation/","section":"Tags","summary":"","title":"Documentation"},{"content":" Multi-robot, Multi-Sensor, Multi-Environment Event Dataset Updates # 2025/03/01: The M3ED SLAM Challenge has been published. You have until June 9, 2025 to submit your solutions. See you all in Nashville 🎸! 2025/02/28: M3ED Rev. 1.2 was released! This version includes evo ground-truth pose files, and compressed local scans from FasterLIO. 2023/08/06: The code used to process M3ED has been released in the Github repo. All the h5 data files have been reprocessed to include the version attribute with the corresponding commit hash. We also improved the Data Overview section with a better description of the folder structure and the data files. 2023/07/27: M3ED Rev. 1.1 was released! This version includes several updates and fixes: fixed GT odometry relative to local map for long sequences, improved density of GT depth, improved semantic segmentation reprojection and add visualizations of the data. Additionally, a few sequences have been added. 2023/06/19: M3ED Rev. 1.0 was released at the CVPR 2023 Workshop on Event-based Vision 🎉! Overview # M3ED provides high-quality synchronized and labeled data from multiple platforms, including wheeled ground vehicles (car), legged robots (spot), and aerial robots (falcon), operating in challenging conditions such as driving along off-road trails, navigating through dense forests, and executing aggressive flight maneuvers.\nM3ED processed data, raw data, and code are available to download. Check out our Github repo for an overview on how the data is processed.\nDuration Sequences What researchers say about M3ED #Congrats to @KostasPenn \u0026amp; team at @GRASPlab @PENN! They created a multi-camera dataset for high-speed robotics with our Metavision® EVK4 HD. 
","date":null,"permalink":"https://m3ed.io/docs/","section":"M3ED","summary":"","title":"Docs"},{"content":"","date":null,"permalink":"https://m3ed.io/tags/docs/","section":"Tags","summary":"","title":"Docs"},{"content":"","date":null,"permalink":"https://m3ed.io/tags/documentation/","section":"Tags","summary":"","title":"Documentation"},{"content":" Multi-Robot, Multi-Sensor, Multi-Environment Event Dataset Updates # 2025/03/01: The M3ED SLAM Challenge has been published. You have until June 9, 2025 to submit your solutions. See you all in Nashville 🎸! 2025/02/28: M3ED Rev. 1.2 was released! This version includes evo ground-truth pose files and compressed local scans from FasterLIO. 2023/08/06: The code used to process M3ED has been released in the GitHub repo. All the h5 data files have been reprocessed to include the version attribute with the corresponding commit hash. We also improved the Data Overview section with a better description of the folder structure and the data files. 2023/07/27: M3ED Rev. 1.1 was released! This version includes several updates and fixes: fixed GT odometry relative to the local map for long sequences, improved density of GT depth, improved semantic segmentation reprojection, and added visualizations of the data. Additionally, a few sequences have been added. 2023/06/19: M3ED Rev. 1.0 was released at the CVPR 2023 Workshop on Event-based Vision 🎉! Overview # M3ED provides high-quality synchronized and labeled data from multiple platforms, including wheeled ground vehicles (car), legged robots (spot), and aerial robots (falcon), operating in challenging conditions such as driving along off-road trails, navigating through dense forests, and executing aggressive flight maneuvers.\nM3ED processed data, raw data, and code are available to download. Check out our GitHub repo for an overview of how the data is processed.\nWhat researchers say about M3ED #Congrats to @KostasPenn \u0026 team at @GRASPlab @PENN! They created a multi-camera dataset for high-speed robotics with our Metavision® EVK4 HD. The M3ED dataset tackles challenges like vibrations, segmentation \u0026 demanding scenarios for event cameras👉https://t.co/wu65XKxfT9 @IEEEorg pic.twitter.com/8sxHUWPwCB\n— Prophesee (@Prophesee_ai) July 26, 2023 M3ED overcame these shortcomings by providing comprehensive ground-truth depth and poses with HD stereo event data recorded in diverse scenarios.\n— Ghosh \u0026 Gallego, Event-Based Stereo Depth Estimation: A Survey High definition (HD) data are not available in these datasets until the appearance of M3ED, which utilizes a Prophesee EVK4 event camera with a spatial resolution of 1280 × 720 pixels.\n— Sun et al., EvTTC: An Event Camera Dataset for Time-to-Collision Estimation The largest event camera dataset containing multi-sensor data.\n— Das et al., Neurosim: A Fast Simulator for Neuromorphic Robot Perception M3ED acts as an informal successor to the MVSEC dataset. It is composed of 110 minutes of outdoor driving sequences, with high spatial definition stereo events (1280×720) and images (1280×800), point clouds from a 64-channel LiDAR at 10Hz with a maximum range of 120m.\n— Brebion et al., DELTA: Dense Depth from Events and LiDAR Using Transformer’s Attention The M3ED dataset provides data collected from event cameras on various platforms, such as quadruped robots, vehicles, and drones. Notably, it has the highest resolution among all datasets, with event cameras at 1280×720.\n— Kang et al., Temporal Stereo Matching From Event Cameras via Joint Learning With Stereoscopic Flow We verify that timestamps are accurately synchronized in M3ED, but we observe a noticeable misalignment in the timestamps of the RGB camera and the event camera in DSEC.\n— Das et al., Fast Feature Field (F3): A Predictive Representation of Events The M3ED dataset by Chaney et al. is the first multi-sensor event camera dataset specifically designed for high-speed dynamic motions in robotics.\n— Shi et al., Fusion techniques of frame and event cameras in autonomous driving: A review Derived datasets #Several research groups have built new datasets and benchmarks on top of M3ED:\n3EED: 3D bounding box annotations for M3ED’s drone and quadruped sequences, enabling 3D object detection across multiple embodied platforms. Li et al. Paper, Project page, GitHub, HuggingFace.\nPi3DET: The first cross-platform 3D detection benchmark, built upon M3ED with annotated LiDAR sequences across vehicle, drone, and quadruped platforms. Liang et al. Paper, Project page, GitHub, HuggingFace.\nT²CEF: A dense time-to-collision dataset built from M3ED with refined camera poses at 7 ms resolution, enabling high-speed collision prediction research. Bisulco et al. Paper, GitHub.\nM3ED-Semantic: A semantic segmentation subset of M3ED with per-frame segmentation masks across drone and quadruped sequences, supporting 11 semantic classes. Li et al. Paper, GitHub.\nEXPo: A large-scale event-based cross-platform semantic segmentation benchmark with 89k frames spanning vehicle, drone, and quadruped platforms from M3ED. Kong et al. Paper, Project page.\nM3ED-active: A curated split of M3ED’s indoor quadruped sequences that expose an active stereo pattern, enabling research on active stereo depth estimation with event cameras. Bartolomei et al. Paper, GitHub.\nRoboSense Track#5: A cross-platform 3D object detection challenge built on M3ED, where participants adapt vehicle-trained detectors to drone and quadruped platforms. Kong et al. 
Challenge page, GitHub, HuggingFace.\nContact us to add your dataset to this list!\nLicense #M3ED is released under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. You are allowed to share and adapt it under the condition that you give appropriate credit, indicate if changes were made, and distribute your contributions under the same license.\nRead the paper #You can access the paper from the CVPRW Proceedings.\n@InProceedings{Chaney_2023_CVPR, author = {Chaney, Kenneth and Cladera, Fernando and Wang, Ziyun and Bisulco, Anthony and Hsieh, M. Ani and Korpela, Christopher and Kumar, Vijay and Taylor, Camillo J. and Daniilidis, Kostas}, title = {M3ED: Multi-Robot, Multi-Sensor, Multi-Environment Event Dataset}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops}, month = {June}, year = {2023}, pages = {4015-4022} } ","date":null,"permalink":"https://m3ed.io/","section":"M3ED","summary":"","title":"M3ED"},{"content":"Congo has full support for Hugo taxonomies and will adapt to any taxonomy setup. Taxonomy listings like this one also support custom content to be displayed above the list of terms.\nThis area could be used to add some extra descriptive text to each taxonomy. Check out the advanced tag below to see how to take this concept even further.\n","date":null,"permalink":"https://m3ed.io/tags/","section":"Tags","summary":"","title":"Tags"},{"content":" Kenneth Chaney is the Associate Director of the Penn Advanced Research Computing Center and an adjunct professor in CIS at the University of Pennsylvania. His research focuses on robotics perception and event-based cameras. Fernando Cladera is a PhD Student in CIS at the University of Pennsylvania. He focuses on aerial robotics, event-based perception, and robotics hardware. Ziyun Wang is a PhD Student in CIS at the University of Pennsylvania. He is interested in computer vision and its application in robotics. Anthony Bisulco is a PhD Student in ESE at the University of Pennsylvania, working on robotics, machine learning, and hardware systems. M. Ani Hsieh is an Associate Professor in MEAM at the University of Pennsylvania. She leads the ScalAR Lab, which focuses on fundamental research problems in robotics that lie at the intersection of robotics, nonlinear dynamical systems theory, and uncertainty. Christopher Korpela is the Robotics and Autonomy Program Manager at the Johns Hopkins Applied Physics Laboratory. Before joining APL, Chris was an Associate Professor and co-founder of the Robotics Research Center at the United States Military Academy at West Point. Vijay Kumar is the Nemirovsky Family Dean at Penn Engineering. Vijay’s research group works on creating autonomous ground and aerial robots, designing bio-inspired algorithms for collective behaviors, and developing robot swarms. Camillo J. Taylor is the Raymond S. Markowitz President’s Distinguished Professor in CIS at the University of Pennsylvania. CJ’s research interests focus on computer vision and robotics, including reconstruction of 3D models from images, vision-guided robot navigation, and scene understanding. Kostas Daniilidis is the Ruth Yalom Stone Professor in CIS at the University of Pennsylvania. Kostas’ research group focuses on computer vision and robotic perception, addressing the challenges in the perception of motion and space, such as the geometric design of cameras and the interplay of geometry and appearance in perception tasks. 
","date":null,"permalink":"https://m3ed.io/about/","section":"M3ED","summary":"","title":"About"},{"content":"","date":null,"permalink":"https://m3ed.io/tags/about/","section":"Tags","summary":"","title":"About"},{"content":"AWS #M3ED files are hosted in Amazon AWS arn:aws:s3:::m3ed-dist.\nHTTPS #You may download each sequence individually by clicking on the link in Sequences.\nAdditionally, we provide a python script to download the data.\n$ python download_m3ed.py --help usage: download_m3ed.py [-h] [--vehicle VEHICLE] [--environment ENVIRONMENT] [--to_download TO_DOWNLOAD] [--train_test TRAIN_TEST] [--yaml YAML] [--output_dir OUTPUT_DIR] [--no_download] options: -h, --help show this help message and exit --vehicle VEHICLE Type of vehicle to download: car, falcon, spot. If not provided, all vehicles will be downloaded --environment ENVIRONMENT Type of environment to download: urban, indoor, forest, outdoor. If not provided, all environments will be downloaded --to_download TO_DOWNLOAD Data to download: data, data_videos, depth_gt, pose_gt, gt_vids, semantics, semantics_vids, global_pcd, raw_data --train_test TRAIN_TEST Train or test data to download: train, test. If not provided, both train and test will be downloaded --yaml YAML Path to dataset_lists.yaml. If not provided, it will be downloaded from the repository --output_dir OUTPUT_DIR Output directory to download the data --no_download Do not download the data, just print the list of files Alternatively, you can use the bucket directly from AWS. The file structure is specified in the dataset_list.yaml\n","date":null,"permalink":"https://m3ed.io/download/","section":"M3ED","summary":"","title":"Download"},{"content":"","date":null,"permalink":"https://m3ed.io/tags/download/","section":"Tags","summary":"","title":"Download"},{"content":"","date":null,"permalink":"https://m3ed.io/tags/researchers/","section":"Tags","summary":"","title":"Researchers"},{"content":"Directory Tree #M3ED/ ├── input/ │ ├── raw_bags/ │ │ ├── car_urban_camera_calib_1.bag │ │ ├── tower_imu_calib_1.bag │ │ ├── car_urban_day_city_hall.bag │ │ └── ... │ └── lidar_calibrations/ │ ├── car_urban_day_city_hall.npz │ └── ... └── processed/ ├── car_urban_day_city_hall/ │ ├── car_urban_day_city_hall.h5 │ ├── events_gray.{avi, mp4} │ ├── rgb.{avi, mp4} │ ├── car_forest_data_1_stats.yaml │ ├── car_urban_day_city_hall_pose_gt.h5 │ ├── car_urban_day_city_hall_depth_gt.h5 │ ├── depth.{avi, mp4} │ ├── depth_events.{avi, mp4} │ ├── car_urban_day_city_hall_semantics.h5 │ ├── car_urban_day_city_hall_semantics.{avi, mp4} │ ├── car_urban_day_city_hall.pcd │ └── local_scans/ ├── car_urban_camera_calib_1/ │ ├── car_urban_camera_calib_1.h5 │ ├── camchain.yaml │ ├── report-cam.pdf │ └── results-cam.txt ├── tower_imu_calib_1/ │ ├── imu_chain.yaml │ ├── imu_report.pdf │ └── imu_results.txt └── ... Input Files #These are the files that are used by the build system to generate the h5 files.\nraw_bags: stores the unprocessed data obtained from the sensor tower. These files are not time corrected and events are stored in binary format. lidar_calibrations (npz): stores the manually tuned LiDAR calibrations. Multiple bags can share the same calibration file. Processed Files #Data Files (Train) #Each sequence is recorded individually in the processed folder.\ndata.h5: time synchronized and event decoded. Use these files to access the event camera data, the grayscale and RGB imagers, the lidar, and the IMU. events_gray.{avi, mp4}: recording of the event cameras and the grayscale imagers. 
Alternatively, you can use the bucket directly from AWS. The file structure is specified in the dataset_list.yaml\n","date":null,"permalink":"https://m3ed.io/download/","section":"M3ED","summary":"","title":"Download"},{"content":"","date":null,"permalink":"https://m3ed.io/tags/download/","section":"Tags","summary":"","title":"Download"},{"content":"","date":null,"permalink":"https://m3ed.io/tags/researchers/","section":"Tags","summary":"","title":"Researchers"},{"content":"Directory Tree #M3ED/ ├── input/ │ ├── raw_bags/ │ │ ├── car_urban_camera_calib_1.bag │ │ ├── tower_imu_calib_1.bag │ │ ├── car_urban_day_city_hall.bag │ │ └── ... │ └── lidar_calibrations/ │ ├── car_urban_day_city_hall.npz │ └── ... └── processed/ ├── car_urban_day_city_hall/ │ ├── car_urban_day_city_hall.h5 │ ├── events_gray.{avi, mp4} │ ├── rgb.{avi, mp4} │ ├── car_forest_data_1_stats.yaml │ ├── car_urban_day_city_hall_pose_gt.h5 │ ├── car_urban_day_city_hall_depth_gt.h5 │ ├── depth.{avi, mp4} │ ├── depth_events.{avi, mp4} │ ├── car_urban_day_city_hall_semantics.h5 │ ├── car_urban_day_city_hall_semantics.{avi, mp4} │ ├── car_urban_day_city_hall.pcd │ └── local_scans/ ├── car_urban_camera_calib_1/ │ ├── car_urban_camera_calib_1.h5 │ ├── camchain.yaml │ ├── report-cam.pdf │ └── results-cam.txt ├── tower_imu_calib_1/ │ ├── imu_chain.yaml │ ├── imu_report.pdf │ └── imu_results.txt └── ... Input Files #These are the files that are used by the build system to generate the h5 files.\nraw_bags: stores the unprocessed data obtained from the sensor tower. These files are not time-corrected, and events are stored in binary format. lidar_calibrations (npz): stores the manually tuned LiDAR calibrations. Multiple bags can share the same calibration file. Processed Files #Data Files (Train) #Each sequence is stored in its own folder under processed/.\ndata.h5: time-synchronized and event-decoded. Use these files to access the event camera data, the grayscale and RGB imagers, the LiDAR, and the IMU. events_gray.{avi, mp4}: recording of the event cameras and the grayscale imagers. depth_gt.h5: ground truth depth and poses from FasterLIO, for the left event camera. Use these files to access ground truth depth. pose_gt.h5: similar to depth_gt.h5, but without the depth information to make the file lighter. depth_gt.{avi, mp4}: recording of the depth on the frame of the left event camera. depth_gt_events.{avi, mp4}: recording of the GT depth on the frame of the left event camera with events overlaid. stats.yaml: statistics for the file. semantics.h5: InternImage semantic outputs. semantics.{avi, mp4}: recording of semantics on the frame of the left event camera with events overlaid. global.pcd: integrated point cloud for the whole sequence. local_scans: folder with individual scans from FasterLIO. For videos, the extension defines the encoder used:\navi: lossless (FFV1) video. Lets you see details and noise that may be lost in compressed videos. mp4: compressed with H264. Use these files for quick visualization. Data Files (Test) #Test files only provide:\ndata.h5: time-synchronized and event-decoded. Contains only the events, the grayscale imagers, and the IMU. All the other sensors have been stripped. events_gray.{avi, mp4}: recording of the event cameras and the grayscale imagers. Calibration Files # You should not download the calibration files if you are only using the data recordings. These calibrations are the raw outputs of Kalibr, provided here for transparency. We embedded a processed version of the calibration in each data file. See this thread for more information. Camera Calibration #Camera calibration is performed with Kalibr multiple camera calibration. For each one of the recordings, the following results are provided:\ndata.h5: time-synchronized and event-decoded. Kalibr outputs: camchain.yaml: resulting camera chain with all the transformations in yaml format. results-cam.txt: results summary as a text file. report-cam.pdf: report in pdf format with all the plots. IMU Calibration Files #Camera-to-IMU calibration is performed with Kalibr camera-IMU calibration. For each one of the recordings, the following results are provided.\nPlease note that the IMU-to-camera calibration is only performed using the grayscale and RGB sensors from the OVC, and not the event cameras.\ndata.h5: time-synchronized and event-decoded. Kalibr outputs: imu_chain.yaml: resulting chain with the IMU-to-camera transformations. imu_results.txt: results summary as a text file. imu_report.pdf: report in pdf format with all the plots. ","date":null,"permalink":"https://m3ed.io/data_overview/folderst/","section":"Data Overview","summary":"Understand the folder structure of M3ED. Particularly useful when downloading the whole dataset.","title":"Folder Structure"},{"content":"","date":null,"permalink":"https://m3ed.io/tags/overview/","section":"Tags","summary":"","title":"Overview"},{"content":"","date":null,"permalink":"https://m3ed.io/tags/structure/","section":"Tags","summary":"","title":"Structure"},{"content":"","date":null,"permalink":"https://m3ed.io/tags/contact/","section":"Tags","summary":"","title":"Contact"},{"content":"Bugs, PRs and Feature Requests # Given enough eyeballs, all bugs are shallow.\n— Eric S. Raymond\nM3ED follows an open development process. If you find a bug or would like to request a new feature, please fill out an issue on GitHub. 
Do not hesitate to send a PR with improvements!\nContact Us #Want to contact us?\nKen Chaney: chaneyk [at] seas.upenn.edu Fer Cladera: fclad [at] seas.upenn.edu Ziyun Wang: ziyunw [at] seas.upenn.edu Template Attribution #This website was created with Hugo using the Congo template.\n","date":null,"permalink":"https://m3ed.io/contact-us/","section":"M3ED","summary":"","title":"Contact Us"},{"content":" Visit the Codabench submission site to submit your solution. About #The goal of this challenge is to leverage the high temporal and spatial resolution of HD event cameras for SLAM and pose estimation applications. This challenge is part of the CVPR 2025 Workshop on Event-based Vision.\nTracks # Event (+ IMU): if you obtain your pose using a single or a pair of event cameras, with or without IMU. Event + Mono (+ IMU): if you obtain your pose using a single or a pair of event cameras fused with monocular global shutter cameras, with or without IMU. Participation #Task #Your goal is to obtain the pose of the reference event camera in a set of challenging settings: urban driving (car), fast UAV flight (falcon), and legged locomotion (Spot).\nWhat should I submit? #You should submit a single zip file containing the poses (position + quaternions) for the following sequences:\nSequence Test data Reference timestamps car_urban_day_ucity_big_loop h5 txt falcon_outdoor_day_fast_flight_3 h5 txt spot_outdoor_day_penno_building_loop h5 txt For each one of these sequences, provide a file sequence.txt, with poses expressed in the camera frame, in the following format:\ntimestamp tx ty tz qx qy qz qw where:\ntimestamp: timestamp in seconds. You should provide a pose estimate for the timestamps in the Reference timestamps file. tx ty tz: position in meters. qx qy qz qw: orientation quaternion. M3ED uses microseconds (us) as its default time unit. Be sure to scale the time unit accordingly!
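As a minimal sketch of the required conversion (the function name and array shapes are illustrative, not part of the challenge tooling), writing microsecond pose estimates to a submission file could look like:
def write_submission(path, ts_us, positions, quaternions):
    # ts_us: N timestamps in microseconds (M3ED's native time unit)
    # positions: N rows of (tx, ty, tz) in meters
    # quaternions: N rows of (qx, qy, qz, qw)
    with open(path, "w") as f:
        for t_us, (tx, ty, tz), (qx, qy, qz, qw) in zip(ts_us, positions, quaternions):
            t_s = t_us * 1e-6  # scale microseconds to the required seconds
            f.write(f"{t_s:.6f} {tx} {ty} {tz} {qx} {qy} {qz} {qw}\n")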
Data #This challenge is based on M3ED version 1.2. You can check the current version of an HDF5 file with h5dump -a /version file.h5.\nYou can use all sequences available in M3ED for development. We provide ground truth pose data in TUM trajectory file format so you can evaluate directly with evo.\nFor example, you can use the following sequence to train your algorithm:\nSequence Data Reference timestamps GT falcon_indoor_flight_1 h5 txt txt Evaluation #You will be evaluated on the accuracy of your pose using evo, computing the APE (absolute pose error) of your pose with respect to the ground truth. The APE values of the three sequences are added together to form your final score.\nYou can run evo locally to evaluate the accuracy of your algorithm.\nexport SEQ=falcon_outdoor_day_fast_flight_3 evo_traj tum result/${SEQ}.txt \\ -p --ref=reference/${SEQ}_pose_evo_gt.txt \\ --align --t_max_diff=0.01 --correct_scale \\ --sync --downsample 100 Timeline # March 1: Challenge opens for submissions. June 9 (previously June 2): Challenge ends. June 9 (previously June 8): The top submissions should send their code for manual evaluation, report, and posters. June 10 (previously June 4): Winners announced. June 12: Posters presented at the CVPR Workshop on Event-based Vision. After the workshop: The top submissions are invited to collaborate on a report for the challenge. Terms #Participants are not required (but encouraged) to release their code. Nonetheless, the organizers of the challenge will request a copy of the code and instructions to run it locally to validate the results submitted by the authors. Failure to provide code to the organizers is a valid reason for disqualification.\nWe request that participants do not release the results of their submissions, as we may use this dataset for future challenges.\n","date":"14 August 2020","permalink":"https://m3ed.io/slam_challenge/","section":"M3ED","summary":"","title":"CVPRW 2025 SLAM Challenge"},{"content":"For this example, we will use the file car_urban_day_horse.h5.\nAttributes #$ h5dump -n 1 car_urban_day_horse.h5 | grep attr attribute /creation_date attribute /raw_bag_name attribute /version creation_date: time when the file was processed. raw_bag_name: raw bag that was used to create this file. version: commit hash of the repo when the data was created. Groups #Event Data #Event data was recorded with Prophesee EVK4 cameras, which have IMX636ES sensors. Event data is stored in the groups /prophesee/left and /prophesee/right.\nThe data from the event cameras is decoded, but not undistorted. We provide the distortion coefficients and models from calibration.\n$ h5ls -r car_urban_day_horse.h5/prophesee/right /calib Group /calib/T_to_prophesee_left Dataset {4, 4} /calib/camera_model Dataset {SCALAR} /calib/distortion_coeffs Dataset {4} /calib/distortion_model Dataset {SCALAR} /calib/intrinsics Dataset {4} /calib/resolution Dataset {2} /ms_map_idx Dataset {28661} /p Dataset {433702903/Inf} /t Dataset {433702903/Inf} /x Dataset {433702903/Inf} /y Dataset {433702903/Inf}\ncalib: resolution: (width, height) resolution. T_to_prophesee_left: Transformation from the frame of the camera to the frame of the Prophesee left event camera. camera_model/intrinsics: camera model and intrinsic coefficients used in Kalibr. distortion_model/distortion_coeffs: distortion model and coefficients used in Kalibr. ms_map_idx: precomputed index into the event arrays for each millisecond of the sequence. (x, y, t, p): event array with (x, y) coordinate, timestamp, and polarity.
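For instance, a minimal sketch of slicing events with h5py (assuming, as an illustration, that ms_map_idx[k] holds the index of the first event at or after millisecond k of the sequence):
import h5py

with h5py.File("car_urban_day_horse.h5", "r") as f:
    g = f["/prophesee/left"]
    # all events within the millisecond [1000 ms, 1001 ms) of the sequence
    i0, i1 = g["ms_map_idx"][1000], g["ms_map_idx"][1001]
    x, y = g["x"][i0:i1], g["y"][i0:i1]
    t = g["t"][i0:i1]  # timestamps in microseconds
    p = g["p"][i0:i1]  # polarities
print(f"{len(t)} events in this millisecond")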
OVC #This data was recorded with the OVC 3B in the group /ovc.\nGrayscale Imagers / RGB #The OVC imagers have AR0144CS sensors. The grayscale stereo pair is stored in the groups /ovc/left and /ovc/right. The RGB data is stored in the group /ovc/rgb.\nAdditionally, /ts, /ts_map_prophesee_left_t, /ts_map_prophesee_right_t provide useful timing information when working with the imagers.\n$ h5ls -r car_urban_day_horse.h5/ovc/left ... /right Group /right/calib Group /right/calib/T_to_prophesee_left Dataset {4, 4} /right/calib/camera_model Dataset {SCALAR} /right/calib/distortion_coeffs Dataset {4} /right/calib/distortion_model Dataset {SCALAR} /right/calib/intrinsics Dataset {4} /right/calib/resolution Dataset {2} /right/data Dataset {714/Inf, 800, 1280, 1} /ts Dataset {714/Inf} /ts_map_prophesee_left_t Dataset {714} /ts_map_prophesee_right_t Dataset {714}\ncalib: Camera calibration information. Please refer to the Event Data section for more information. data: image data with shape (n, h, w, c), where n is the image index, (h, w) are the pixel coordinates, and c is the number of channels (mono or RGB). ts: timestamps of the images. ts_map_prophesee_{left, right}_t: mapping to the index of event camera data for each image. IMU #IMU data was recorded with a VN100T mounted on the OVC and is stored in the group /ovc/imu.\n$ h5ls -r car_urban_day_horse.h5/ovc/imu /accel Dataset {11424/Inf, 3} /calib Group /calib/T_to_prophesee_left Dataset {4, 4} /omega Dataset {11424/Inf, 3} /ts Dataset {11424/Inf}\ncalib/T_to_prophesee_left: Transformation from the frame of the IMU to the frame of the Prophesee left event camera. accel: Linear acceleration provided by the VN100T IMU. omega: Angular velocity provided by the VN100T IMU. ts: Timestamp for the IMU samples. Ouster #This data was recorded with an Ouster OS1-64-U in the group /ouster.\n$ h5ls -r car_urban_day_horse.h5/ouster /calib Group /calib/T_to_prophesee_left Dataset {4, 4} /data Dataset {286/Inf, 128, 12609} /imu Group /imu/accel Dataset {2930/Inf, 3} /imu/omega Dataset {2930/Inf, 3} /imu/ts Dataset {2930/Inf} /metadata Dataset {SCALAR} /ts_end Dataset {286/Inf} /ts_start Dataset {286/Inf} /ts_start_map_prophesee_left_t Dataset {286} /ts_start_map_prophesee_right_t Dataset {286}\ncalib/T_to_prophesee_left: Transformation from the frame of the Ouster to the frame of the Prophesee left event camera. data: binary data output from the sensor. We use the LEGACY data format. metadata: metadata required to decode the data. ts_start/ts_end: timestamps corresponding to the start and end of the Ouster sweep. ts_start_map_prophesee_{left, right}_t: mapping to the index of event camera data for the start of the Ouster sweep. imu: IMU included in the Ouster. accel: Linear acceleration provided by the Ouster IMU. omega: Angular velocity provided by the Ouster IMU. ts: Timestamp for the IMU samples. ","date":null,"permalink":"https://m3ed.io/data_overview/datafiles/","section":"Data Overview","summary":"Anatomy of _data.h5 with events, grayscale and RGB imagers, LiDAR, and IMU data.","title":"Data Files"},{"content":"This section provides an overview of how to access and use the data of M3ED.\n","date":null,"permalink":"https://m3ed.io/data_overview/","section":"Data Overview","summary":"","title":"Data Overview"},{"content":"This is the advanced tag. Just like other listing pages in Congo, you can add custom content to individual taxonomy terms and it will be displayed at the top of the term listing. 🚀\nYou can also use these content pages to define Hugo metadata like titles and descriptions that will be used for SEO and other purposes.\n","date":null,"permalink":"https://m3ed.io/tags/advanced/","section":"Tags","summary":"","title":"advanced"},{"content":"","date":null,"permalink":"https://m3ed.io/categories/","section":"Categories","summary":"","title":"Categories"},{"content":"Data Recordings #Car # Urban Day # Sequence Data GT Semantics Others city_hall h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats horse h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats penno_big_loop h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats penno_small_loop h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats rittenhouse h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. 
Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats schuylkill_tunnel\ntest sequence h5\nEvents Vids:\nH264/FFV1\n- - stats ucity_big_loop\ntest sequence h5\nEvents Vids:\nH264/FFV1\n- - stats ucity_small_loop h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats Urban Night # Sequence Data GT Semantics Others city_hall h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats penno_big_loop h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats penno_small_loop h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats penno_small_loop_darker h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats rittenhouse h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats schuylkill_tunnel\ntest sequence h5\nEvents Vids:\nH264/FFV1\n- - stats ucity_big_loop\ntest sequence h5\nEvents Vids:\nH264/FFV1\n- - stats ucity_small_loop h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats Forest # Sequence Data GT Semantics Others into_ponds_long h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats into_ponds_short h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats sand_1 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats sand_2\ntest sequence h5\nEvents Vids:\nH264/FFV1\n- - stats tree_tunnel h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats Falcon # Indoor # Sequence Data GT Semantics Others flight_1 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats flight_2 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats flight_3 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats Outdoor Day # Sequence Data GT Semantics Others fast_flight_1 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats fast_flight_2 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. 
Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats fast_flight_3\ntest sequence h5\nEvents Vids:\nH264/FFV1\n- - stats penno_cars h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats penno_parking_1 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats penno_parking_2 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats penno_parking_3\ntest sequence h5\nEvents Vids:\nH264/FFV1\n- - stats penno_plaza h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats penno_trees h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats Outdoor Night # Sequence Data GT Semantics Others high_beams h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats penno_parking_1 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats penno_parking_2 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats Forest # Sequence Data GT Semantics Others into_forest_1 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats into_forest_2 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats into_forest_4 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats road_1 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats road_2 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats road_3\ntest sequence h5\nEvents Vids:\nH264/FFV1\n- - stats road_forest h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats up_down h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats Spot # Indoor # Sequence Data GT Semantics Others building_loop h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats obstacles h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats stairs h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats stairwell h5\nEvents 
Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats Outdoor Day # Sequence Data GT Semantics Others art_plaza_loop h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats penno_building_loop\ntest sequence h5\nEvents Vids:\nH264/FFV1\n- - stats penno_short_loop h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats rocky_steps h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats skatepark_1 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats skatepark_2 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats skatepark_3\ntest sequence h5\nEvents Vids:\nH264/FFV1\n- - stats srt_green_loop h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats srt_under_bridge_1 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats srt_under_bridge_2 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 h5\nSem. Vids:\nH264/FFV1 pcd\nscans\nraw bag\nstats Outdoor Night # Sequence Data GT Semantics Others penno_building_loop\ntest sequence h5\nEvents Vids:\nH264/FFV1\n- - stats penno_plaza_lights h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats penno_short_loop h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats Forest # Sequence Data GT Semantics Others easy_1 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats easy_2 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats hard h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats road_1 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats road_2\ntest sequence h5\nEvents Vids:\nH264/FFV1\n- - stats road_3 h5\nEvents Vids:\nH264/FFV1\nRGB Vids:\nH264/FFV1 depth h5\npose h5\nevo pose\nDepth Vids:\nH264/FFV1\nWith events:\nH264/FFV1 - pcd\nscans\nraw bag\nstats Calibrations # You should not download the calibration files if you are only using the data recordings. These calibrations are the raw outputs of Kalibr, provided here for transparency. We embedded a processed version of the calibration in each data file. 
See this thread for more information. dataset_list.yaml indicates which calibration was used for each sequence.\nCamera Calibrations #Car # Name Data Kalibr Output Others forest_1 h5 cam_chain\nresults\nreport\nraw bag forest_2 h5 cam_chain\nresults\nreport\nraw bag forest_3 h5 cam_chain\nresults\nreport\nraw bag forest_4 h5 cam_chain\nresults\nreport\nraw bag forest_5 h5 cam_chain\nresults\nreport\nraw bag forest_6 h5 cam_chain\nresults\nreport\nraw bag urban_day_1 h5 cam_chain\nresults\nreport\nraw bag urban_day_2 h5 cam_chain\nresults\nreport\nraw bag urban_day_3 h5 cam_chain\nresults\nreport\nraw bag urban_day_4 h5 cam_chain\nresults\nreport\nraw bag urban_day_5 h5 cam_chain\nresults\nreport\nraw bag urban_day_6 h5 cam_chain\nresults\nreport\nraw bag urban_day_7 h5 cam_chain\nresults\nreport\nraw bag urban_day_8 h5 cam_chain\nresults\nreport\nraw bag urban_night_1 h5 cam_chain\nresults\nreport\nraw bag Falcon # Name Data Kalibr Output Others forest_1 h5 cam_chain\nresults\nreport\nraw bag forest_2 h5 cam_chain\nresults\nreport\nraw bag forest_3 h5 cam_chain\nresults\nreport\nraw bag forest_4 h5 cam_chain\nresults\nreport\nraw bag forest_5 h5 cam_chain\nresults\nreport\nraw bag forest_6 h5 cam_chain\nresults\nreport\nraw bag forest_7 h5 cam_chain\nresults\nreport\nraw bag forest_8 h5 cam_chain\nresults\nreport\nraw bag forest_9 h5 cam_chain\nresults\nreport\nraw bag forest_10 h5 cam_chain\nresults\nreport\nraw bag indoor_camera_calib h5 cam_chain\nresults\nreport\nraw bag outdoor_day_1 h5 cam_chain\nresults\nreport\nraw bag outdoor_day_2 h5 cam_chain\nresults\nreport\nraw bag outdoor_day_3 h5 cam_chain\nresults\nreport\nraw bag outdoor_night_1 h5 cam_chain\nresults\nreport\nraw bag outdoor_night_2 h5 cam_chain\nresults\nreport\nraw bag Spot # Name Data Kalibr Output Others forest_1 h5 cam_chain\nresults\nreport\nraw bag forest_2 h5 cam_chain\nresults\nreport\nraw bag forest_3 h5 cam_chain\nresults\nreport\nraw bag indoor_camera_calib h5 cam_chain\nresults\nreport\nraw bag outdoor_day_1 h5 cam_chain\nresults\nreport\nraw bag outdoor_day_2 h5 cam_chain\nresults\nreport\nraw bag outdoor_day_3 h5 cam_chain\nresults\nreport\nraw bag outdoor_night_1 h5 cam_chain\nresults\nreport\nraw bag outdoor_night_2 h5 cam_chain\nresults\nreport\nraw bag IMU Calibrations # Name Data Kalibr Output Others falcon_1 - imu_chain\nresults\nreport\nraw bag falcon_2 - imu_chain\nresults\nreport\nraw bag falcon_3 - imu_chain\nresults\nreport\nraw bag tower_1 h5 imu_chain\nresults\nreport\nraw bag tower_2 h5 imu_chain\nresults\nreport\nraw bag ","date":null,"permalink":"https://m3ed.io/sequences/","section":"M3ED","summary":"","title":"Sequences"}]