We recommend using conda for environment setup:

```shell
conda create -y -n mad python=3.9
conda activate mad
conda install --file requirements.txt
```
The datasets for each task can be downloaded from the following sources:

| Task | Dataset 0 | Dataset 1 |
|---|---|---|
| Object detection | Gen 1 | 1Mpx |
| Semantic segmentation | DDD17 | DSEC-Semantic |
| Human pose estimation | DHP19 | - |
You need to preprocess the original event data to fit our code. For the object detection and semantic segmentation tasks, the data are divided at 50 ms intervals; for the human pose estimation task, the data are divided every 7,500 events. For the xx dataset, run:

```shell
python builddataset/build_xx.py
```

For example, to preprocess the 1Mpx dataset, run:

```shell
python builddataset/build_1mpx.py
```
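The slicing described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the repository's implementation: it assumes events are `(t, x, y, p)` rows sorted by timestamp in microseconds, and the function names are ours.

```python
import numpy as np

def slice_by_time(events, window_us=50_000):
    """Split an (N, 4) event array, sorted by timestamp (microseconds),
    into fixed 50 ms windows, as used for detection/segmentation."""
    t = events[:, 0]
    edges = np.arange(t[0], t[-1] + window_us, window_us)
    # Index of the first event at or past each window boundary.
    idx = np.searchsorted(t, edges[1:])
    return np.split(events, idx[:-1])

def slice_by_count(events, n_events=7_500):
    """Split into chunks of a fixed number of events, as used for
    human pose estimation."""
    idx = np.arange(n_events, len(events), n_events)
    return np.split(events, idx)
```

Either function returns a list of sub-arrays that can then be converted into the network's input representation.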
Coming soon.

```shell
python MAD_Rep/train_flow.py
```
We currently provide prediction and visualization code for the MAD representation (excluding downstream tasks). You can run the following to visualize the MAD representation results:

```shell
python pre_xx.py -r path_to_orin_event_data -sr path_to_save_new_data
```

For example, for the 1Mpx dataset, run:

```shell
python pre_1mpx.py -r path_to_orin_event_data -sr path_to_save_new_data
```
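For a quick sanity check on a raw event slice before running the scripts above, the events can be accumulated into a simple two-channel polarity histogram. This is a generic visualization sketch of our own (the learned MAD representation is not a histogram), and `events_to_frame` is an illustrative name, not a function from this repo.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate (t, x, y, p) events into a 2-channel count image:
    channel 0 counts negative events, channel 1 counts positive ones."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = (events[:, 3] > 0).astype(int)
    # Unbuffered scatter-add so repeated (p, y, x) hits all count.
    np.add.at(frame, (p, y, x), 1.0)
    return frame
```

The two channels can be mapped to image colors (e.g. red/blue) to eyeball whether a preprocessed slice looks sensible.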
The following are the results of our method on different tasks.
Object detection, from left to right: motion tensor, detection result, and GT.
Semantic segmentation, from left to right: motion tensor, appearance tensor, segmentation result, and GT.


From left to right: Leftarm abduction, Side kick forwards left, Walking 3.5 km/h, and Star jumps. In each action, blue denotes the predicted result and red denotes the GT.
This project uses code from the following projects:
- Back to Event Basics for the contrast maximization loss
- YOLOX for the detection PAFPN/head
- EventPointPose for DHP19 dataset building
- U-Net for the semantic segmentation head


