A Multimodal, Multi-Task Adapting Framework for Video Action Recognition
DOI:
https://doi.org/10.1609/aaai.v38i6.28361
Keywords:
CV: Video Understanding & Activity Analysis, CV: Representation Learning for Vision, NLP: Language Grounding & Multi-modal NLP
Abstract
Recently, the rise of large-scale vision-language pretrained models like CLIP, coupled with the technology of Parameter-Efficient Fine-Tuning (PEFT), has attracted substantial attention in video action recognition. Nevertheless, prevailing approaches tend to prioritize strong supervised performance at the expense of the models' generalization capabilities during transfer. In this paper, we introduce a novel Multimodal, Multi-task CLIP adapting framework named M2-CLIP to address these challenges, preserving both high supervised performance and robust transferability. Firstly, to enhance the individual modality architectures, we introduce multimodal adapters to both the visual and text branches. Specifically, we design a novel visual TED-Adapter that performs global Temporal Enhancement and local temporal Difference modeling to improve the temporal representation capabilities of the visual encoder. Moreover, we adopt text encoder adapters to strengthen the learning of semantic label information. Secondly, we design a multi-task decoder with a rich set of supervisory signals, including the original contrastive learning head, a cross-modal classification head, a cross-modal masked language modeling head, and a visual classification head. This multi-task decoder adeptly satisfies the need for strong supervised performance within a multimodal framework. Experimental results validate the efficacy of our approach, demonstrating exceptional performance in supervised learning while maintaining strong generalization in zero-shot scenarios.
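To give a rough intuition for the TED-Adapter idea mentioned in the abstract, the sketch below combines a global temporal-enhancement term (clip-level context shared across frames) with a local temporal-difference term (frame-to-frame deltas) on per-frame features. This is a minimal illustrative sketch, not the paper's implementation: the function name `ted_adapter`, the additive mixing, and the weights `alpha`/`beta` are all assumptions for illustration.

```python
import numpy as np

def ted_adapter(x, alpha=0.5, beta=0.5):
    """Hypothetical sketch of a TED-style adapter on frame features.

    x: (T, D) array of per-frame visual features.
    Global Temporal Enhancement: mix in the clip-level mean feature.
    Local temporal Difference: mix in the delta to the previous frame.
    alpha, beta: illustrative mixing weights (not from the paper).
    """
    global_ctx = x.mean(axis=0, keepdims=True)    # (1, D) clip-level context
    diff = np.diff(x, axis=0, prepend=x[:1])      # (T, D) frame deltas, first frame -> 0
    return x + alpha * global_ctx + beta * diff   # enhanced features, same shape as x
```

In the actual model these operations sit inside adapter modules inserted into the frozen CLIP visual encoder, so only the adapter parameters are trained, in keeping with PEFT.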
Published
2024-03-24
How to Cite
Wang, M., Xing, J., Jiang, B., Chen, J., Mei, J., Zuo, X., Dai, G., Wang, J., & Liu, Y. (2024). A Multimodal, Multi-Task Adapting Framework for Video Action Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 5517-5525. https://doi.org/10.1609/aaai.v38i6.28361
Section
AAAI Technical Track on Computer Vision V