Organizers
Yuexian Zou, Peking University Shenzhen Graduate School
Tong Zhang, Peng Cheng Laboratory
Aim and Scope
The way our minds perceive the real world is multimodal. Multimodal deep learning usually refers to techniques that leverage multi-modality information (including images, text, audio, time series, and genomics) to build AI models that achieve better performance than models using any single modality. Recently, multimodal artificial intelligence methods have attracted increasing attention in research communities and have made notable progress in a number of applications, such as multimodal retrieval, visual question answering, multi-modality medical image analysis, and AI for Science. Due to the heterogeneity of multi-modality data, the construction and analysis of multimodal deep learning models face the challenges of how to represent, associate, align, and fuse information from different modalities. The Multi-modality special track is now open for submissions. Submitted papers should describe high-quality, original research in English. All papers accepted for this track will be included in the conference proceedings published by Springer and are expected to be indexed by EI/ISTP.
This special track aims to bring together recent developments in the theory and application of multimodal deep learning model construction and analysis. The topics include, but are not limited to:
1) Pre-training and/or fine-tuning for large-scale models
2) AI for Science
3) Multimodal information analysis, association, and retrieval
4) Multimodal medical data analysis
5) AI modeling and analysis of complex video or time series data
6) Multi-modal deep learning algorithm libraries and/or model libraries
7) Multi-modal transfer learning
Important Dates
Submission Guidelines
This special track will be held at the 2022 Chinese Conference on Pattern Recognition and Computer Vision (PRCV 2022). All papers should be prepared according to the PRCV 2022 policy and submitted electronically via the conference submission site (https://cmt3.research.microsoft.com/PRCV2022).
To submit your paper to this special track, please select "Special Track Multi-modality" when creating your submission in CMT.