Yanbei Chen is currently a postdoctoral researcher in the EML group (led by Prof. Zeynep Akata). She completed her Ph.D. with Prof. Shaogang Gong at Queen Mary University of London in London, UK. She received her master’s degree from KTH Royal Institute of Technology in Stockholm, Sweden, supervised by Prof. Atsuto Maki, and her bachelor’s degree from Zhejiang University in Hangzhou, China.
Distilling Audio-Visual Knowledge by Compositional Contrastive Learning. Yanbei Chen, Yongqin Xian, A. Sophia Koepke, Ying Shan, Zeynep Akata. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

Image Search with Text Feedback by Visiolinguistic Attention Learning. Yanbei Chen, Shaogang Gong, Loris Bazzani. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

Semi-Supervised Learning under Class Distribution Mismatch. Yanbei Chen, Xiatian Zhu, Wei Li, Shaogang Gong. AAAI Conference on Artificial Intelligence (AAAI), 2020.

Learning Joint Visual Semantic Matching Embeddings for Language-guided Retrieval. Yanbei Chen, Loris Bazzani. European Conference on Computer Vision (ECCV), 2020.

Instance-Guided Context Rendering for Cross-Domain Person Re-Identification. Yanbei Chen, Xiatian Zhu, Shaogang Gong. IEEE/CVF International Conference on Computer Vision (ICCV), 2019.

Semi-Supervised Deep Learning with Memory. Yanbei Chen, Xiatian Zhu, Shaogang Gong. European Conference on Computer Vision (ECCV), 2018.

Deep Association Learning for Unsupervised Video Person Re-Identification. Yanbei Chen, Xiatian Zhu, Shaogang Gong. British Machine Vision Conference (BMVC), 2018.

Person Re-Identification by Deep Learning Multi-Scale Representations. Yanbei Chen, Xiatian Zhu, Shaogang Gong. IEEE International Conference on Computer Vision, Workshop on Cross-Domain Human Identification (ICCVW), 2017.
Yanbei Chen’s general research interest lies at the intersection of machine learning and computer vision, particularly in semi-, un-, and self-supervised learning and multimodal learning. During her Ph.D., she researched visual learning in the limited-label regime, developing semi-supervised, unsupervised, and cross-domain learning algorithms to tackle computer vision tasks with minimal human supervision. In the long run, she aims to explore multimodal learning algorithms that can connect, correlate, and integrate multiple input modalities in an explainable and interpretable manner.