Multimodal retrieval is the task of aggregating information from queries across heterogeneous modalities to retrieve desired targets. State-of-the-art multimodal retrieval models can understand complex queries, yet they are typically limited to two modalities: text and vision. This limitation impedes the development of universal retrieval systems capable of comprehending queries that combine more than two modalities. To advance toward this goal, we present OmniRet, the first retrieval model capable of handling complex, composed queries spanning three key modalities: text, vision, and audio.
Our OmniRet model addresses two critical challenges for universal retrieval: computational efficiency and representation fidelity. First, feeding massive token sequences from modality-specific encoders to Large Language Models (LLMs) is computationally inefficient. We therefore introduce an attention-based resampling mechanism to generate compact, fixed-size representations from these sequences. Second, compressing rich omni-modal data into a single embedding vector inevitably causes information loss and discards fine-grained details. We propose Attention Sliced Wasserstein Pooling (ASWP) to preserve these fine-grained details, leading to improved omni-modal representations. OmniRet is trained on an aggregation of approximately 6 million query-target pairs spanning 30 datasets.
OmniRet makes three key contributions:
1. Shared Media Resampler. A key challenge in multimodal systems is the high token count produced by media encoders (often > 500 tokens), which limits training batch sizes. We introduce a shared resampling module based on the Perceiver architecture that intelligently condenses large sequences of media tokens into a compact, fixed-size set. It uses modality-specific latents to maintain sensitivity while sharing parameters across modalities.
2. Attention Sliced Wasserstein Pooling (ASWP). Instead of average pooling which discards fine-grained token structure, ASWP conceptualizes the set of LLM output tokens as a distribution and computes a rich embedding based on the distance to a set of learnable references. This preserves fine-grained, token-level information while maintaining the speed and simplicity of a single-vector system.
3. Audio-Centric Multimodal Benchmark (ACM). We curate a new benchmark featuring two novel tasks—composed audio retrieval and audio-visual retrieval—to comprehensively evaluate universal retrieval systems beyond the traditional text-vision paradigm.
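To make the resampler idea concrete, here is a minimal single-head NumPy sketch of Perceiver-style cross-attention resampling: a small bank of modality-specific latent queries attends over a long media-token sequence and returns a fixed-size output. All names (`resample`, the projection matrices `W_q`, `W_k`, `W_v`) and the single-head simplification are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def resample(media_tokens, latents, W_q, W_k, W_v):
    """Single-head cross-attention resampling (illustrative sketch).

    media_tokens: (seq_len, d) tokens from a media encoder, seq_len often > 500.
    latents:      (num_latents, d) learnable, modality-specific queries.
    Returns a fixed-size (num_latents, d) representation, regardless of seq_len.
    """
    q = latents @ W_q        # (num_latents, d) queries from latents
    k = media_tokens @ W_k   # (seq_len, d) keys from media tokens
    v = media_tokens @ W_v   # (seq_len, d) values from media tokens
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)
    return attn @ v          # (num_latents, d): compact summary of the sequence
```

In the shared-resampler design described above, the attention parameters would be shared across modalities while each modality keeps its own `latents` bank, so adding a modality costs only one extra latent table.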
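The pooling idea behind ASWP can likewise be sketched in a few lines: treat the LLM output tokens as an empirical distribution, project tokens and learnable references onto 1-D slices, and use the per-slice 1-D Wasserstein distances as the embedding. This toy version assumes equally many tokens and reference points and omits the attention-weighting component of the full method; function and variable names are hypothetical.

```python
import numpy as np

def sliced_wasserstein_pool(tokens, references, projections):
    """Sliced-Wasserstein-style pooling (simplified sketch).

    tokens:      (n, d) LLM output tokens, viewed as an empirical distribution.
    references:  (n, d) learnable reference points (same count assumed here).
    projections: (d, L) unit directions defining the 1-D slices.
    Returns an (L,)-dim embedding: one W1 distance per slice.
    """
    # Project both point sets onto each slice and sort; in 1-D the
    # Wasserstein-1 distance is the mean gap between sorted samples.
    t = np.sort(tokens @ projections, axis=0)       # (n, L)
    r = np.sort(references @ projections, axis=0)   # (n, L)
    return np.abs(t - r).mean(axis=0)               # (L,)
```

Unlike average pooling, which collapses the token set to its mean, the sorted per-slice gaps retain information about how the tokens are spread, while the output remains a single fixed-size vector.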
We evaluate OmniRet on an extended version of the M-BEIR benchmark covering 13 retrieval tasks across image, video, audio, and text modalities. OmniRet achieves leading performance on 12 out of 13 tasks while being the only model that supports audio retrieval.
To assess generalization, we benchmark OmniRet against leading models (<7B parameters) on a subset of MMEBv2. Our model achieves outstanding performance on video tasks while remaining competitive on image retrieval, despite not being fully fine-tuned on the corresponding training sets.
Finally, on our novel Audio-Centric Multimodal (ACM) benchmark, OmniRet demonstrates promising results over baselines on both the composed audio and audio-visual retrieval tasks.
@inproceedings{huynh2026omniret,
title = {Efficient and High-Fidelity Omni Modality Retrieval},
author = {Huynh, Chuong and Luong, Manh and Shrivastava, Abhinav},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2026}
}