DiverseFlow: Sample-Efficient Diverse Mode Coverage in Flows

Mashrur Morshed and Vishnu Boddeti
IEEE Conference on Computer Vision and Pattern Recognition 2025.

Abstract

Many real-world applications of flow-based generative models desire a diverse set of samples covering multiple modes of the target distribution. However, the predominant approach for obtaining diverse sets is not sample-efficient, as it involves independently obtaining many samples from the source distribution and mapping them through the flow until the desired mode coverage is achieved. As an alternative to repeated sampling, we introduce DiverseFlow: a training-free, inference-time approach to improve the diversity of flow models. Our key idea is to employ a determinantal point process to induce a coupling between the samples that drives diversity under a fixed sampling budget. In essence, DiverseFlow enables exploring more variations in a learned flow model with fewer samples. We demonstrate the efficacy of our method for tasks where sample-efficient diversity is desirable, such as text-guided image generation with polysemous words, inverse problems like large-hole inpainting, and class-conditional image synthesis.
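
Since the abstract only states the mechanism at a high level, the sketch below illustrates one way such DPP-style coupling might look in code; it is not the paper's actual algorithm. It assumes a hypothetical `velocity_fn(x, t)` for the learned flow's velocity field, uses the log-determinant of an RBF similarity kernel over the batch as a stand-in DPP diversity objective, and adds that objective's gradient as a guidance term during Euler integration. The kernel choice, guidance weight, and step schedule are all assumptions for illustration.

```python
import torch

def dpp_log_det(x, bandwidth=1.0):
    # Log-determinant of an RBF similarity kernel over the batch.
    # Larger values correspond to a more spread-out (diverse) batch.
    flat = x.flatten(1)                                   # (B, D)
    sq_dists = torch.cdist(flat, flat).pow(2)             # pairwise squared distances
    K = torch.exp(-sq_dists / (2.0 * bandwidth ** 2))     # RBF kernel, K_ii = 1
    K = K + 1e-4 * torch.eye(len(x), device=x.device)     # jitter for numerical stability
    return torch.logdet(K)

def sample_with_diversity(velocity_fn, x0, steps=50, guidance=0.1):
    # Euler integration of the flow ODE from t=0 to t=1, nudging the whole
    # batch along the gradient of the DPP log-determinant at every step.
    x, dt = x0, 1.0 / steps
    for i in range(steps):
        t = torch.full((len(x),), i * dt, device=x.device)
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(dpp_log_det(x), x)[0]  # direction of increasing diversity
        with torch.no_grad():
            x = x + dt * (velocity_fn(x, t) + guidance * grad)
    return x
```

Under a fixed sampling budget (say, a batch of eight latents), the log-determinant term penalizes near-duplicate trajectories, which matches the abstract's framing: the samples are coupled and pushed toward distinct modes rather than drawn independently and filtered afterward.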