Frequency-Controlled Diffusion Model for Versatile
Text-Guided Image-to-Image Translation

Xiang Gao     Zhengbo Xu     Junhan Zhao     Jiaying Liu

Wangxuan Institute of Computer Technology, Peking University

Accepted by AAAI 2024.

Citation

@inproceedings{gao2024frequency,
    title={Frequency-Controlled Diffusion Model for Versatile Text-Guided Image-to-Image Translation},
    author={Gao, Xiang and Xu, Zhengbo and Zhao, Junhan and Liu, Jiaying},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    volume={38},
    number={3},
    pages={1824--1832},
    year={2024}
}

Abstract

Recently, text-to-image diffusion models have emerged as a powerful tool for image-to-image translation (I2I), allowing flexible image translation via user-provided text prompts. This paper proposes frequency-controlled diffusion model (FCDiffusion), an end-to-end diffusion-based framework contributing a novel solution to text-guided I2I from a frequency-domain perspective. At the heart of our framework is a feature-space frequency-domain filtering module based on Discrete Cosine Transform, which extracts image features carrying different DCT spectral bands to control the text-to-image generation process of the Latent Diffusion Model from different dimensions, realizing versatile I2I applications including style-guided content creation, image semantic manipulation, image scene translation, and image style translation. Different from related methods, FCDiffusion establishes a unified text-driven I2I framework suitable for diverse I2I application scenarios simply by switching among different frequency control branches. The effectiveness and superiority of our method for text-guided I2I are demonstrated with extensive experiments both qualitatively and quantitatively. The code is publicly available at: https://github.com/XiangGao1102/FCDiffusion.

Key Ideas:
(1) Instructing image-to-image translation with natural language. Large-scale text-to-image diffusion models have revolutionized the field of image generation. We propose to harness their immense generative power and adapt them from text-to-image generation to text-guided image-to-image translation (I2I), providing intelligent tools for image manipulation tasks.
(2) Versatile image-to-image translation with a unified framework. I2I has diverse application scenarios that emphasize different correlations (e.g., style, structure, layout, contour) between the source and translated images, and it is difficult for a single existing method to suit all of these scenarios well. This inspires us to design a unified framework that enables flexible control over diverse I2I correlations and thus applies to diverse I2I application scenarios.
(3) Realizing versatile I2I with different modes of frequency control. We propose to realize versatile text-guided I2I from a novel frequency-domain perspective: the I2I correlation of each I2I task is modeled with a corresponding frequency band of image features in the frequency domain. Specifically, we filter image features in the Discrete Cosine Transform (DCT) spectrum space and extract the filtered image features carrying a specific DCT frequency band as the control signal that governs the corresponding I2I correlation (a filtering sketch follows this list). Accordingly, we realize the I2I applications of style-guided content creation, image semantic manipulation, image scene translation, and image style translation under mini-frequency, low-frequency, mid-frequency, and high-frequency control, respectively.
(4) Frequency spectrum reconstruction learning. FCDiffusion extracts image features carrying different DCT spectral bands as control signals to control the denoising process of the Latent Diffusion Model (LDM). Conditioned on the control signal, the model is trained to reconstruct the filtered-out frequency spectral components of the image features from the textual information in the paired text prompt (a training-step sketch follows Figure 1 below). At inference time, text-driven I2I is thus enabled by feeding in an arbitrary text prompt to guide the completion of the DCT spectrum.
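
As an illustration of the DCT-band filtering described in idea (3), below is a minimal Python sketch of band-pass filtering on a feature map using SciPy's 2D DCT. The band thresholds and the feature shape are illustrative assumptions and do not correspond to the exact filters used in FCDiffusion.

import numpy as np
from scipy.fft import dctn, idctn

def dct_band_filter(feat, low_cut, high_cut):
    # Keep only the DCT coefficients whose normalized frequency radius lies in
    # [low_cut, high_cut); zero out the rest, then invert the transform.
    # feat: (C, H, W) feature map, e.g. an LDM latent (shape is an assumption).
    _, H, W = feat.shape
    coeffs = dctn(feat, axes=(1, 2), norm="ortho")        # per-channel 2D DCT-II
    fy = np.arange(H)[:, None] / H                        # normalized vertical frequency
    fx = np.arange(W)[None, :] / W                        # normalized horizontal frequency
    radius = np.sqrt(fy ** 2 + fx ** 2) / np.sqrt(2.0)    # 0 = DC term, ~1 = highest band
    mask = ((radius >= low_cut) & (radius < high_cut)).astype(feat.dtype)
    return idctn(coeffs * mask[None], axes=(1, 2), norm="ortho")

# Hypothetical control branches on a 4x64x64 latent-like feature; the cut-off
# values below are placeholders for the four frequency-control modes.
feat = np.random.randn(4, 64, 64).astype(np.float32)
mini_band = dct_band_filter(feat, 0.00, 0.02)   # mini-frequency control
low_band  = dct_band_filter(feat, 0.00, 0.20)   # low-frequency control
mid_band  = dct_band_filter(feat, 0.10, 0.50)   # mid-frequency control
high_band = dct_band_filter(feat, 0.50, 1.01)   # high-frequency control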



Figure 1. The overall architecture of FCDiffusion, together with details of its key modules and operations. FCDiffusion comprises the pretrained LDM, a Frequency Filtering Module (FFM), and a FreqControlNet (FCNet). The FFM applies DCT filtering to the source image features and extracts the filtered features carrying a specific DCT frequency band as the control signal, which controls the denoising process of the LDM through the FCNet. FCDiffusion integrates multiple control branches with different DCT filters in the FFM; these filters extract different DCT frequency bands to control different I2I correlations (e.g., image style, structure, layout, contour).
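
To make the control pathway of Figure 1 and the reconstruction learning of idea (4) concrete, the following is a hedged PyTorch-style sketch of a single training step: the source image is encoded into the LDM latent space, the FFM retains one DCT band as the control signal, and the standard noise-prediction loss is computed with the denoising UNet conditioned on that signal and the text prompt. All module names (vae_encoder, text_encoder, ffm, fcnet, ldm_unet, noise_scheduler) are placeholders standing in for the corresponding components, not the repository's actual API.

import torch
import torch.nn.functional as F

def training_step(x0, prompt, vae_encoder, text_encoder, ffm, fcnet,
                  ldm_unet, noise_scheduler, branch="low"):
    # 1) Encode the source image into the LDM latent space.
    z0 = vae_encoder(x0)
    # 2) Frequency Filtering Module: keep one DCT band of the latent features
    #    as the control signal (the branch argument selects which band).
    control = ffm(z0, branch=branch)
    # 3) Diffusion forward process: add noise at a randomly sampled timestep.
    t = torch.randint(0, noise_scheduler.num_timesteps, (z0.shape[0],), device=z0.device)
    noise = torch.randn_like(z0)
    zt = noise_scheduler.add_noise(z0, noise, t)
    # 4) FCNet turns the control signal into residual features that steer the
    #    frozen LDM UNet together with the text condition.
    text_emb = text_encoder(prompt)
    residuals = fcnet(control, t, text_emb)
    noise_pred = ldm_unet(zt, t, text_emb, residuals)
    # 5) Conditioned on the retained band, the model learns to restore the
    #    filtered-out spectral components via the usual noise-prediction loss.
    return F.mse_loss(noise_pred, noise)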

Results


Figure 2. Results of style-guided content creation realized with mini-frequency control. The image content is recreated according to the text prompt while the style of the translated image is transferred from the source image.






Figure 3. Results of image semantic manipulation realized with low-frequency control. The semantics of the source image are manipulated according to the text prompt, while the image style and spatial structure are maintained.






Figure 4. Results of image style translation realized with high-frequency control. The image style (appearance) is modified as per the text prompt while the main contours of the source image are preserved.






Figure 5. Results of image scene translation realized with mid-frequency control. The image scene is translated according to the text prompt. In this scenario, the layout of the source image is preserved while the lower-frequency image style and higher-frequency image contours are not restricted.