RespoDiff: Dual-Module Bottleneck Transformation for Responsible & Faithful T2I Generation

Silpa Vadakkeeveetil Sreelatha, Sauradip Nag, Muhammad Awais, Serge Belongie, Anjan Dutta
1 University of Surrey, 2 Simon Fraser University, 3 University of Copenhagen
NeurIPS 2025

Abstract

The rapid advancement of diffusion models has enabled high-fidelity and semantically rich text-to-image generation; however, ensuring fairness and safety remains an open challenge. Existing methods typically improve fairness and safety at the expense of semantic fidelity and image quality. In this work, we propose RespoDiff, a novel framework for responsible text-to-image generation that incorporates a dual-module transformation on the intermediate bottleneck representations of diffusion models. Our approach introduces two distinct learnable modules: one focused on capturing and enforcing responsible concepts, such as fairness and safety, and the other dedicated to maintaining semantic alignment with neutral prompts. To facilitate the dual learning process, we introduce a novel score-matching objective that enables effective coordination between the modules. Our method outperforms state-of-the-art approaches in responsible generation, optimizing both objectives jointly while preserving semantic alignment and image fidelity. It improves responsible and semantically coherent generation by 20% across diverse, unseen prompts. Moreover, it integrates seamlessly into large-scale models such as SDXL, enhancing fairness and safety.

Methodology

Illustration of RespoDiff
Illustration of RespoDiff: A novel dual-module transformation for diffusion models that integrates a Responsible Concept Alignment Module (RAM) with a Semantic Alignment Module (SAM) to ensure responsible generation while remaining faithful to the original diffusion process. We propose a simple score-matching objective that enables effective coordination between the two modules.
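
To make the idea concrete, here is a minimal PyTorch sketch of how a dual-module bottleneck transformation with coordinated score-matching losses could look. The adapter architecture (BottleneckAdapter), the loss weighting, and all tensor shapes are illustrative assumptions chosen for exposition, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BottleneckAdapter(nn.Module):
    """Small residual MLP applied to the U-Net bottleneck features (assumed design)."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Residual update keeps the transformed features close to the originals.
        return h + self.net(h)

dim = 1280                        # typical SD U-Net bottleneck width (assumption)
ram = BottleneckAdapter(dim)      # responsible-concept module (RAM)
sam = BottleneckAdapter(dim)      # semantic-alignment module (SAM)

# Toy stand-ins for one training step; in practice these come from the frozen
# diffusion model's noise predictions on responsible and neutral prompts.
h = torch.randn(4, dim)                 # bottleneck features for a batch
eps_responsible = torch.randn(4, dim)   # target score for the responsible concept
eps_neutral = torch.randn(4, dim)       # frozen model's score on the neutral prompt

def decode(h: torch.Tensor) -> torch.Tensor:
    # Placeholder for the frozen U-Net decoder that maps bottleneck features to
    # a noise (score) prediction; identity here just to keep the sketch runnable.
    return h

# Coordinated score-matching losses (illustrative): the RAM path steers the
# prediction toward the responsible target, while the SAM path pins the
# prediction on neutral prompts to the frozen model's own score.
loss_resp = F.mse_loss(decode(sam(ram(h))), eps_responsible)
loss_sem = F.mse_loss(decode(sam(h)), eps_neutral)
loss = loss_resp + 0.5 * loss_sem       # weighting is an arbitrary choice here

loss.backward()  # gradients flow only into RAM/SAM; the base model stays frozen

An optimizer step over the two modules' parameters would complete the update. Because the base diffusion weights are untouched in this sketch, the same lightweight modules could, in principle, be attached to larger backbones such as SDXL.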

Qualitative Results

Comparison of RespoDiff and SD by gender and race
Comparison of RespoDiff and Stable Diffusion (SD) in generating profession images by gender (top: women in the first 4 columns, men in the rest) and race (bottom: Black in the first 2, Asian in the next 3, White in the last 2). RespoDiff better reflects the target attributes while maintaining fidelity to the SD outputs.
RespoDiff safer outputs
RespoDiff removes nudity and violence present in SD outputs, producing safer and more appropriate images.

BibTeX

@misc{sreelatha2025respodiffdualmodulebottlenecktransformation,
      title={RespoDiff: Dual-Module Bottleneck Transformation for Responsible & Faithful T2I Generation}, 
      author={Silpa Vadakkeeveetil Sreelatha and Sauradip Nag and Muhammad Awais and Serge Belongie and Anjan Dutta},
      year={2025},
      eprint={2509.15257},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.15257}}