As text-to-image generative models gain popularity and widespread accessibility, it is crucial to thoroughly examine their safety and fairness so that they do not disseminate or perpetuate biases. Existing research focuses on detecting predefined sets of biases, which limits studies to well-known concepts. OpenBias is a new approach that addresses the challenge of open-set bias detection in text-to-image generative models.

OpenBias is a pipeline that identifies and quantifies the severity of biases in an agnostic manner, without relying on any precompiled set. The pipeline consists of three stages. First, a Large Language Model (LLM) proposes candidate biases from a set of captions. Next, the target generative model produces images from those same captions. Finally, a Vision Question Answering (VQA) model assesses the presence and extent of the proposed biases in the generated images.
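
To make the pipeline concrete, here is a minimal sketch of how the three stages could be wired together with off-the-shelf Hugging Face components: Stable Diffusion 1.5 as the target model and BLIP as the VQA model, with the LLM stage stubbed out. The model choices, the example caption, and the output format of the bias proposals are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of the three OpenBias stages. Model choices and the proposal
# format are assumptions for illustration, not the paper's exact setup.
import torch
from diffusers import StableDiffusionPipeline
from transformers import BlipProcessor, BlipForQuestionAnswering

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stage 1: an LLM proposes candidate biases for a caption. Stubbed here
# with a hand-written example of the expected output: each proposal
# pairs a VQA-style question with its candidate answer classes.
def propose_biases(caption: str) -> list[dict]:
    # A real run would prompt an LLM with the caption; this fixed
    # return value is a hypothetical example of what it might produce.
    return [
        {"bias": "gender", "question": "What is the gender of the doctor?",
         "classes": ["male", "female"]},
        {"bias": "age", "question": "What is the age of the doctor?",
         "classes": ["young", "middle-aged", "old"]},
    ]

# Stage 2: the target generative model produces images from the caption.
sd = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
).to(device)

# Stage 3: a VQA model answers the proposed questions on each image.
vqa_processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
vqa_model = BlipForQuestionAnswering.from_pretrained(
    "Salesforce/blip-vqa-base"
).to(device)

def vqa_answer(image, question: str) -> str:
    inputs = vqa_processor(image, question, return_tensors="pt").to(device)
    out = vqa_model.generate(**inputs)
    return vqa_processor.decode(out[0], skip_special_tokens=True)

caption = "A doctor talking to a patient"  # hypothetical example caption
images = [sd(caption).images[0] for _ in range(4)]  # small sample for brevity

for proposal in propose_biases(caption):
    answers = [vqa_answer(img, proposal["question"]) for img in images]
    print(proposal["bias"], answers)
```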

The effectiveness of OpenBias has been demonstrated through a study of Stable Diffusion 1.5, 2, and XL, surfacing biases that had not been investigated before. Quantitative experiments show that OpenBias agrees with existing closed-set bias detection methods and with human judgment, supporting its reliability and accuracy.
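
A comparison like this presupposes a per-bias severity score. OpenBias derives such scores from the distribution of VQA answers across the generated images; the helper below uses a normalized-entropy measure as one plausible instantiation, an assumption rather than the paper's exact formula.

```python
# One plausible way to turn VQA answers into a bias-severity score:
# measure how far the answer distribution is from uniform using
# normalized entropy. An illustrative assumption, not necessarily the
# exact metric used by OpenBias.
import math
from collections import Counter

def bias_severity(answers: list[str], classes: list[str]) -> float:
    """Return a score in [0, 1]: 0 = answers spread uniformly over the
    candidate classes (no measurable bias), 1 = every image collapsed
    onto a single class (maximal bias)."""
    counts = Counter(a for a in answers if a in classes)
    total = sum(counts.values())
    if total == 0 or len(classes) < 2:
        return 0.0
    probs = [counts[c] / total for c in classes]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return 1.0 - entropy / math.log(len(classes))

# e.g. 9 of 10 generated "doctor" images answered "male":
print(bias_severity(["male"] * 9 + ["female"], ["male", "female"]))  # ~0.53
```

Normalizing by the maximum entropy makes scores comparable across biases with different numbers of candidate classes.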

The introduction of OpenBias marks a significant advancement in bias detection for text-to-image generative models. By providing an agnostic approach to identifying and quantifying biases, OpenBias enables researchers and developers to assess the safety and fairness of these models even in the absence of predefined bias sets. The pipeline contributes to the responsible development and deployment of text-to-image generative models, helping ensure that they do not inadvertently perpetuate or amplify biases.

As the use of text-to-image generative models continues to expand, tools like OpenBias will play a vital role in maintaining the integrity and fairness of these powerful technologies. By proactively identifying and addressing biases, researchers and developers can work towards creating more equitable and unbiased generative models that benefit society as a whole.