Have you ever considered that machines could be biased? The Stable Diffusion pipeline generates images from text, and those images often reflect biases learned during training.
We set out to detect stereotypes along dimensions such as race and gender in ML-generated images. Using a Stable Diffusion model, we tested many different prompts and gauged how biases in the model's training data surface in its outputs.
What it does 🚗
The model takes in a prompt (a phrase or sentence) and generates an image depicting what the text describes.
How we built it 👷‍♂️
Language: Python
Framework: PyTorch
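For illustration, here is a minimal sketch of how a Stable Diffusion pipeline can be driven from Python using Hugging Face's diffusers library; the checkpoint name and example prompt are assumptions for the sketch, not necessarily the exact setup we used:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion checkpoint (checkpoint name is an
# assumption; any compatible checkpoint works the same way).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # move to GPU for reasonable generation speed

# Generate an image from a text prompt and save it to disk.
prompt = "a photo of a doctor"  # example prompt for probing occupational bias
image = pipe(prompt).images[0]
image.save("doctor.png")
```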
Challenges we ran into 🕸
We ran into issues determining which specific factors to look for and how to quantify bias. By trying many different types of prompts, we were able to test a variety of inputs and get more comprehensive results, as sketched below.
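As a rough sketch of that prompt-variation testing, the loop below generates several samples per prompt so the outputs can be reviewed and tallied for apparent demographic attributes. The prompt list and sample count are illustrative assumptions, and `pipe` is a loaded Stable Diffusion pipeline as in the earlier snippet:

```python
# Hypothetical prompt-variation harness: generate several images per prompt
# so the results can be tallied for apparent race/gender attributes.
prompts = ["a photo of a CEO", "a photo of a nurse", "a photo of a teacher"]
samples_per_prompt = 8  # more samples give a more comprehensive picture

for prompt in prompts:
    for i in range(samples_per_prompt):
        image = pipe(prompt).images[0]  # pipe: the loaded pipeline from above
        image.save(f"{prompt.replace(' ', '_')}_{i}.png")
```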
Accomplishments that we're proud of 🏆
Learned how to utilize the PyTorch framework
Established knowledge of the basics of Machine Learning and Deep Learning
Understood the prevalence of ML models in real-world applications
What we learned 🧠
Learned how Deep Learning networks function
Learned how to detect elements of bias in ML models
Learned to download datasets using PyTorch (see the sketch below)
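As an example of that dataset-loading workflow, here is a minimal sketch using torchvision; the choice of CIFAR-10 is just an illustration:

```python
from torchvision import datasets, transforms

# Download CIFAR-10 to a local directory and convert images to tensors.
train_set = datasets.CIFAR10(
    root="./data",
    train=True,
    download=True,
    transform=transforms.ToTensor(),
)
print(len(train_set))  # 50000 training images
```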
What's next? 🔮
Some questions we have for future analysis…
Does the sentence structure of the prompt change the way the model interprets it?
Do the length and complexity of the sentence alter how the model processes the information?