Sora has a bias problem
Sora seems to think all academics are men, and predominantly white men at that. And this is a problem.
In the early days of AI image generators — and by “early days” we’re only talking a couple of years ago — there was a tendency for platforms to reflect cultural stereotypes and biases when producing pictures of people — especially in specific occupations. As a result, early AI images of executives and scientists — to take a couple of examples — tended to be men, and white men at that.
Companies like OpenAI and Midjourney have made huge strides in correcting these biases. However, it seems that OpenAI’s new AI video generator Sora hasn’t quite caught up yet.
Playing around with the platform, which was only released to the public a few days ago, I asked it to create a video of an academic giving a lecture.
The result was a video of a white dude with a beard.
I repeated the request for a video of “an academic giving a lecture” — just to get another white dude.
Sequentially repeating exactly the same prompt sixteen times led to videos of sixteen men.
Fourteen were white and two were black; there was no racial diversity beyond that.
In addition — although this seems trivial in comparison — thirteen had beards, all wore button-down shirts, and all but one wore a jacket.
I realize it’s early days for Sora and I’m sure that OpenAI will plug this bias gap around gender, race and occupations pretty soon. But I’d expected better from them.
It also leaves me wondering what other biases are embedded in the system — and how other AI video generators are faring.
Clearly we still have a long way to go in de-biasing generative AI. If only these companies were employing more people who could help ensure the technology’s responsible development.
Sigh …
Update
For anyone who’s interested, here are the links to the original Sora videos, in order of generation from first to last:
Video 1
Video 2
Video 3
Video 4
Video 5
Video 6
Video 7
Video 8
Video 9
Video 10
Video 11
Video 12
Video 13
Video 14
Video 15
Video 16