Openai’s Sora is plagued by sexist, racist and competent prejudices


Despite recent jumping forward in image quality, the prejudices in videos generated by AI tools, such as Openai’s Sora, are just as striking as ever. In a wired examination, containing an overview of hundreds of AI-generated videos, found that Sora’s model promotes sexist, racist and abtist stereotypes in its results.

In the world of Sora everyone is beautiful. Pilots, CEOs and college professors are men, while flying servants, receptionist and childcare workers are women. Disabled people are wheelchair users, relationship relationships are difficult to generate, and fat people are not running.

“Openai has security teams committed to investigating and reducing prejudice and other risks in our models,” said Leah Anise, a spokesman for Openai, by email. She says that prejudice is an operating-wide problem and Openai wants to further reduce the number of harmful generations of its AI video tool. Anys says the business is investigating how to change its training data and adjust user directions to generate less biased videos. Openai refused to provide further details, except to confirm that the video generations of the model did not differ, depending on what he could know about the user’s own identity.

The ‘system card’ of Openai, which approaches limited aspects of how they approached Sora’s construction, acknowledges that biased representations are a constant problem with the model, although the researchers believe that ‘too many corrections can be equally harmful.’

Prejudice has plagued generative AI systems since the release of the first text generators, followed by sculptors. The issue largely stems from how these systems work, which yields large amounts of training data – whose existing social prejudices can reflect – and to seek patterns in it. For example, other choices made by developers, during the content processing process, can continue to drive them further. Research on image generators has found that these systems not only reflect human prejudices, but strengthen them. To better understand how Sora strengthens stereotypes, wired reporters generated and analyzed 250 videos related to people, relationships and work titles. It is unlikely that the issues we have identified will only be limited to one AI model. Previous investigations into generative AI statues have shown similar prejudices about most instruments. In the past, Openai introduced new techniques to its AI image instrument to produce more diverse results.

At present, the most likely commercial use of AI video in advertising and marketing. If AI videos do not have biased depictions, it can aggravate the stereotyping or eradication of marginalized groups-with a well-documented problem. AI video can also be used to train safety or military-related systems, where such prejudices can be more dangerous. “It can do absolutely damage,” said Amy Gaeta, research fellow at the University of Cambridge’s Leverhulme Center for the future of intelligence.

To investigate potential prejudices in Sora, Wired worked with researchers to refine a methodology to test the system. Using their input, we made 25 directions designed to investigate the limitations of AI video generators when it comes to the representation of people, including deliberate broad directions such as “walking”, work titles such as “a pilot” and “a flyer” and a directions that define one aspect of identity, such as “a gay couple”.

Leave a Reply

Your email address will not be published. Required fields are marked *