Twitter holds competition to expose implicit biases in its AI

San Francisco, California - Twitter turned to programmers for help in identifying flaws in an image-cropping program that had already been shown to prefer white and female faces. Now testers report the program also favors Western languages, demonstrating how even artificial intelligence can be prejudiced.

Twitter opened its code for outside researchers to identify flaws and biases, and the results were shocking (stock image).  © 123rf/ moovstock

Twitter held an "algorithm bias bounty challenge" at a computer security conference last week that offered prizes to computer scientists who could identify "potential harms of this [internal image-cropping algorithm] beyond what we identified ourselves."

Reuters reported that Twitter was called out when researchers identified a tendency for the program to prefer white faces over Black ones and female faces over male ones.

However, the results from the competition have forced the company to acknowledge that its AI has a few other embedded biases it wasn't aware of.

As Wired reported, the contest "has found that the same algorithm, which identifies the most important areas of an image, also discriminates by age and weight, and favors text in English and other Western languages."

The results on language preference are concerning, as such a bias could make the entire platform more Western-centric and potentially less friendly to users of diverse backgrounds and nationalities.

Are coding competitions the secret to keeping prejudice out of AI programs?

Twitter's success with their bias-hunting competition may set a standard for other companies to seek outside help in finding embedded prejudices in their AI programs (stock image).  © 123rf/ bigtunaonline

Prejudices don't just appear in the code for artificial intelligence out of nowhere.

In order for AI to work, it has to be fed an enormous amount of data to learn from. The more data it can analyze, the more accurately a program can execute its intended function.

However, most data carries human errors and biases, and once those are encoded into a program's behavior, they are difficult for the program's creators to correct.
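As a rough illustration of how that happens, the following is a minimal, hypothetical sketch, not Twitter's actual algorithm: the features, training data, and scoring are all made up, but the mechanism is the same, in that a saliency-style scorer fitted on skewed labels will systematically rank some regions of an image above others.

```python
# Hypothetical sketch: skewed training data leads to a biased crop scorer.
# This is an illustration only, not Twitter's image-cropping algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: each "image region" is described by two features
# (say, brightness and contrast). Regions labeled "important" (1) mostly
# come from one narrow slice of the feature space, because the labeled
# sample was unrepresentative.
bright_regions = rng.normal(loc=[0.8, 0.6], scale=0.1, size=(200, 2))
dark_regions = rng.normal(loc=[0.3, 0.6], scale=0.1, size=(200, 2))
X = np.vstack([bright_regions, dark_regions])
y = np.array([1] * 200 + [0] * 200)  # labels reflect the annotators' skew

# Fit a simple linear scorer (least squares) on the skewed labels.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

def saliency(region_features):
    """Score a candidate crop region; higher means 'more important'."""
    return float(np.dot(np.append(region_features, 1.0), w))

# The learned scorer now consistently ranks brighter regions higher,
# even when both candidates are equally relevant to the viewer.
print(saliency([0.8, 0.6]))  # high score
print(saliency([0.3, 0.6]))  # low score
```

The bias here isn't written anywhere in the code; it lives entirely in the data the scorer was trained on, which is why it is so hard for a program's creators to spot from the inside.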

Indeed, the competition itself could signal to Big Tech that outside help with code analysis can be beneficial in the long run.

In Twitter's case, many errors and biases were more quickly identified by competitors than by in-house employees.

The decision to make these weaknesses public may have a broader impact than just helping the company avoid accusations of embedded racism.

Should other companies follow suit and open their code to external evaluation, similar programs – such as those sold for facial recognition – could be stripped of their biases before they have a chance to cause widespread harm by preferring or targeting a specific demographic unnecessarily.

The Federal Trade Commission is also beginning to hold businesses accountable for limiting bias in their AI models.

Cover photo: 123rf/ moovstock
