Twitter has announced plans to launch its first algorithmic bias bounty challenge, offering cash rewards to people who help uncover “potential harms” caused by the company’s saliency algorithm. The new competition builds on the bias identification approach Twitter detailed in May, and will take place as part of this year’s DEF CON AI Village.
Algorithms play a crucial role on social media platforms, helping users see more of the content they’re interested in and less of the content they’re unlikely to engage with. However, these same algorithms can unintentionally introduce bias into those platforms, drawing criticism from the people affected.
Twitter addressed this issue in a blog post in May, describing several algorithmic biases that had surfaced and the steps the company was taking to deal with them. In an update today, Twitter revealed that it is taking things a step further with its new bias bounty challenge, the first of its kind in the industry.
Explaining the rationale for the new challenge, Twitter said in a blog post:
We are inspired by how the research and hacker communities helped the security field establish best practices for identifying and mitigating vulnerabilities in order to protect the public. We want to cultivate a similar community, focused on ML ethics, to help us identify a broader range of issues than we would be able to on our own.
Twitter describes the effort as a proactive step toward finding and addressing unintended harms caused by algorithms. As part of the challenge, Twitter will share its saliency model and code with individuals who want to participate. The company will host a workshop during DEF CON AI Village on August 8, at which point the winners will be announced.
The first-place winner will receive $3,500, while the second-place, ‘most generalized,’ and ‘most innovative’ winners will each get $1,000. Third place earns $500. The challenge began today; interested participants can head over to its page for more info.