
AI Bias Could Put Women’s Lives At Risk - A Challenge For Regulators

Carmen Niethammer published a piece in Forbes in response to a European Commission white paper on artificial intelligence, challenging the EU’s position on AI regulation.

Oftentimes, the key to bias and discrimination in artificial intelligence lies in the data sets used to train algorithms. This is evident in gender bias in AI, a problem that spans multiple fields. A concrete example of how data skewed toward men can have real consequences is the design of safety features in cars: seatbelts are tested and designed around male crash-test dummies, leaving women statistically more likely to be injured or killed in accidents.

This seatbelt problem has a direct analogue in artificial intelligence. In the medical field, for example, apps that offer medical advice but are trained predominantly on data from men can give inaccurate suggestions to women. Until recently, cardiovascular disease was largely considered a disease of men. Hence, if a woman and a man both report back pain, the man is more likely to be advised to see a doctor for a potential heart problem, while another underlying cause, such as depression, is suggested to the woman. Because the root of the problem is the overrepresentation of one group in the training data, collecting more data that represents women is an important part of the solution.
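
As a rough illustration of this mechanism, the sketch below (a hypothetical example, not taken from the article; the data, the 90/10 split, and the differing “symptom” rules are all invented assumptions) trains a single classifier on synthetic data in which one group supplies 90% of the training examples and the groups’ symptoms relate to the outcome differently. The resulting model is accurate for the overrepresented group and close to chance for the underrepresented one:

```python
# Toy sketch, not from the article: all data is synthetic, and the
# 90/10 split and differing "symptom" rules are invented assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def sample(n, majority):
    """Generate synthetic 'patients' with two symptom features.

    The true risk rule differs by group: the first feature drives the
    label for the majority group, the second for the minority group,
    a stand-in for conditions that present differently across groups.
    """
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int) if majority else (X[:, 1] > 0).astype(int)
    return X, y

# Training set skewed 90/10 toward the majority group.
X_maj, y_maj = sample(9000, majority=True)
X_min, y_min = sample(1000, majority=False)
model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Evaluate on fresh, equally sized samples from each group.
for name, (X, y) in {"majority": sample(5000, True),
                     "minority": sample(5000, False)}.items():
    print(name, "accuracy:", accuracy_score(y, model.predict(X)))
# Typical output: well above 0.9 for the majority group, close to
# chance for the minority group; the model learned one group's rule.
```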

The issue does not stop at gender; it is also visible in racial representation. Algorithms can likewise be less accurate for people of color when they are underrepresented in data, and working toward better racial representation is also a step toward eliminating gender bias, as women of color often face the worst accuracy rates from algorithms.

Additionally, it is imperative to increase the number of women in technology, the people building the algorithms themselves. Greater representation among developers helps ensure equity in how data is collected and how algorithms are designed. In 2016, the #SheHealth initiative was launched in Germany in an effort to increase women’s leadership in eHealth. The program also encourages its members to “contribute to strategies and technical innovation” that can make digital health solutions more applicable to women’s needs.

Finally, calling upon policymakers to create certification processes for companies is another step toward ensuring that AI is “fair” and considerate toward gender and other demographic groups. As the concept of fairness evolves, the measures needed to ensure it will change as well, but there is undoubtedly room for growth.

