From algorithms to action: Why women are key to the future of AI
Greater participation by women at all levels of algorithm design and model training, along with monitoring, assessing, and providing feedback on deployed systems, is the only way to ensure fairness in these systems.
Algorithmic thinking lays the foundations of computational science. As Artificial Intelligence is set to transform sectors from healthcare and finance to education and entertainment, it is crucial to pause and examine the possible consequences of this transformation for women.
The power of the present generation of large models comes from their capability to sift through massive digital repositories, learn probabilities associated with patterns, representations, and decision rules, and use them effectively to generate new patterns, be it in the form of images, text, or even a sequence of actions that can be interpreted as a plan to execute a task.
The learning is facilitated by statistical and machine learning techniques, pattern recognition, and very large deep neural networks, which learn all kinds of probabilistic associations between elements.
Machines have been analysing human-generated content for quite some time; now they are incredibly good at it. However, what has truly captivated the world is the generative ability of modern AI systems. After enjoying artificially created videos, fiction, and paintings, we are now in an era where we can simply command these systems to perform complex tasks, and expect them to figure out a way to do them and even execute them through agents.
Nothing can be more welcome to women, who spend half their lifetime creating optimal plans to accommodate home and workplace responsibilities. Imagine handing a list of to-dos to an AI assistant, taking off for a board meeting, and coming back to a house where everything is just as you want it to be! Elated though I am, a doubt creeps in.
Will the AI agent know how to do things like a woman? Have the AI systems learnt enough about women’s perspectives? Can we expect that as machines, they will be fair and objective, and not amplify the societal biases against women?
Looking inside the AI systems
Studying bias in AI systems refers to a systematic way of exploring whether and how a system's outputs may discriminate against communities. A quick look at the underlying technology can help us understand how and why this is important to explore.
Delving inside AI systems reveals that the first phase of their learning happens in an unsupervised way; that is, patterns are discovered automatically from the content they are fed. While the discovery process itself is guided by the learning algorithms, the depth of learning also depends on the underlying hardware on which it is implemented.
This is followed by a phase of supervised learning, wherein the learning is put to the test using a set of pre-designed tasks. Human feedback at this point leads to a refinement, or fine-tuning, of the learnt patterns. Finally, AI systems continue to learn even after deployment through user feedback, which helps them course-correct by reinforcing the paths that lead to right outputs and penalising the wrong ones.
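The post-deployment feedback loop described above can be pictured with a toy sketch. This is not how real systems are implemented; it is a minimal, invented illustration in which a "model" is just a table of scores over candidate responses, and user feedback nudges those scores up or down.

```python
# Toy sketch of feedback-driven course correction (illustrative only,
# not a real training pipeline): each candidate response carries a
# score, and user feedback reinforces good paths, penalises bad ones.

def update(scores, response, reward, lr=0.5):
    """Nudge a response's score up for positive feedback, down for negative."""
    scores[response] = scores.get(response, 0.0) + lr * reward
    return scores

scores = {"helpful answer": 0.0, "biased answer": 0.0}

# Hypothetical user feedback collected after deployment: +1 / -1 ratings.
feedback = [("helpful answer", +1), ("biased answer", -1), ("helpful answer", +1)]

for response, reward in feedback:
    update(scores, response, reward)

# After feedback, the reinforced response ranks highest.
best = max(scores, key=scores.get)
print(best)
```

The same mechanism that lets the system improve is also what makes it vulnerable: if the feedback itself is biased, the "wrong" paths get reinforced just as readily.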
We may now take a look at the different channels through which bias creeps into AI systems, and the nature of each.
Data bias: As AI systems are trained on historical data, societal biases entrenched in the content make their way into the learnt models. In 2019, it was reported that Apple's credit card algorithm was predicting lower credit limits for women than men, even when they had similar financial backgrounds. This was traced to the model's training data, which comprised several decades of banking data from the United States, a period known for discrimination against women. Around the same time, Caroline Criado Perez released her book Invisible Women, in which she systematically highlighted how data-driven policies and decisions across various fields, from healthcare to agriculture, were made without incorporating representative data on women.
Algorithmic bias: Sometimes, bias creeps in due to the design of the learning algorithms. For example, if the training data does not have sufficient female representation in specific contexts, the underlying algorithm automatically learns to eliminate those contexts for women. This is how AI systems learnt strong associations of men with doctors and women with nurses. Algorithms relying on frequentist statistics are heavily prone to these kinds of errors.
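A tiny example makes the frequentist failure mode concrete. The corpus below is invented purely for illustration: because "doctor" co-occurs with "he" more often than with "she", a learner that simply picks the most frequent association will reproduce the skew as if it were ground truth.

```python
from collections import Counter

# Invented miniature corpus of (word, pronoun) co-occurrences,
# deliberately skewed the way historical text often is.
corpus = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

counts = Counter(corpus)

def most_likely_pronoun(word):
    """A purely frequentist learner: pick the pronoun seen most often with the word."""
    return max(("he", "she"), key=lambda p: counts[(word, p)])

print(most_likely_pronoun("doctor"))  # "he"  -- the corpus skew, not ground truth
print(most_likely_pronoun("nurse"))   # "she"
```

Real models learn far richer representations than a co-occurrence table, but the underlying dynamic, that under-represented contexts get crowded out by majority patterns, is the same.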
Human bias: A third source of bias results from human actions and feedback, whether from developers, data annotators, or decision-makers. Whatever the reasons for human bias, the reinforcement learning mechanisms that provide the basis for evolutionary learning also make systems vulnerable to it. Such systems can thereafter pose real danger to vulnerable groups, including women and other minority communities. In the past, AI assistants like Siri and Alexa, which were built to project a female persona, have not been spared such toxic human behaviour.
Fortunately, as the world embraces more AI-based systems, there is more work on unravelling and addressing model biases systematically, which in turn has led to an emphasis on careful data selection and algorithmic fairness. On a different note, AI technologies like Natural Language Processing are helping detect and analyse incidents of violations from news articles, textbooks, company reports, medical case studies, social media content, and so on. These results provide valuable insights to industry leaders, policy planners, and lawmakers to ensure a better and safer society.
AI needs more women to drive ethical development
Experts agree that the biases exhibited by AI systems today are by and large the consequences of a male-dominated AI sector. Greater participation by women at all levels of algorithm design and model training, along with monitoring, assessing, and providing feedback on deployed systems, is the only way to ensure fairness in these systems.
Women have to insist on the inclusive design of AI-powered systems to ensure that future systems are sensitive to gender differences and incorporate these sensibilities in the outputs and recommendations they generate. If researchers Joy Buolamwini and Timnit Gebru had not unveiled the gender and skin-tone biases of commercially deployed face recognition systems, many women across the world would still face the harassment of having to repeatedly establish their identity. The same goes for all underrepresented communities.
Studies in brain science have confirmed that men and women think differently; their approaches to solving a particular problem may be quite different. Recent work by researchers from the University of Georgia has shown how male and female school students differed in style and approach while answering science questions. Interestingly, when these answers were evaluated by different types of AI models, some trained on mixed-gender data and others trained gender-specifically, they produced different outcomes. While correct answers by female students were evaluated as wrong by the mixed-trained models, the gender-specific models recognised that both sets were correct.
This strengthens the belief that including multiple perspectives is a must to make AI systems more inclusive. Presently, the number of data science studies that aim at understanding gender differences is abysmally low. There is a pressing need to conduct many more studies that will enable comprehensive inclusion of women's voices from all strata of society, for all kinds of tasks. This can only happen with more women stepping into all stages of AI activities, from design and development to deployment and research.
Moving beyond bias, AI models have also raised several other ethical concerns regarding privacy, transparency, and accountability. The use of copyrighted material for training large-scale models has been one of the most debated issues. It is widely acknowledged that women, who have historically led efforts in ethics and human rights, can bring valuable perspectives to the AI field. Female AI researchers and policymakers have been at the forefront of advocating for AI regulations that protect users from discrimination and privacy violations.
According to the World Economic Forum's Global Gender Gap Report 2024, only 30% of AI and Big Data professionals globally are women. The overall presence of women in STEM fields has gone up to a paltry 27% in 2024 from 24.4% in 2015 (https://www3.weforum.org/docs/WEF_GGGR_2024.pdf). The gender gap is even more pronounced in leadership roles, where decision-making about AI policies and applications takes place. This abysmally skewed ratio can be attributed to many factors, including societal biases, lack of mentorship, workplace discrimination, and systemic barriers to funding and leadership opportunities. The gap that manifests in today's technology products can only close with larger participation by women at all levels.
Professional bodies like the ACM and IEEE have been playing active roles in mentoring women students and professionals, connecting them to female leaders and entrepreneurs, creating networking opportunities, and sharing advice on navigating work-life balance. But none of this can work without an appropriate social support infrastructure to reduce the burden of housework that falls on women by default.
When it comes to AI skill penetration among women, the Stanford AI Index 2024 cites India as the leader with a penetration rate of 1.7, followed by the United States (1.2) and Israel (0.9) (https://indiaai.gov.in/article/india-leads-global-ai-talent-and-skill-penetration). Reports by Nasscom also state that the willingness of Indian women to be part of the generative AI initiative is quite high; their actual participation in skilling programmes, however, remains low. If rectified, this can augur well for India's ongoing efforts to build AI-powered solutions for healthcare, agriculture, and education, which are enablers for building sustainable smart living spaces. To succeed, more women have to be at the forefront of technology, provide leadership, and be part of decision-making bodies.
A study by McKinsey found that companies with gender-diverse executive teams were 25% more likely to have above-average profitability. Clearly, AI-driven solutions have a long way to go when it comes to being inclusive and representative of different genders, cultures, and socio-economic backgrounds. Women can take the lead in driving innovation for social good by pioneering AI-driven technology products that benefit society as a whole. Companies also have to ensure more inclusive workspaces. With more women participating in the development process, products will be able to meet the needs of a more diverse audience.
For a long time, women have tried to fit into a technology space that was designed by men, for men. Social media, with its intent to maximise reach, has helped transform women from passive observers into active participants on the web. The next step is to capitalise on this presence, get under the skin of these technologies, and be vocal about women-specific needs. If we want agents to clone us, let's teach them. Lead from the front. And the time starts now.
(Lipika Dey is a Professor of Computer Science at Ashoka University.)
(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)