Accenture and SAP Address Diversity Issues: Solutions & Strategies

SeniorTechInfo

Addressing Generative AI Bias: Recommendations for APAC Organisations

Bias in generative AI, rooted in the data on which models are trained, remains a significant challenge for organizations, as leading data and AI experts have highlighted. They stress that APAC organizations must take proactive measures to engineer around or eliminate bias as they integrate generative AI use cases into their operations.

Teresa Tung, senior managing director at Accenture, pointed out that generative AI models, predominantly trained on internet data in English with a North American bias, may perpetuate viewpoints prevalent on the internet. This lack of diversity in training data poses issues for tech leaders in APAC countries.

According to Tung, technology and business talent in non-English-speaking regions is at a disadvantage because experimentation in generative AI is driven primarily by English speakers. This disparity not only slows innovation outside English-speaking markets but also reinforces the biases present in the training data.

AI bias could produce organizational risks

Kim Oosthuizen, head of AI at SAP Australia and New Zealand, emphasized gender bias in AI, citing a Bloomberg study in which women were underrepresented in AI-generated images of higher-paid professions. This representational harm perpetuates stereotypes and reinforces existing biases.

Oosthuizen warned that without addressing bias in AI training data, the problem could worsen as a large proportion of internet images is predicted to be artificially generated in the near future. This could lead to exclusionary outcomes, particularly in critical areas like healthcare and hiring processes.

AI model developers and users must engineer around AI bias

To combat biased data in generative AI models, enterprises need to adapt their design and integration strategies. Tung suggested injecting new data sources or creating synthetic data to balance the training set. Testing for AI bias should be a standard practice, akin to quality assurance for software code.
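Tung's suggestion that bias testing become as routine as software QA can be made concrete with a simple fairness metric run against model outputs. The sketch below is a minimal, hypothetical example, not a vendor tool: it computes a demographic parity gap (the spread in positive-outcome rates across groups) on toy data, the kind of check a team might wire into a test suite. The function name and data are illustrative assumptions.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rate between any two groups.

    A gap near 0 means the model treats the groups similarly on this
    metric; a large gap is a signal to investigate the training data.
    """
    rates = []
    for g in set(groups):
        vals = [p for p, grp in zip(predictions, groups) if grp == g]
        rates.append(sum(vals) / len(vals))
    return max(rates) - min(rates)

# Toy example: 1 = "candidate recommended", with two demographic groups.
preds = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, groups)
print(f"parity gap: {gap:.2f}")  # group a rate: 0.75, group b rate: 0.25
```

In practice a team would set a tolerance for the gap and fail the build when it is exceeded, just as a failing unit test blocks a release; libraries such as fairlearn offer production-grade versions of metrics like this.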

Guardrails outside AI models can help correct biases before outputs reach end users. Tung illustrated this with an example of using generative AI to identify vulnerabilities in code, where expert validation tests can mitigate bias in the results.
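A guardrail of the kind Tung describes can sit entirely outside the model: the application calls the model, runs validation checks on the output, and only releases it if the checks pass. The sketch below is an illustrative pattern under that assumption, not a specific product's API; `fake_model` and the single regex check are hypothetical stand-ins for a real model call and a real validation suite.

```python
import re

def guardrail(generate, prompt, checks):
    """Call a generator, then run named validation checks on its output.

    If any check fails, the raw output is withheld and a flagged
    placeholder is returned instead of reaching the end user.
    """
    output = generate(prompt)
    failures = [name for name, check in checks.items() if not check(output)]
    if failures:
        return f"[blocked: failed {', '.join(failures)}]"
    return output

# Hypothetical stand-in for a real model call.
def fake_model(prompt):
    return "The nurse said she would help; the engineer said he would too."

checks = {
    # Flags outputs that assign gendered pronouns by default.
    "no_gendered_defaults": lambda text: not re.search(r"\b(he|she)\b", text, re.I),
}

print(guardrail(fake_model, "Describe a nurse and an engineer.", checks))
```

The same wrapper shape accommodates the expert-validation tests Tung mentions for generated code: each validator becomes one more entry in `checks`, and failing outputs are routed to review rather than shipped.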

Diversity in the AI technology industry will help reduce bias

Oosthuizen stressed the importance of gender diversity in AI by advocating for women to have a strong presence in AI decision-making processes. Including diverse perspectives in all aspects of the AI journey is crucial for reducing bias and ensuring fair outcomes.

Tung echoed the importance of representation across demographics and emphasized the need for multi-disciplinary teams in AI development. Diversity in AI teams can lead to more inclusive and accurate AI applications across various industries.
