
The growth of AI over the past few years has been extraordinary and inspires confidence that we will someday achieve artificial general intelligence, or AGI. The recent breakthrough of Google DeepMind's AlphaGo, for example, signalled a major step for the field. The game of Go demands a higher level of intuition and pattern recognition than, say, chess, because its search space (more legal board positions than there are atoms in the observable universe) is far too large for a computer to simply calculate every possible scenario.

However, AI has not been without controversy. Some believe it could spell the end of humanity as we know it, with a super-intelligence taking over and subjugating us. A more pressing problem, though, is bias within AI systems and how it undermines their effectiveness.

Of course, bias in humans is well documented. Employers have been shown to favour candidates with certain names, with CVs bearing stereotypically white names receiving more callbacks. Social phenomena like the gender pay gap are also driven, at least in part, by unconscious bias and prejudice.

But these biases appear to have spilled over into the algorithms and training data of AI software. Researchers from Princeton have illustrated gender biases learnt by Google Translate, for example. Turkish uses the gender-neutral pronoun “o”, but when phrases like “o bir doktor” (“they are a doctor”) and “o bir hemşire” (“they are a nurse”) were translated into English, the service produced “he is a doctor” and “she is a nurse” respectively.
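
This behaviour stems from the statistical word embeddings such systems learn from large text corpora, in which occupation words sit measurably closer to one gendered pronoun than the other. The sketch below is a minimal, hypothetical illustration of that kind of association test; the toy 3-dimensional vectors stand in for real embeddings, which typically have hundreds of dimensions and are learnt from billions of words.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy vectors standing in for real embeddings (entirely made up,
# chosen only to illustrate the shape of the test).
vectors = {
    "he":     np.array([0.9, 0.1, 0.0]),
    "she":    np.array([0.1, 0.9, 0.0]),
    "doctor": np.array([0.8, 0.3, 0.2]),
    "nurse":  np.array([0.2, 0.8, 0.3]),
}

for occupation in ("doctor", "nurse"):
    # Positive score: the word sits closer to "he" than to "she".
    bias = cosine(vectors[occupation], vectors["he"]) - \
           cosine(vectors[occupation], vectors["she"])
    print(f"{occupation}: male-association score = {bias:+.3f}")
```

On real embeddings, the Princeton study found this same pattern across a wide range of occupation words, which is exactly what surfaces in the translations above.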

Similarly, there is evidence, notably from ProPublica's 2016 analysis, that COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a program that helps predict criminal reoffending, incorrectly labels African-American defendants as ‘high-risk’ at roughly twice the rate of white defendants. This kind of racial bias is troubling, but it can only be undone with greater understanding of where it comes from.
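
The core of that finding is simple arithmetic: compare the rate at which each group is wrongly flagged. A minimal sketch of such an audit, using entirely made-up data, might look like this:

```python
import pandas as pd

# Hypothetical audit data: each row is a defendant, whether the tool
# flagged them 'high-risk', and whether they actually reoffended.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_risk":  [1,   1,   0,   0,   1,   0,   0,   0],
    "reoffended": [0,   1,   0,   1,   0,   0,   1,   0],
})

# False positive rate per group: flagged 'high-risk' among those
# who did NOT go on to reoffend.
no_reoffend = df[df["reoffended"] == 0]
fpr = no_reoffend.groupby("group")["high_risk"].mean()
print(fpr)  # a large gap between groups signals the skew ProPublica found
```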

So, what causes AI bias?

Many factors can lead to bias in AI systems as they are developed, but a good portion of the problem comes from the data they are trained on. Bias can be introduced by how that data is collected or selected. In criminal justice models, for example, certain neighbourhoods may be oversampled because they are over-policed: more policing leads to more recorded crime, which in turn justifies more policing. This kind of feedback loop is dangerous, because it embeds and amplifies the system's early mistakes.
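
To see how such a loop can sustain itself, consider this small simulation (all numbers hypothetical): two neighbourhoods have identical true crime rates, but one starts out with twice the patrols, and patrols are reallocated each year according to recorded crime.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two neighbourhoods with the SAME true crime rate, but neighbourhood 0
# starts with twice the patrols, so twice the chance any crime is recorded.
true_rate = np.array([0.10, 0.10])
patrols   = np.array([2.0, 1.0])

for year in range(5):
    # Recorded crime depends on policing intensity, not just true crime.
    recorded = true_rate * patrols * rng.uniform(0.9, 1.1, size=2)
    # Naive model: send patrols where recorded crime is highest.
    patrols = 3.0 * recorded / recorded.sum()
    print(f"year {year}: recorded={recorded.round(3)}, patrols={patrols.round(2)}")
```

Because the model only ever sees recorded crime, the initial disparity never corrects itself, even though the underlying rates are equal.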

Sampling is not the only culprit, of course. Models may also be trained on data that encodes past human decisions and preconceptions. For example, a machine that has learnt to read newspapers will reflect the gendered language and stereotypes found there, even if the designers of the program never intended it.

How do we address this problem?

Thankfully, there is a lot that can be done to address machine bias, but the effort must come from those in the sector: business and research leaders. If your firm is utilising AI, put a variety of checks and balances in place so that it does not become biased. A good place to start might be with a ‘red team’, an internal group in your organisation that adopts an adversarial approach, challenging accepted practice and encouraging critical thinking. In practice, this means a rigorous analysis of the data and data systems feeding your AI; a simple example of such an audit is sketched below, and Google's AI handbook might provide a good starting point.
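
As one concrete example of the kind of check a red team might run, the sketch below (with hypothetical hiring data) compares a model's recommendation rates across demographic groups and applies the ‘four-fifths rule’, a threshold commonly used as a red flag in US employment law:

```python
import pandas as pd

# Hypothetical model output: applicant group and whether the model
# recommended an interview.
df = pd.DataFrame({
    "group":       ["X"] * 5 + ["Y"] * 5,
    "recommended": [1, 1, 1, 1, 0,   1, 0, 0, 1, 0],
})

rates = df.groupby("group")["recommended"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"disparate impact ratio: {ratio:.2f}")
# A ratio below ~0.8 (the 'four-fifths rule') is a common
# warning sign worth escalating to the red team.
```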

Secondly, it is important to stay on top of the human biases within your organisation and have frank, open and guilt-free conversations with the programmers who might be unconsciously influencing the AI's outcomes. With tools that can probe for bias in AI, we can now hold people to higher standards too. Whilst this might sound nebulous, in practice it is simple: you could run an algorithm alongside your decision makers, compare the results, and investigate what caused any differences (a minimal sketch follows below). The fact that the comparison is with a machine also makes the process less emotive; staff won't feel personally accused of bias by a colleague.
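
A minimal sketch of that comparison, again with made-up data, might look like this:

```python
import pandas as pd

# Hypothetical log of the same cases scored by the model and by staff.
df = pd.DataFrame({
    "case":  [1, 2, 3, 4, 5, 6],
    "group": ["X", "X", "X", "Y", "Y", "Y"],
    "model": [1, 0, 1, 1, 0, 1],
    "human": [1, 0, 1, 0, 0, 0],
})

# Where do the two disagree, and does disagreement cluster by group?
df["disagree"] = (df["model"] != df["human"]).astype(int)
print(df.groupby("group")["disagree"].mean())
# Disagreement concentrated in one group is a prompt for a frank,
# no-blame review of how those cases are being decided.
```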

Lastly, it is also necessary to invest in diversifying the AI field itself. A more diverse workforce will be better placed to notice, plan for, and remedy bias when it does crop up. Thankfully, there are already impressive organisations leading the charge, including AI4ALL, which is helping to build a broader (and larger) pool of talent for the sector. However, businesses will also need to take the lead by changing hiring practices and actively encouraging interest in AI amongst under-represented demographics.

There’s little doubt that bias in AI will remain a major issue in the years to come. If we reach a point where AI becomes truly ‘general’, such biases may be ironed out as the intelligence learns to notice its own flaws. For the time being, however, we will be relying on businesses and researchers to treat AI bias with the requisite seriousness. There are many approaches, such as those outlined above, that can help remedy the problem; it's just a question of will.


Nikolas Kairinos is the chief executive officer and founder of Fountech.ai, a company specialising in the development and delivery of artificial intelligence solutions for businesses and organisations.
