Friday Fave: When AI misjudgment is not an accident

Trick or treat? It’s Halloween, and this week I’ve selected a suitably disturbing article to keep in the spirit of things!

The issue of bias is widely recognised when discussing artificial intelligence (AI) and algorithms. How can we trust algorithms to provide us with reliable results? Most discussions have focused on unconscious bias, where an algorithm is unknowingly shaped by its developers’ beliefs and values. Then there’s cognitive bias, where we make flawed decisions based on various factors. For example, we may believe one person over another in a debate because their argument appears to be logical when, in fact, it’s confirmation bias at work: their argument supports our existing opinion.

These types of bias, although concerning, can be managed: code with a diverse team, draw on a range of data sources, monitor results, and adjust datasets or algorithms as needed.

But now, there’s intentional or deliberate bias. This is not unconscious; this is deliberate interference with algorithms to intentionally create bias. Think sophisticated cyber attacks, fake news and rivalry between organisations. I think we’re familiar with the impact of deliberate interference such as the Russian manipulation of Facebook during the US elections. But how might this be used against companies?

Biased data could also serve as bait. Corporations could release biased data in the hope that competitors would use it to train artificial intelligence algorithms, diminishing the quality of the competitors’ own products and consumer confidence in them.

There are more examples in the article, but start considering the bigger picture: deliberate bias deployed between hostile governments could have far-reaching impact.

The scariest part (and it’s not a Halloween trick or treat) is how simple the process could be. Algorithms could be fed biased data, or programmed to amplify existing biases. It’s like malicious viruses or malware on steroids – what is being labelled as “poisoned algorithms”.
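For the technically curious, here’s a minimal sketch of just how simple poisoning can be. This is my own illustration, not from the article – the dataset, model and 30% flip rate are all made-up assumptions. It trains the same model twice, once on clean labels and once on labels an attacker has quietly flipped, and compares the results:

```python
# A minimal, illustrative sketch of label-flipping "data poisoning":
# an attacker who can tamper with the training data quietly degrades
# the model. Dataset, model and poison rate are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "real" training data
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean labels
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned copy: flip the labels on 30% of the training examples
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Even this crude label flipping measurably degrades the model, and a real attack would be far subtler – skewing outputs in a chosen direction rather than simply hurting accuracy, which makes it much harder to spot.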

What can we do to protect our algorithms? Currently, processes for managing unconscious bias include workforce diversity, expanded access to diversified data, and built-in algorithmic transparency. But is this enough?

The authors suggest this will be a systemic challenge, requiring constant review and further development to ensure we’re producing AI systems that benefit us – not exploit us!

Happy Halloween!

Read: https://blogs.scientificamerican.com/observations/when-ai-misjudgment-is-not-an-accident/

This post is part of a weekly Friday Faves series contributed by the team at Ripple Effect Group. Read the entire series and collections from other team members here.