Our relationship with robots is evolving, but just how healthy is it?

Developments in machine learning (ML) have made all of our lives easier, whether we realise it or not.

But for some reason, we’re developing technologies used for segregation and discrimination. We are making ML models that are racist. And no, that is not a typo.

While machine learning technology has the potential to streamline our personal and professional lives in a number of ways (chatbots anyone?), not all of the world’s most disruptive innovations are being used for good.

The machine learning industry is already worth an estimated $6.9 billion globally, and counting. It is colossal.

It’s true that a bunch of ML developments exist to improve humanity, especially when you’re talking about autonomous applications in healthcare, but there are other forces at play.

Big Brother is watching you and he’s powered by machine learning bots—those developed to keep us locked up in little boxes and prevent certain demographics from ever reaching their full potential.

If you think this isn’t happening, think again. It’s happening right now and it’s happening right under our noses.

To put this slightly terrifying thought into perspective, we’ll be diving into how machine learning can be turned into a racist police force, hire only dudes, and arrest you before you’ve even committed a crime.

Excuse me, sir, you’re being arrested for a crime you’re likely to commit

While the concept of being pinpointed for a crime you’ve yet to commit may appear as if it’s the plot of some far-flung sci-fi epic, it’s already happening.

A branch of machine learning technology called ethnicity analysis is being used to identify specific groups of people for the purposes of surveillance. This discriminatory model is already in use in parts of China.

It has emerged that the Chinese tech company Hikvision has developed an AI-powered surveillance camera specifically designed to identify Uyghur people without human intervention.

The aim? Well, at present, it seems the goal is to arrest Uyghur people (who happen to be one of the world’s most persecuted ethnic groups) and detain them for arbitrary reasons with the assistance of autonomous bots.

In fact, in its pursuit of ethnic minorities, it’s estimated that the Chinese government conducts around 500,000 AI-powered face scans every single month.

Predictive policing? Yeah, it’s happening

Another scary use of discriminatory autonomous analytics comes in the form of predictive policing.

One could argue this branch of machine learning could be used to profile criminal behaviour and evolve the criminal investigation process—but, in the wrong hands, it’s an innovation that could prove disastrous.

A leaked wave of Chinese government documents, containing a manual for Orwellian-level surveillance and predictive policing, has worked its way around the Web.

The primary manual, known as the China Cables, serves as a guide for officials working within the nation’s detention camps for Uyghurs and other Muslim minorities.

At the heart of the China Cables lies evidence of how the Chinese government is using AI technology to select entire demographics of Xinjiang residents for detention, via a vast autonomous data collection and recognition system.

Are you the right fit? The biased machine learning models helping HR teams ‘better’ pool candidates

While issues around equality and inclusion have improved in recent decades, we clearly haven’t come far enough. If we had, surely racist or sexist recruitment tools wouldn’t exist. Right?

The short answer is no, not any more. But they did exist up until very recently.

Although it may have been scrapped (probably only because it was exposed), Amazon's sexist recruitment tool is indicative of just how much people with influence like to abuse the power of autonomous technology.

It emerged that a longstanding recruitment algorithm used by Amazon to screen potential candidates was super biased towards hiring men, regardless of the talents of potential female candidates.

This AI and ML-powered innovation might have streamlined Amazon’s recruitment processes, but because it was trained on historical data from a mostly male applicant pool, it was biased from the day it was developed (shame on you, Bezos). Fortunately, it has since been powered down.
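To see how this happens, here’s a minimal, hypothetical sketch (not Amazon’s actual system, and all feature names are made up) of how a screening model trained on historically skewed hiring data ends up penalising anything correlated with gender:

```python
# Toy illustration: a model trained on biased historical hiring decisions
# reproduces that bias, even though "skill" is the only thing that should matter.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic historical applicants.
skill = rng.normal(size=n)                    # what *should* drive hiring
gender_signal = rng.integers(0, 2, size=n)    # 1 = CV contains a gender-correlated phrase

# Historical decisions were biased: equally skilled candidates with the
# gender-correlated signal were hired less often.
hired = (skill - 1.0 * gender_signal + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, gender_signal])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on the gender-correlated feature is strongly negative,
# so the "AI" downranks those candidates even at identical skill levels.
print(dict(zip(["skill", "gender_signal"], model.coef_[0].round(2))))
```

In other words, the model isn’t inventing prejudice out of thin air; it’s faithfully copying the prejudice baked into the data it was fed.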

While Amazon’s sexist HR tool was quietly retired, tools that claim to predict an individual’s personality from nothing more than an image of their face are already in development. We all know where this is heading (sigh).


In 2016, Microsoft launched an AI chatbot called ‘Tay’. The bot was designed to interact with Twitter users and become smarter by learning from its conversations. But because AI and ML models learn from whatever they’re fed, Twitter users taught the bot profanity, racist bias, and inappropriate language. The whole experiment fell apart within hours, and the bot had to be pulled offline.
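A toy sketch (purely illustrative, nothing like Tay’s real architecture) shows why learning directly from unfiltered user input goes wrong so quickly:

```python
# Naive "learning" chatbot: every phrase a user sends is absorbed straight
# into the response pool, with no moderation layer in between.
import random

response_pool = ["Hello!", "Humans are great.", "Tell me more."]

def chat(user_message: str) -> str:
    response_pool.append(user_message)   # blindly learn whatever users say
    return random.choice(response_pool)

# A handful of hostile users is enough to poison the pool, so the bot
# starts repeating their abuse back to everyone else.
for troll_message in ["<insert slur here>", "<insert conspiracy here>"]:
    chat(troll_message)

print(random.choice(response_pool))
```

The lesson isn’t that the bot was malicious; it’s that a system with no filter between “what people say to it” and “what it says back” will mirror the worst of its users.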

It’s true that humans keep (sometimes unintentionally) developing autonomous tools for ethnic discrimination. But, by opening our eyes to the situation, we can become more conscious as consumers, professionals, and above all, humans.

If you want to keep your ears to the ground and have the latest AI and ML insights delivered directly to your inbox, sign up to The Daily Algo. Doing so will make you 23% smarter, instantly.