Artificial intelligence (AI) is already deeply embedded in many areas of our lives, and society’s reliance on it is set to increase at a pace that is hard to comprehend. AI isn’t confined to futuristic science fiction films – the robots you’ve seen on the big screen that learn how to think, feel, fall in love, and subsequently take over humanity. The AI around us right now is much less dramatic and often much harder to identify.

Much of it is simply machine learning, and our devices do this all the time. Every time you input data into your phone, your phone learns more about you and adjusts how it responds to you. Apps and computer programmes work the same way. Any digital programme that displays learning, reasoning or problem solving is displaying artificial intelligence – so even something as simple as a game of chess on your desktop counts.

The problem is that the starting point for artificial intelligence always has to be human intelligence. Humans programme the machines to learn and develop in a certain way, which means they pass on their unconscious biases. The tech and computer industry is still overwhelmingly dominated by white men. In 2016, there were ten large tech companies in Silicon Valley – the global epicentre of technological innovation – that did not employ a single black woman. Three companies had no black employees at all.

When there is no diversity in the room, the machines learn the same biases and internal prejudices as the majority-white workforces that develop them. And with a starting point grounded in inequality, machines are destined to develop in ways that perpetuate the mistreatment of, and discrimination against, people of colour. In fact, we are already seeing it happen.

How can machines be racist?

In 2017, a video went viral on social media of a soap dispenser that would only automatically release soap onto white hands.
The dispenser was created by a company called Technical Concepts, and the flaw occurred because no one on the development team thought to test their product on dark skin. A study in March last year found that driverless cars are more likely to drive into black pedestrians, again because their technology has been designed to detect white skin, so they are less likely to stop for black people crossing the road.

It would be easy to chalk these high-profile viral incidents up as individual errors, but data and AI specialist Mike Bugembe says it would be a mistake to think of these problems in isolation. He says they are indicative of a much wider issue with racism in technology, one that is likely to spiral in the next few years.

‘I can give you so many examples of where AI has been prejudiced or racist or sexist,’ Mike tells Metro.co.uk. ‘The danger now is that we are actually listening and accepting the decisions of machines. When computer says “no”, we increasingly accept that as gospel. So, we’re listening now to something that is perpetuating, or even accentuating the biases that already exist in society.’
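The soap-dispenser failure can be sketched in miniature. A purely hypothetical illustration (not Technical Concepts’ actual code, and the reflectance numbers are invented): if a sensor’s detection threshold is calibrated only against light-skinned testers, the threshold ends up too high to register darker skin at all.

```python
# Toy sketch of biased calibration: the dispenser decides "hand present"
# when reflected light exceeds a threshold derived from its test data.
# All names and values here are illustrative assumptions.

def calibrate_threshold(reflectance_samples):
    """Set the detection threshold just below the dimmest sample seen."""
    return min(reflectance_samples) * 0.9

def hand_detected(reflectance, threshold):
    """True when reflected light is bright enough to trip the sensor."""
    return reflectance >= threshold

# Calibration data gathered only from light-skinned testers
# (reflectance on a 0-1 scale).
light_skin_samples = [0.70, 0.75, 0.80]
threshold = calibrate_threshold(light_skin_samples)  # ~0.63

print(hand_detected(0.72, threshold))  # lighter skin: soap dispensed
print(hand_detected(0.30, threshold))  # darker skin: nothing happens
```

The bug is not in the detection logic, which works exactly as written; it is in the unrepresentative calibration data, which is the same pattern the article describes at the scale of whole machine-learning systems.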
Read more: http://metro.co.uk/2020/04/01/race-problem-artificial-intelligence-machines-learning-racist-12478025/?ito=cbshare
