"On 15 March 2016, an artificially intelligent (AI) software programme called AlphaGo defeated the world champion of the ancient board game Go. The game is immensely complex: the number of possible games exceeds the number of atoms in the observable universe by hundreds of orders of magnitude. AlphaGo's victory, winning the series four-to-one, was emphatic. It also showcased significant advances in AI's ability to recognise obscure patterns, learn new ones and adapt strategies to changing circumstances.
"Yet, just over a week after AlphaGo's impressive victory, a new chatbot called Tay exposed a darker side to AI. Designed to engage in friendly conversation with people online and assist them with Microsoft services, Tay was distinctive in that "she" learned from her online interactions. Upon Tay's public release, a coordinated barrage of abuse and incessant trolling by Twitter users taught Tay the wrong lessons. The programme was corrupted into spewing racist, sexist and xenophobic comments, revealing the potential for flaws in the design and programming of AI, as well as the uneasy interaction between artificial intelligence and the natural kind..."