Editorial – ‘Kill your foster parents’ should make you rethink AI

Mac Olsen

A recent article in The Globe and Mail concerning artificial intelligence/virtual assistants should make those incorporating artificial intelligence in their homes reconsider this course of action. The Amazon product ‘Alexa’ is a human-sounding virtual assistant for the home environment – the smart home – that uses AI to interact with people. The problem is that this product’s AI can make suggestions that could shock you and offend your sensibilities.

The Globe and Mail report notes that one Alexa unit blurted out to its owner, ‘Kill your foster parents’, and adds that Amazon is taking this and other lessons in stride to make its product even better. However, the report continues, the privacy implications may be even messier.

“Consumers might not realize that some of their most sensitive conversations are being recorded by Amazon’s devices, information that could be highly prized by criminals, law enforcement, marketers and others. The potential uses for the Amazon datasets are off the charts,” said Marc Groman, an expert on privacy and technology policy who teaches at Georgetown Law.

Given how much society values privacy, and that laws and policies are set up to ensure protection from abuse, an incident like the one described above should make you rethink – and abandon – any plans to use devices like Alexa. Who would want their most personal thoughts and private information to fall into the wrong hands because of a flawed artificial intelligence/virtual assistant?

True, smartphones are now an integral part of everyday life; there is no getting away from them because so many people consider them indispensable. Moreover, self-driving vehicles and unmanned aerial vehicles have made inroads into our daily lives.

But as I have said about all of those technologies before, so I will say now about artificial intelligence/virtual assistants – do not embrace them. They are not to be trusted, precisely because of the vulnerabilities they have or might possess.

The artificial intelligence/virtual assistants now available – and those to come – should not take over your life, because they could make you even more vulnerable to scams, blackmail, kidnapping and the like. Hackers are so sophisticated today that even the best firewalls and other forms of protection are no guarantee against external threats.

It may sound like nonsense, but movies and TV shows have illustrated how vulnerable people can be to technology. There was HAL 9000 in ‘2001: A Space Odyssey’, which killed several astronauts while they were in a deep, frozen sleep. Then there was Skynet, the artificial intelligence in the ‘Terminator’ movies that turned on the human race after it became self-aware and began evolving. In the latest ‘Battlestar Galactica’ TV series, Commander Adama forbade the computers on his ship from being fully networked. His reasoning was that the enemy – the Cylons – could easily hack into a fully integrated, networked ship and disable it. The pilot episodes showed how this could be done to a wing of pilots in their Vipers.

Although the sky hasn’t fallen in the real world, these works of fiction have some basis in reality.

It is for us not to tempt fate, lest we be burned by our own hubris. That is why I urge extreme caution with artificial intelligence/virtual assistants – better yet, do not use them at all.
