“Alexa, are you trolling me?”
Amazon made its Alexa Answer program available to the general public this week after several months of beta testing, opening up its artificial intelligence assistant to crowdsourced recommendations for unprogrammed questions.
The process is relatively simple. Anyone can go to the Alexa Answers site, where questions frequently posed to the smart assistant are published in under 300 characters. The answers — which do not require a citation — are fed into Alexa’s algorithms. The responses are rated by other Alexa Answers users via a star system on the site, and by Alexa users via a “Did that answer your question?” audio prompt.
The most frequently answered questions on the website Thursday included: “Can dogs have oranges?” “What’s the longest song in the world?” and “When was the Ferris wheel invented?”
Amazon says it has put in place mechanisms — both automatic and manual — to prevent misuse. But experts and users have expressed concern that the program could still be overrun by bad actors, as has been the case with some previous big tech crowdsourcing projects.
“They’re playing with fire, really,” said Ginger Gorman, an online hate expert and the author of “Troll Hunting.”
When the Web was created decades ago, it was envisioned as a place to expand knowledge with potentially limitless resources. But as just a handful of tech giants have increasingly dominated the way consumers search, shop and connect, that shift has introduced significant dangers as those companies struggle with how to source and surface information, particularly as consumers have less patience for scrolling on their phones.
In particular, tech companies have struggled to harness the wisdom of the crowd without falling prey to Internet trolls or disinformation. Companies from Wikipedia to Google have attempted to use crowdsourced information to improve their knowledge bases, frequently with severe growing pains as they figure out security measures.
When they fail, the results are often spectacularly bad. It took only a few hours for Twitter users to train Microsoft’s AI-powered bot Tay to become racist and genocidal in 2016. A few years before, amateur sleuths on the online message board Reddit misidentified Sunil Tripathi as one of the Boston Marathon bombers.
Gorman said sites such as Wikipedia fight trolls by employing an intensive editing structure. She added that, from the outside, that appears to be something Alexa Answers lacks.
According to Amazon, answers will be automatically rejected if they are “obscene, threatening, defamatory, invasive of privacy, or infringing of intellectual property rights (including publicity rights).” Fast Company reported that Amazon’s algorithms will filter out profanity and questions with a political angle, with at least some human editors involved.
Amazon spokeswoman Kerry Hall said the company has filters that flag contributors who edit an answer multiple times, and that help prevent potentially offensive questions and answers from surfacing. Users can also up- or downvote answers to help weed out problems.
“High quality answers are important to us, and this is something we take seriously — we will continue to evolve Alexa Answers,” she added in an email.