This quandary worsened with a new story describing how Amazon Alexa has its "training" done by humans who listen to audio from users' homes and offices. No one using the devices knew this was happening.
It's fair to assume that no one knows all the home assistants do this. Amazon, Apple, Google, Microsoft, and Samsung all have humans reviewing audio recorded through these devices. A new report from Microsoft examining consumer adoption of voice and digital assistants shows that four out of 10 people are stressing out about digital assistant privacy and security. Yet it's pretty clear now that people with these microphones in their homes aren't aware they're being used -- as microphones, by companies -- to do things outside user control.
Relying on information from Amazon Alexa-training team members, given on condition of anonymity, it was reported that the team members listen to "as many as 1,000 audio clips per shift," and "use internal chat rooms to share files when they need help parsing a muddled word — or come across an amusing recording." Like, "a woman singing badly off-key in the shower, say, or a child screaming for help," the report said.
"We have strict technical and operational safeguards, and have a zero tolerance policy for the abuse of our system," an Amazon spokesperson told press. "Employees do not have direct access to information that can identify the person or account as part of this workflow."
Unsurprisingly, a lot of people were not thrilled to hear that a device they joked about spying on them might actually be doing some spying. Amazon told us that recordings were only captured after the device heard its "wake word." But there was a pretty emotional reaction to the very visceral, invasive-feeling knowledge that a person (any person) could be listening to them at any time. And maybe making fun of them, or, just possibly, helping them in an emergency.
To harm or to help is the double-edged sword of the surveillance conversation, of course. Yet in this case the emotional conflict comes with a kick, in that we invited these devices into our homes, or were given them as gifts, knowing full well what a microphone in the house actually means.
Anyway, it gets worse. This week the Alexa-Echo bombshell report had a follow-up saying the "team auditing Alexa users' commands has access to location data and can, in some cases, easily find a customer's home address, according to five employees familiar with the program." The report further claimed:
"In a demonstration seen by Bloomberg, an Amazon team member pasted a user's coordinates, stored in the system as latitude and longitude, into Google Maps. In less than a minute, the employee had jumped from a recording of a person's Alexa command to what appeared to be an image of their house and corresponding address."
Amazon quickly responded with a statement.
"Access to internal tools is highly controlled, and is only granted to a limited number of employees who require these tools to train and improve the service by processing an extremely small sample of interactions," the company told Bloomberg. "Our policies strictly prohibit employee access to or use of customer data for any other reason, and we have a zero tolerance policy for abuse of our systems. We regularly audit employee access to internal tools and limit access whenever and wherever possible."
It's difficult to believe that anyone smart enough to use a voice assistant wouldn't think that any of this is possible. After all, it's a microphone, they've got to train it somehow, and all that data, including location, is part of your account.
And I kind of believe Amazon when it doubles down on how it controls and secures data, and even that it audits and is super-intense about company policy. I mean, compare Amazon's (known) breach record with, er, other data and surveillance capitalists'. There was one breach in 2018 exposing names and email addresses, and a forced password reset in 2015.
I don't know about you, but my willingness to accept a statement about a tech company's internal rules is in the "yeah, we'll see" stage after literally every Facebook statement over the past ten years relying on the "it's against the rules" excuse to avoid accountability. We have all absolutely soured on data-mongers and their intrusions, and on being asked to trust that those companies' rules will protect us. Data and surveillance capitalism has become a multi-billion-dollar industry based on the "take first, ask permission later" principle of canoodling with consumers. We're not stupid; we're just stuck in it.
And then there's the other thing that can't be trusted with power and access to our lives: People.
Where there are jobs, there are creeps who get hired to do them. The fear of someone abusing Alexa (or Siri, or Google Assistant) has precedent. Last year, a Facebook employee was caught, then fired, for using information he accessed within the company to stalk women (Facebook said what he did was against the rules). After that hit the headlines, other Facebook workers told Motherboard that "multiple people had been terminated for abusing access to user data, including for stalking exes," and described three additional instances "where people were fired because they mishandled data, one of which included stalking." Google has also fired employees for accessing user data and stalking people. The stalking of exes and celebrities by Uber employees is so well-documented, however, that the ride-sharing company won this horrible little unpopularity contest long ago.
Alright already, you say: We get it. I think we've all adjusted to the current state of things. The conveniences of these sci-fi innovations are pretty great, helpful, maybe life-saving. We can practically set our smartwatches to reading reports about privacy abuses and security terribleness from the companies making them (and worse, from opportunistic or ambivalent companies trying to turn our DNA into dollars).
Clinical psychologist Brock Chisholm told Motherboard in an interview about our surveilled lives, "The reaction from the mental health community has been similar to society at large, which is that we've given up on trying to protect ourselves."
But it seems like the mental health community would be especially invested in the effects of tech surveillance and personal self-defense. Like my question about where you go when you want to feel like you're not being watched. Chisholm said, "The impact these different forms of surveillance has on any of us depends on a couple of things: how aware we are that we're being watched, and what we think the motivation is for surveillance."
Those effects, Chisholm said, are "as mentally taxing as mental disorders like depression, and can even cause symptoms similar to post-traumatic stress disorder." That PTSD is what's called "flash-forwards PTSD": basically, when you think through the worst outcome of being watched all the time.
I have reported on hacking and infosec for over ten years. I've focused on the perspective of the hackers and the people who are at the end of the line in those attacks, the people most at-risk and least protected by anyone. I've learned a lot. One thing I've learned is that nothing makes people feel more alone than any corporation's concept of community. Another is that when it comes to life under surveillance, common sense prevails.
I don't have a smart assistant in my old, rent-controlled Edwardian apartment here in San Francisco. The city around me is saturated in money and the shining promises of technology, but I worry about my freedoms, my sanity, and my friends. I keep all cameras covered or unplugged, microphones too, and I'm not trusting anything or anyone until they've earned it.