Google Brings People and AI Closer Together (but not the way you think)
Google has been showing a lot of AI technology recently, garnering the Oohs and Aahs of audiences and tech blogs. For example: Gmail will now auto-complete your emails. You type: “I had” and Google types “a really great time at your party last night”. Enter. Done.
This may be a cool application, but it is misguided. Isn't it enough that we don't know how to spell the endings of long words anymore, thanks to Autocomplete? Do we really want humans to not even have to think about the second half of their sentences? As a faculty member, I encounter students daily who have a hard time putting together sentences, chaining those sentences into paragraphs, and then combining those paragraphs into high-school-level essay arguments. Off-campus, I see baristas open the calculator app to figure out the change for $3.50 from a $5 bill.
Yeah, yeah, yeah. I know. I'm such a Luddite roboticist. I should realize that technology only helps us be more effective. But how little thinking do we want left to do for ourselves? With each such feature, Google's AI is taking a tiny, annoying bit of thinking out of our lives and adding just a tiny bit of automation. But in the process we lose sight of the fact that thinking is actually good for you!
But the problem is not just that with every such feature Google is making us ever so slightly dumber. An important side effect of Google's AI doing more of the thinking for us is that it is also taking away another sliver of thought diversity and replacing it with more uniformity. What if I wanted to say something slightly different from what Google's machine learning found that 200 million other people wrote when they were in a similar situation? Will I really insist on my phrasing? The text is already there, a mere Enter away. It's so easy! Do I really care that much about those subtleties? Who has time for that? I have ads to click on! Enter. Done.
The result is that we increasingly sound the same. And that's good. For machine learning algorithms at least. The more predictable we are, the easier it is to predict us, the better the AI that is predicting us seems to be! Everyone wins! 🎉
At the same I/O event that announced Smart Compose, we saw the new Google conversation agent booking a restaurant slot and a haircut, much to the amazement of the Internet (“Turing Test: Solved!”). This demo is both impressive and troubling. Impressive because they pulled off something that seems really hard. So, kudos for the great tech work!
But it is also troubling, because one of the reasons it works so well is also that people have themselves become so much more predictable. As with auto-complete and Smart Compose, the technology-centric, efficiency-driven culture that Silicon Valley promotes makes us type less, think less, look around less, navigate our spaces less consciously (more below), and as a result it drives people to be more similar and more predictable. This predictability includes the human on the other end of the phone.
Here's the full irony. We get daily news stories about how AI is getting smarter and more human-like. But at the same time, we are getting dumber and more robot-like. And that is, at least in part, because we communicate predominantly through technology, so our communication becomes more technologically adapted. A true meeting in the middle.
I increasingly feel that with its latest technology, Google just wants to promote more of this, making sure we don't even have to call the hairdresser. God forbid we hold a human-to-human conversation. With voice! And pauses! The humanity! That might only screw up the next AI down the road.
Finally, Google announced AR overlays for Google Maps. Why do we need this? Just listen to the Google representative on stage, telling us how frustrating it is when it sometimes takes you 20 seconds to realize you were walking in the wrong direction. Yes, that annoying sliver of opportunity left in our world to get lost for a brief moment, despite carrying GPS maps on our bodies 24/7.
When I hear “walking in the wrong direction”, I think of it as an opportunity to look around; to see things we were not explicitly looking for; a last vestige of serendipity. In efficiency culture, we don't want any of that. All of these “problems” are now solved! Again, we become one step more predictable, and as a result AI can get better at predicting us, without that annoying wandering-around noise that messes up the machine learning algorithms' data.
Google Assistant making calls pretending to be human not only without disclosing that it's a bot, but adding "ummm" and "aaah" to deceive the human on the other end with the room cheering it... horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.
— zeynep tufekci (@zeynep) May 9, 2018
Zeynep Tufekci is correct to call Silicon Valley “ethically lost”, and the latest best-of-show features emphasize that. Others, too, say it better than I do, in long form. One thing is sure: as the tech industry marches blindly along, there is an increasing need for academic and activist writers to keep an ideological check on this procession.