Will the internet of things result in predictable people?

Screens over people: two smartphone users in New York City. Photograph: Joseph Reid/Alamy

We’re told that eventually sensors will be everywhere. Not just in phones, tablets, and laptops. Not just in the wearables attached to our bodies. Not just at home or in the workplace. Sensors will be implanted in nearly everything imaginable, and they will be networked, tightly connected, and looking after us 24/7/365.

So, brace yourself. You’ll be monitored all the time and receive fine-grained, hyper-personalised services. That’s the corporate vision encapsulated by the increasingly popular phrase “internet of everything”.

Techno-optimists believe the new world will be better than our current one because it will be “smarter”. They’re fond of saying that if things work according to plan, resources will be allocated more efficiently. Smart grids, for example, will reduce sizeable waste and needless consumption. And, of course, on an individual level, service providers will capitalise on big data and automation to deliver the goods and services we supposedly want more readily and cheaply.

Keanu Reeves and Carrie-Anne Moss in The Matrix (1999): ‘The dystopian vision of The Matrix won’t be created. But even though we won’t become human batteries that literally power machines, we’ll still be fuelling them as perpetual sources of data that they’re programmed to extract, analyse, share, and act upon.’ Photograph: Allstar Picture Library

While this may seem like a desirable field of dreams, concerns have been raised about privacy, security, centralised control, excessive paternalism, and lock-in business models. Fundamentally, though, there’s a more important issue to consider. In order for seamlessly integrated devices to minimise transaction costs, the leash connecting us to the internet needs to tighten. Sure, the dystopian vision of The Matrix won’t be created. But even though we won’t become human batteries that literally power machines, we’ll still be fuelling them as perpetual sources of data that they’re programmed to extract, analyse, share, and act upon. What this means for us is hardly ever examined. We’d better start thinking long and hard about what it means for human beings to lose the ability – practically speaking – to go offline.

Digital tethering in an engineered world

The key issue is techno-social engineering. Techno-social engineering involves designing and using technological and social tools to construct, influence, shape, manipulate, nudge, or otherwise design human beings. While “engineering” sounds ominous, it isn’t inherently bad. Without techno-social engineering, cultures couldn’t coordinate behaviour, develop trust, or enforce justice. Since techno-social engineering is inevitable, it’s easy to get used to the forms that develop and forget that alternatives are possible and worth fighting for.

Think about the world we currently live in. While we benefit immensely from the internet, we’ve become digital dependents who feel tethered to it and regularly pay the steep price of constant connectivity disrupting older personal, social, and professional norms. The old advice to “go offline if you’re unhappy” rings hollow when others constantly demand our attention and when withholding it conflicts with the widespread expectation that being productive and responsible means being online. Amongst other things, being attached to a digital umbilical cord means living daily life under surveillance, showered with laments about unachievable work-life balance, fear of missing out, distracted parents, and screens that are easier to talk to than people.

But the problem runs much deeper, and turns out to be more than the sum of its parts. Georgetown professor Julie Cohen gives the right diagnosis by characterising citizens as losing the “breathing room” necessary to meaningfully pursue activities that cultivate self-development – activities that are separated from observation, external judgment, expectations, scripts and plans. Without freedom to experiment, we run the risk of others exerting too much power over us.

We enjoy this breathing room throughout our lives. We get it in special places, like homes and hiking trails. We cherish it in the in-between spaces, like the walk home from the train or the drive to soccer practice. But none of these locations are sacred. Rather, as the invasive pings of our smartphones demonstrate, they’re always at risk.

Find, gather, serve: the digital self

For the moment, we console ourselves with limited governance strategies. We turn notifications off. We leave devices behind. We take technology Sabbaths and digital detoxes.

Smart homes of the future might follow suit. Perhaps they’ll be programmed to protect some forms of solitude by automating attention-killing tasks. But it’s hard to place much stock in any of this when neither tool nor technique effectively bridges the gap between individual decisions that are deemed counter-cultural and widespread expectations about online commitments.

To make matters worse, it’s difficult to imagine that new forms of pervasive monitoring won’t be invented. And if they are, folks will be told that life gets better by using them. Take, for example, David Rose, author of Enchanted Objects: Design, Human Desire, and the Internet of Things. He pines for the day when we can stop pestering our spouses and children with questions about how they’re doing, and instead look to kitchens lined with “enchanted walls” that “display, through lines of coloured light, the trends and patterns in your loved one’s mood”. Ironically, replacing human interaction with automated reports in this always-on environment eliminates our freedom to be off.

Entrepreneurial visions like this will profoundly influence the world we’re building. Writer and activist Cory Doctorow observes: “A lot of our internet of things models proceed from the idea that a human emits a beacon and you gather as much information as you can – often in a very adversarial way – about that human, and then you make predictions about what that human wants, and then you alert them.” Concerned about the persistent public exposure that these models rely on, Doctorow identifies an alternative, a localised “device ecosystem” that would allow internet of things users to only “voluntarily” share information “for your own benefit”.

Doctorow is right. We need to think about alternatives. And in principle, he’s got a fine idea. But at best, it’s a partial fix.

Want an example of recent techno-social engineering? Look no further than Facebook … Photograph: Karen Bleier/AFP/Getty Images

The find, gather, and serve models Doctorow justifiably critiques hide the deeper problem of pervasive techno-social engineering, and so his solution doesn’t address it. Our willingness to volunteer information, even for what we perceive to be for our own benefit, is contingent and can be engineered. Over a decade ago, Facebook aimed to shape our privacy preferences, and as we’ve seen, the company has been incredibly successful. We’ve become active participants, often for fleeting and superficial bits of attention that satiate our craving to be meaningful. And Facebook is just the tip of the iceberg. Throughout the current online environment, consumers are pressured to “choose” corporate services that directly manipulate them or sell their data to manipulative companies.

Intense manipulation in the programmable world

Manipulation is thus the other big techno-social engineering issue that needs to be confronted. The power of traditional mass media – think advertisers and news organisations – to shape culture and public opinion is widely understood. But it seems like child’s play compared with what we’ve seen on the internet and in visions of the internet of things.

For good reason, there’s already plenty of anxiety about precise and customised forms of manipulation. Marketers want to harvest our big data trail to create behaviourally-targeted advertising that exploits cognitive biases and gets absorbed during moments when algorithms predict we’ll experience heightened vulnerability. Communication tools are being rolled out that perform deep data dives, create psychological profiles, and recommend exactly how we should communicate with one another to get what we want. Facebook has shown it’s ready and willing to non-transparently tweak our emotions – and co-opt us into their agenda – just so we find a product engaging. Given just how much nudging is occurring, it’s no surprise that folks are worried about the potential for elections to be determined by “digital gerrymandering”.

The internet of things is envisioned to be a “programmable world” where the scale, scope, and power of these tools are amplified as we become increasingly predictable: more data about us, more data about our neighbours, and thus more ways to shape our collective beliefs, preferences, attitudes and outlooks. Alan Turing wondered whether machines could be human-like, and recently that topic has been getting a lot of attention. But perhaps a more important question is a reverse Turing test: can humans become machine-like and pervasively programmable?

Evan Selinger is an associate professor of philosophy at Rochester Institute of Technology, where he is also head of research communications, community and ethics at the Media, Arts, Games, Interaction and Creativity (MAGIC) Center. Twitter: @evanselinger.

Brett Frischmann is a professor and co-director of the Intellectual Property and Information Law Program at Cardozo Law School. Twitter: @BrettFrischman.

They are both co-authors of Being Human in the 21st Century (Cambridge University Press, 2017), a book that critically examines why there’s deep disagreement about technology eroding our humanity and offers new theoretical tools for improving how we talk about and analyse dehumanisation.

Source: http://www.theguardian.com/technology/2015/aug/10/internet-of-things-predictable-people


 
