So I’m finally going to spend more time talking about a book that I feel like I’ve been referencing every-other-breath since I read it, and that’s Reclaiming Conversation by Sherry Turkle. The first point I need to make is that the book has a title that sounds like it should be the worst kind of “OK, boomer” fodder. It’s very much not. Yes, there’s an underlying current that we should be spending more time in face-to-face conversation—something I’ll say more about later—but mostly I’d argue this makes a very good modern companion to one of my other favorite books about technology & society, Illich’s Tools for Conviviality.

Where Illich’s book is a product of the 70s and takes a very high-level view of the effects of hierarchical, coercive systems, Reclaiming Conversation takes a more modern and personal view. As a psychologist by training, Turkle shares lots of interviews and stories-in-the-small about technology, specifically always-connected devices like phones and wearables.

Now I don’t think this book is perfect. I think there’s an even more radical book just squirming underneath the pages, trying to crawl its way out, that you really only get explicitly in the last fifty pages or so. Since this is more an essay than a review, let’s talk about that book instead.

I’d say that this is mostly a book about the ways smartphones have been used that are bad for us. Now, that’s a lot more verbose than saying “smartphones were a bad idea” or simply “smartphones are bad”. It’s also more accurate. Turkle, like me, is someone who clearly loves technology and thinks it can be used in ways that improve our lives. Smartphones and other modern always-connected technologies Are Not Bad, but they’re very clearly being used in ways that make us stressed and miserable.

There are a couple of big mechanisms behind this. The first is that always-connected devices are not just always vying for our attention but are, in fact, tools we’ve become so habituated to dividing our attention with that we don’t even notice how often we’re not engaged with the world around us. If you need proof, consider how uncomfortable people feel being without smartphones once they’re used to having them, the way we’re starting to make jokes about the horror of having to do basic things like use the bathroom without a smartphone with us. I think that speaks to a culture that has become far too accustomed to constant information flow from devices that didn’t even exist a little more than a decade ago. Yes, I know, this sounds like an “OK, boomer” moment, doesn’t it? But it’s not, not really. The complaint here isn’t “kids on their smartedy phoneses not able to Look Up and experience life”. It’s more “we’ve gotten so used to being interrupted by our devices, so expected to take in constant information streams, that we’re not able to concentrate or be present”.

In the former complaint, the argument is that people are being weak-willed, chasing after shininess and fun. In the latter, the argument is that our culture demands that people be always available to everything and everyone, but in very shallow ways. The word “distraction” tends to be thrown around a lot in discussions like these. I think it’s a bad one because “distraction”, to me at least, carries the connotation of personal failing: you are the one who is distracted, you the individual are failing to control your attention. I don’t think that’s what’s happening. Your attention is being divided. No, that’s still too passive a voice, isn’t it?

Other people and, more importantly, companies are dividing your attention. There’s a reason why notifications on phones and tablets are opt-out and not opt-in. Their fundamental goal isn’t to let you know about important things; they’re there to get you to use the device more. Why? Because the more you’re on a service, the more ads you see, the more data is collected, the more a profile about you can be built. Why do device manufacturers and operating system creators allow this kind of behavior? Because they also benefit, profit, from data gathering and constant use. Smartphones aren’t tools made for us to use, they’re tools made for us to need.

It’s tragic, really, because a small device that’s always connected to high-speed internet and loaded with sensors should be solely a good thing. It should be a thing properly integrated into our sense of body, customizable to our liking, gathering only the data we want and only for our own eyes.

But I’m getting ahead of myself.

The “conversation” in Reclaiming Conversation is face-to-face talking. Turkle emphasizes raw, unmediated, in-person conversation as an important part of how we connect with others, learn empathy, and engage with our humanity. I don’t entirely agree here. I think there’s a fundamentally ableist core to this thesis, given her emphasis on sight, on hearing, on eye contact, on vocalization. I don’t think Turkle means to imply that if you’re neuroatypical or disabled you’re less able to engage with the world and be fully human, but that is the logical conclusion. As a side note, I find this somewhat disappointing, as my first exposure to Turkle’s work was her writing on epistemological pluralism: the idea that there are many different ways people can understand the world and gain knowledge & that none of them should be privileged over any other.

Funnily enough, I think the argument of the book comes out stronger if we get to the core of what “conversation” really should mean here: focused, emotionally honest, deliberately vulnerable communication. Her negative examples in the book are about shallow communication, communication while attention is divided, and highly-edited communication. That last one is interesting, so I’ll explain it in a bit more detail. She offers examples from interviews of people who spent their time over email and text crafting their personality, even to intimate partners: workshopping every sentence and sentiment to send the right signals and vibe.

Now, I think I’m more sympathetic to this than Turkle is. I know what it’s like to have trouble with social cues, language, and expressing yourself. I’m in my 30s and I still not-infrequently have trouble knowing what someone meant and what kind of response they might be expecting. I do get the criticism she’s trying to draw out, though, which is that it’s easier to avoid difficult conversations and vulnerable moments over text than in an in-person conversation. Rather than insisting that we hold up face-to-face conversation as some kind of pinnacle of human connection, though, I think it’d be better to put the emphasis on not playing into the social expectation that we should always be available and have the perfect thing to say, an emphasis on allowing ourselves to be weak, imperfect, and finite.

At the same time, I think the heart of the problem isn’t that people don’t want deeper emotional contact but rather that the divided attention of the always-online discourages us from forms of communication that aren’t skimmable. You can see this a lot on social media like Twitter. While there are threads that go viral, often what gets 10k retweets or more is a pithy comment, an idea you can absorb in seconds. Even the threads are often incredibly skimmable, suited more to fast consumption than long thinking.

So I think a better way to recharacterize Turkle’s criticisms of non-face-to-face communication is that it’s easier for it to be emotionally skimmable, and her concern is that we’re leaning on that property as a default and that, by the nature of our tools, we’re being encouraged by companies and services to do so. Lest I sound conspiratorial in claiming that this is being deliberately furthered, I only need to present as evidence the existence of quick-replies: those automated messages that are suggested on every message in Gmail or, at least in recent Android versions, in most messaging apps you might use. The companies that developed these features literally want to write our own words for us in the name of convenience.

That wraps us back around to the other “how” of how smartphones and wearables have become bad for us: defaults, usage patterns, and narrative creation. Or, in my own parlance, dependency: devices and services that are made to be needed. In a broader picture, you could argue that capitalism is creating material conditions that leave us reliant on inferior digital services to make up for the things it has already taken away. I’ll come back to that point at the end.

An example of a service that wants to create dependency, one that’s fresh in my head, is Grammarly, a service that purports to help you write better. Previously, Grammarly ads seemed annoying but somewhat harmless: it’s a spell-check that’s also very opinionated about word choice, encouraging you not to use certain words it considers hackneyed or boring. What struck me about their most recent ad, though, was the claim that they could tell you the tone of your message, rating it in terms of things like “accusatory” or “optimistic”.

Absolutely asinine. There is literally no way that something contextless, without knowledge of the relationships between people and their established patterns of communication, could actually determine something like tone. Not in any meaningful or useful way. If I’m being harsh here, it’s because I think I need to be harsh. The fact that ideas like auto-generated replies or automated tone checking didn’t get laughed out of the room long before anyone implemented them is an indictment of how dysfunctional the tech sector’s relationship to its products and its users really is.

But why weren’t they laughed out of the room? Because the vision of dependence delights them: us coming to rely on these tools to perform basic interactions for us in shallow & efficient ways. I find it absolutely dystopic.

The Grammarly example fits well into Reclaiming Conversation in part because of a story Turkle relates from her interviews about users of the site 750words. If you’ve never used 750words, there’s a feature where it will tell you about the emotional content of your post. Back when I used the site, years ago, I thought of this feature as pretty silly. I’d write about what was happening in my life at the time, which included coming out as a trans woman, going to therapy, and starting grad school—all at the same time, yes—and it would tell me I was self-focused and angry, even when I wasn’t writing about things that made me angry and, as far as I can tell, I was only labeled self-focused because I tend to say things like “I feel” or “I tend to say things like…”.

I’m not the only person who was told they were self-focused, though, as Turkle relates. She tells the story of a woman who started using 750words and noticed that it repeatedly said she was being “narcissistic”. She didn’t think she was a narcissist, or even particularly self-focused, but she took the algorithm to heart: she began to change her writing until it stopped registering to 750words as self-centered. The woman considers this a victory, a kind of algorithmically directed therapy, but the problem here is that she’s judging herself by something that has no particular validity. The analysis by 750words is simplistic. She might as well be determining whether she’s doing a good job by consulting a mood ring at the end of each session. I’m trying to be careful here: I don’t want to say the woman in the story is wrong for thinking this was helpful, but I do think it’s dangerous for us to want to please algorithms that are trying to create context-less narratives for us.

750words labeled me as self-centered and obsessed with death because I was talking about coming out as trans and coming to terms with dark things about my childhood. Why? Because there was no viable way for it to have the context needed to determine the meaning of what I said, only superficial relationships between word frequency and mood. Now, you could try to argue that maybe in the future we’ll have a better way of judging mood from text, maybe building a recognition system off of a corpus that makes GPT-3 look puny. But, I’d argue, that’s a kind of computational Rube Goldberg machine. You’re building a complex system to solve a simple problem that a person can just handle. If I want to know how my writing comes across to another person, then I can simply ask another person.
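To make it concrete just how superficial word-frequency mood analysis is, here’s a minimal sketch of the general technique. To be clear, this is my own illustration, not 750words’s actual algorithm, and the lexicon is invented for the example:

```python
import re
from collections import Counter

# Toy lexicon mapping words to mood labels. Real systems use much
# larger lists, but the principle is the same: words, not meaning.
MOOD_LEXICON = {
    "death": "morbid", "dark": "morbid", "grief": "morbid",
    "angry": "angry", "hate": "angry",
    "i": "self-focused", "me": "self-focused", "myself": "self-focused",
}

def naive_mood(text: str) -> Counter:
    """Tally lexicon hits: pure word frequency, with no context at all."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(MOOD_LEXICON[w] for w in words if w in MOOD_LEXICON)

# A sentence about healing still scores as morbid and self-focused,
# because the counter only ever sees "i" and "death", never the meaning.
print(naive_mood("I finally made peace with my father's death"))
# Counter({'self-focused': 1, 'morbid': 1})
```

No amount of scaling up that lexicon changes the basic problem: the score never touches what the sentence is actually about.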

“But what if I don’t want another person to see it?” is an excellent question to ask here and actually leads us to the even darker part of the 750words story. What we have here is an example of someone changing their private journaling to please an external narrative creator. She is no longer writing for herself, to understand herself, but for something else. The algorithmic narration of her journaling has materially changed it into something that’s not journaling at all!

Throughout Reclaiming Conversation Turkle talks about various spheres of intimacy and communication: the solitary, the one, the few, and the public square. A connection she implies but doesn’t explicitly draw is that as more & more of our lives have been pushed online, we’re losing the distinction between these spaces. We don’t get to exist as easily in the world of solitude, or even in the world of intimacy with a single person; we’re constantly drawn into something between the few and the public square. In other words, we’re becoming constantly aware of ourselves as objects in a kind of public eye. I think the role of social media in that is getting to be well understood.

The problem with the 750words story is that it shows how allowing the creation of narration about our own private selves & data can strip us of solitude in a different way: we become objects of algorithmic analysis, always under the baleful eye of software written by tech companies that want to tell us the meaning of our own lives and experiences.

It might sound like I’m ranting against the quantified self movement. I’m not! I actually love quantified self, in principle at least. I think it’s actually a good thing to have more data about ourselves and our lives as long as we are still the ones to make the story out of it. What’s good is having a fitness tracker that can tell you things like your heart rate, your sleep patterns, how much you’re actually able to move in a day. What’s not good is something else aggregating this data and trying to tell you conclusions about yourself, like whether you should be trying to lose weight or exercise more.

Another example of wanting to create dependency that Turkle addresses is location tracking, namely parents wanting to know where their children are at all times. She interviews families who rely on this incredibly invasive technology because they literally don’t know how to have the conversations about setting boundaries and rules, about autonomy vs. safety. That’s not inference, to be clear: the explicit reason given was to avoid having to talk about these things. Something is very wrong about our relationship to technology if that even makes sense, handing over incredibly sensitive data to avoid difficult discussions.

But these are almost side-notes to the real problem I see of dependency creation by the tech sector: the ways that smartphones, the gig economy, and social media all tie into the crash of the global economy in ‘08 and the attempt of techno-libertarians to strip us of all our social structures and safety nets.

This is the point, dear reader, where I’m worried I’ll sound a little weird. I promise though that I’m not actually making any particularly outlandish claims, but rather recontextualizing the past ten-ish years.

Since the Great Recession of ’08–09, the tech sector has been attempting to undermine our social structure—although they call it “disruption”, “creating opportunities”, “making your own brand”, and many other absurd euphemisms. One of the purest examples has been the gig economy. The gig economy is literally just a way of undermining labor laws: pseudo-employees at your beck and call who are underpaid and exist in a constant state of job insecurity and exhaustion. But it doesn’t end there: things like Airbnb undermine efforts to secure housing by skirting hotel laws and artificially reducing the availability of apartments, and Amazon creates monopolies that are soon going to be—and arguably already are—far more powerful than anything the Walton family ever dreamed of. What’s more subtle, though, are the ways that companies like Apple and Google have gained a stranglehold on the substrate of our computationally saturated world. This is where I think the framework from the book Platform Capitalism by Nick Srnicek is really useful.

He basically argues that companies like Google, Facebook, &c. are largely products of an economic recession. They can’t charge for their services because, well, no one has any spare money to pay for things. Instead, they make the extraction and distribution of data their product. They can build profiles of consumers for targeted advertising, something more traditional companies will still excitedly pay for. So what you get is a new kind of parasitic model of capitalism: one in which to use the product is to be the real product. Now, that last bit isn’t a particularly new observation. The good, privacy- and autonomy-oriented parts of tech have been trying to scream this message at everyone for a couple of decades. Srnicek’s book, though, does an incredible job tying it into the larger problems of the economy. We’ve reached a point where too much has been taken from us for raising prices or offering new paid products alone to be as viable a strategy to vast riches. Wages are low and, thanks in part to the undermining done by other parts of the tech sector, are staying low.

How does all of this tie into Reclaiming Conversation? Largely because smartphones, wearables, and tablets are all part of expanding the reach of platforms. These wonderful devices, more powerful than any computer from fifteen years ago, packed full of sensors, with enough storage to hold lifetimes’ worth of media, are being used to track us, exploit us, and entice us to continue using them. The gig economy needs us to be always online, both to keep us at beck & call and to keep up the continual gathering of data: our habits, our location, our lives. The platform capitalism of the tech sector needs to be constantly feeding off of our data while trying to find new ways to collate and compile this flood of information into something vaguely sensible. That’s why there are so many “machine learning” startups whose missions seem utterly bizarre: they’re competing to make the tools the big platforms use to refine their crude oil into something salable.

In this I’m willing to go further than Turkle does, as I think she stops just short of saying it: the problem with how we’re using always-connected devices is less about the choices of individuals and more about the coercive nature of the current incarnation of capitalism, which needs us to provide an uninterrupted flow of new data to keep feeding the ever-less-efficient ad-revenue economy.

What does this mean? It means that in order to actually tackle the issues this book raises about how these devices have been integrated into our lives, we have to adopt a truly anti-capitalist framework, one that rejects the call of moremoremore in favor of giving people tools they can use to actually enhance their lives.

Am I saying that we aren’t responsible for our own usage and consumption on these oil-rigs of data? No, I think we are, but it’s akin to personal recycling and green living: it’s only a part, not the whole, of the story.

So while we need to disentangle capitalism from our tools, and we need to make and use devices that are able to be adapted to our lives, what would it even look like to use computational devices that aren’t coercive?

Here we turn one last time to Reclaiming Conversation. Turkle emphasizes near the end of the book that the problem isn’t the existence of smartphones or online services, but rather that we’ve been pressured into integrating them as defaults into our every moment. We feel the need to carry devices with us constantly, to check them in the middle of conversations, and to stay up-to-date on the information flow. I think in a world with non-coercive tools & services we wouldn’t feel this kind of pressure. We’d have the ability to make space for one thing at a time, to engage with the internet as a deliberate & thoughtful choice rather than as a default.

Socially, this means we wouldn’t have employers that demand we be constantly responsive. It means no services like Slack that enable a dysfunctional relationship to work at all hours of the day. It also means neither the gig economy nor working from your phone like a thousand drop-shipping/MLM get-rich-quick plans promise: “making your own hours” is almost always just code for “your time is never your own again”. It also means that we give each other space to respond slowly and that we don’t expect each other to be up-to-date on news, discourse, or social media posts.

On a technological front, it means that we need tools that can operate better without being always online. I don’t think social media is inherently bad, actually, and I think it could be massively improved if we engaged with it in a kind of “batch” mode rather than engaging each other at near the speed of instant messages. Can you imagine using something like Twitter where, by default, you see the last n hours of posts and can queue up responses that fire off all at once, the way old email systems worked?
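To sketch the idea out (this is purely hypothetical, a toy of my own, not a description of any real client), a batch-mode client might look something like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Post:
    author: str
    body: str
    when: datetime

@dataclass
class BatchClient:
    """Toy batch-mode social client: read a window of posts at once,
    queue replies locally, and send them all in one deliberate flush."""
    window_hours: int = 12
    outbox: list = field(default_factory=list)

    def digest(self, timeline: list) -> list:
        # Show only the last N hours of posts, oldest first, like
        # fetching a batch of mail instead of watching a live stream.
        cutoff = datetime.now() - timedelta(hours=self.window_hours)
        return sorted((p for p in timeline if p.when >= cutoff),
                      key=lambda p: p.when)

    def queue_reply(self, to: Post, body: str) -> None:
        # Replies sit in the outbox until you choose to flush them.
        self.outbox.append((to.author, body))

    def flush(self) -> list:
        # One deliberate send, like an old mail client's "send queued mail".
        sent, self.outbox = self.outbox, []
        return sent
```

The point isn’t the code itself but its shape: the network only matters at digest and flush, so there’s nothing to compulsively check in between.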

Honestly, I think it would also be healthier to use things like email more again: email, blog posts, or even going older-school with things like BBSes. We need slower, more deliberate, even more vulnerable forms of communication like these. We also need more small, private spaces online that we can use to communicate with groups smaller than “everyone on the internet”. Here’s where I think pubnixes and micropubnixes like rawtext.club, the server I’m on, really shine. On rawtext.club there are only a few dozen people total with accounts. Fewer still log in regularly. What this means is that it’s a small group of people who end up, well, feeling like people you’re getting to know. It’s more like joining a student club in college than it is like being on a social media site. It’s a feeling I’ve missed, honestly, since most forums started to die off: having a group of people who you wouldn’t say you’re close to, but who you talk to enough that you’d be sad to never hear from them again, a different sphere of intimacy than being on the public internet.

Finally, we need our technology to enable our solitude. We should be able to control the data collection of our devices so that we don’t feel algorithmically seen like the woman who changed her journaling because of the 750words mood analysis. It should be easier to operate our devices without notifications. I disabled notifications on my phone, actually, with a couple of exceptions. Do you know what happens? All the software starts begging you to turn notifications back on, warning you of dire problems that will happen if you don’t, explicitly trying to stir fears of “missing out”. That wouldn’t happen in non-coercive systems.

Again, I don’t hate technology. I love it. But I want it to be made for our needs and not another tool of capitalist exploitation. We can have so much more.