In his recent book on what artificial intelligence could mean for a culture permeated with the spirit of self-improvement (an $11 billion industry in the US alone), Mark Coeckelbergh points to a sort of ghostly double that accompanies each of us now: the quantified self, an invisible and constantly growing digital duplicate, made up of all the traces left whenever we read, write, view or purchase anything online, or carry around a device, such as a phone, that can be tracked.
These are “our” data. Then again, they aren’t: we don’t own or control them, and we have scarcely any say in where they go. Companies buy and sell them and mine them to determine patterns in our choices, and connections between our data and other people’s. Algorithms target us with recommendations; whether or not we click through, or watch video clips they’ve predicted will draw our attention, feedback is generated, sharpening the cumulative quantitative profile.
The potential to market self-improvement products calibrated to your specific insecurities is obvious. (Just think how much home fitness equipment now gathering dust was once sold using the blunt instrument of the infomercial.) Coeckelbergh, a professor of philosophy of media and technology at the University of Vienna, worries that the effect of AI-driven self-improvement can only be to reinforce already vigorous tendencies toward self-centeredness. The individual personality, driven by its own cybernetically reinforced anxieties, would atrophy into “a thing, an idea, an essence that is isolated from others and the rest of the world and that no longer changes,” he writes in Self-Improvement. The elements of a healthier ethos are found in philosophical and cultural traditions emphasizing that the self “can exist and improve only in relation to others and the wider environment.” The alternative to digging ever deeper into digitally enhanced ruts would be “a better, harmonious integration into the social whole through fulfilling social obligations and developing virtues such as compassion and trustworthiness.”
A tall order, that. It implies not just debate over values but public decision making about priorities and policies—decision making that is, ultimately, political, as Coeckelbergh takes up in his other new book, The Political Philosophy of AI (Polity). Some of the basic questions are as familiar as recent headlines. “Should social media be more heavily regulated or regulate themselves, in order to create a better-quality public discussion and political participation”—using AI capabilities to detect misleading or hateful messages and delete them, or at least reduce their visibility? Any discussion of the matter is bound to revisit well-established arguments over whether free speech is an absolute right or one constrained by limits that must be clarified. (Is a death threat to be protected as free speech? If not, is a call for genocide?) New and emerging technologies compel a return to any number of classic questions in the history of political thought “from Plato to NATO,” as the saying goes.
In that regard, The Political Philosophy of AI doubles as an introduction to traditional debates, in a contemporary key. But Coeckelbergh also pursues what he calls “a non-instrumental understanding of technology,” for which technology is “not just a means to reach an end, but also shapes those ends.” Tools able to identify and shut down the spread of falsehoods might also be used to “nudge” attention toward accurate information—reinforced, perhaps, by artificial intelligence systems able to assess whether a given source is using sound statistics and interpreting them in a plausible way. Such a development would probably end certain political careers before they got started, but more worrying is that such technology might, as the author puts it, “be used to push a rationalist or technosolutionist understanding of politics, which ignores the inherently agonistic [that is, conflictual] dimension of politics and risks excluding other points of view.”
Whether or not lying is inherent in political life, there is something to be said for the benefits of its public exposure in the course of debate. By steering the debate, AI runs the risk of “making the ideal of democracy as deliberation more difficult to realize … threatening public accountability, and increasing power concentration.” Such is the dystopian potential. The absolute worst-case scenarios involve AI turning out to be a new life form, the next step in evolution, and growing so powerful that managing human affairs will be the least of its concerns.
Coeckelbergh gives the occasional nod to that sort of transhumanist extrapolation, but his real focus is on showing that a few thousand years’ worth of philosophical thought will not automatically become obsolete through feats of digital engineering.
“The politics of AI,” he writes, “reaches deep down into what you and I do with technology at home, in the workplace, with friends, and so on, which in turn shapes that politics.” Or it can, anyway, provided we direct some reasonable portion of our attention to questioning what we have made of that technology, and vice versa.