Big Problems in Big Data Critiques

Did you click on the right links today? If you are a citizen of China, this might become relevant for your “citizen score”, which, if high enough, enables you to enjoy certain benefits and avoid certain restrictions. The “Digital Manifesto”, written by a number of well-respected researchers, condemns citizen scores as much as you probably do. But in their critique of Big Data and Artificial Intelligence, the authors of the manifesto go a bit too far. Here is why.

In 2015, a number of well-known scholars teamed up to produce a text entitled “Behavioral Control or Digital Democracy? – A Digital Manifesto”. The text was first published in the German popular science magazine “Spektrum der Wissenschaft” and was recently re-published, along with a number of explications and comments, in an edited book (Könneker 2017). It culminates (in the German version) in a list of ten recommendations for the digital era.
The general gist of the manifesto is to warn against a digital future that uses Artificial Intelligence (AI) and Big Data to control humans in a top-down fashion. The authors already see signs of this possible future, e.g. in what they call Big Nudging. Here, citizens are brought to align themselves with an allegedly superior goal, set by a government, through the use of Big Data. This is exemplified by the idea of a “citizen score”, currently developed and tested in China. Chinese citizens will earn reward points for good behavior (measured, among other things, by good online behavior), for the right friends, and for other politically opportune attitudes and behaviors. The higher the number of points, the better life will go for the individual.
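To make the mechanism concrete, here is a minimal, purely hypothetical sketch of how such a score might aggregate behavioral signals into one number that gates access to benefits. The signal names, weights, and threshold are all invented for illustration; the real system’s inputs are not public.

```python
# Purely hypothetical sketch of a "citizen score": a weighted sum of
# behavioral signals gating access to benefits. Signal names, weights,
# and the threshold are invented for illustration only.

WEIGHTS = {
    "online_behavior": 0.4,       # e.g. clicking the "right" links
    "social_graph": 0.3,          # having the "right" friends
    "political_conformity": 0.3,  # other politically opportune behavior
}

def citizen_score(signals):
    """Weighted sum of signals, each normalized to [0, 1]."""
    return sum(w * signals.get(name, 0.0) for name, w in WEIGHTS.items())

def benefits_unlocked(score, threshold=0.7):
    """Above the (invented) threshold, benefits apply; below it, restrictions."""
    return score >= threshold
```

On this sketch, a citizen with perfect online behavior, middling friends, and high conformity (`citizen_score({"online_behavior": 1.0, "social_graph": 0.5, "political_conformity": 0.8})`, roughly 0.79) would just clear the threshold.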
On this basis, the manifesto’s recommendations are aimed at taking steps against this kind of paternalism and social control. They demand, for instance, that we decentralize information systems, enhance informational self-determination and participation, increase transparency as well as the control abilities of data subjects (i.e. the persons the data are about), and improve collective intelligence by supporting diversity and plurality. The kind of collective intelligence envisaged here is, of course, human collective intelligence.

What is the manifesto’s object?

The digital manifesto is a welcome contribution to the ongoing debate about humanity’s future and the role “the digital” plays in it. However, as the quotation marks indicate, the problems begin with the lack of a precise subject matter. Although the authors repeatedly mention big data and big nudging, AI and superintelligence, as well as other forms of digital innovation, the text’s main appeal appears to stem from a picture of our digital future painted in rather broad strokes.
To give an example: towards the end of the text we read that relying on a single superintelligence (perhaps the “singleton” Nick Bostrom describes) in all or most of our activities means that computers rule the world. The manifesto holds, however, that a superintelligence can never be as intelligent as the decentralized collective intelligence of humanity. Therefore, the authors conclude, it would be a mistake to submit oneself to a superintelligence. In explaining their position, however, they abruptly switch topics. They continue by condemning algorithmic decision-making and the trend towards the individualization of information as leading to echo chambers and filter bubbles and diminishing collective intelligence. The reason for not subjecting oneself to a superintelligence, or so we read, lies in the problematic consequences of individualizing algorithms.
Superintelligence and individualizing algorithms, however, are two different things. There is certainly some truth to the manifesto’s judgments, but the point here is a different one: being governed by a superintelligence is definitely not the same as being governed by big data algorithms. The failure to distinguish these issues precisely enough is one big problem of the manifesto.

What does our digital future look like?

The vision of humanity’s digital future the authors present is also not well differentiated, it seems – another big problem. Let us take issue again with the notion of a superintelligence. Bostrom, in his 2014 book “Superintelligence”, discusses what could take place after machine intelligence has reached the level of human intelligence. It is very likely, he says, that an intelligence explosion will occur at this point. The resulting superintelligence will probably devour humanity’s collective knowledge, found on the internet, before it makes decisions. It will thus increase its own intelligence, possibly also by reproducing itself. If and when there is more than one superintelligence, their rule need not be centralized or top-down, as imagined by the manifesto. Rather, superintelligences will adopt the best organizational form they can imagine.
From this we might derive the following scenarios: should a decentralized organization of knowledge be superior to other forms, the superintelligence will also make use of it. Perhaps it will even include human collective intelligence. Moreover, since morality has emerged in intelligent human collectives, it is not unlikely that an artificial superintelligence will also develop morality. Its norms would most likely be better than our current ones, as would the political organization of superintelligences. All of this provides a different picture from the (more or less messy) one sketched by the authors of the digital manifesto.
We could certainly debate this, since there are arguments to the effect that humanity will in fact be extinguished by a superintelligence. What we should be careful about, in any case, is not to conflate issues of domination and autonomy-reducing algorithmic governance with questions of superintelligence behavior.

Where is the problem?

Connected to this is another point of criticism, one that is also related to issues of centralization. The manifesto repeatedly argues against centralized forms of governance. To do so, the authors mention the case of China’s citizen score and other forms of government surveillance (“Big Brother”). What they do not see is that in all of these cases it is a government that produces the autonomy-reducing and dominating governance effect. There are, of course, forms of corporate, not governmental, domination. But in the manifesto this is simply claimed, not argued for or even explained. The fact that in the real-life examples of domination the common theme is that governments are the perpetrators is not elaborated.
In the manifesto we do not read much criticism of governments or of the centralization and monopolization of power in the hands of governments. Would it not be time to take a closer look at the different forms of domination exercised by governments and by corporations? Instead, it seems that the authors use the appeal of criticizing neoliberalism and capitalism to make their case against algorithmic governance. For instance, they complain that pervasive computing manipulates people into “generating free content for Internet platforms, from which corporations earn billions”. This claim certainly needs some backing up; otherwise it is just another example of the manifesto’s lack of differentiation.

Who is the problem?

That there is a difference between market-driven (i.e. corporate) technology governance and political technology governance, in terms of their normative relevance for the digital future, can be supported by at least three arguments. First of all, there is a great variety of algorithms out there, with differing purposes and working mechanisms. As Pedro Domingos lays out in his 2015 book “The Master Algorithm”, Amazon’s recommendation algorithm steers people towards bestsellers and blockbusters because these are what earn Amazon, as a logistics company, big bucks. Netflix, by contrast, leads people to less well-known films because it cannot afford to feature the expensive blockbusters in its program. This shows two things: first, not all algorithms are equal, so not all threats from algorithms are equal either. And second, the market obviously provides great scope for specialization, differentiation and pluralization. Compare the purpose of government-involved algorithms: they exist more or less only to spy on people (and make them obedient citizens, or at least non-deviant ones).
Second, it seems that politics is not a good way to make good decisions. As a large variety of studies in economics and law have shown, particularly in the Public Choice tradition, the way political decision-making is organized leads to systematic distortion and bias away from the public interest. The main drivers of this are the monopolization of power in the hands of a few (i.e. the government) and the structure of interests: well-connected groups can easily team up and influence politicians’ law-making and regulatory efforts to their advantage. Private interests dominate the political process, not the public interest. There is evidence that in data protection and information and communication technology (ICT) we can find many of the problematic forms of regulatory capture that distort political decision-making in other industries as well. Moreover, the political process itself does not provide opportunities for control as good as those the market provides. Democratic elections, as much as we value them, are not necessarily a good way to control governments. Casting a vote along with millions of others, once every four or five years, does not suffice and is not immediate enough to genuinely exert control over governments and their agencies. There is no way to switch to a different provider of government services; in this sense, we can hardly deny that governments hold a monopoly.
Third and finally, the manifesto itself proposes a political way to educate people and capitalize on diversity. By introducing online deliberation forums, the authors want to enhance participation, education, diversity and solution-finding. This, in turn, serves to enhance humans’ collective intelligence. However, the manifesto also states that exactly such online forums have brought the existing filter bubbles and echo chambers into existence – a development the authors explicitly condemn, since it leads to the rule of computers and less autonomy. It is not without irony that the authors propose to tackle the digital future’s problems by using a digital tool that is thought to cause those very problems.
In any case, what the authors propose is some form of re-politicization whereby collective decisions are put into the hands of the people as a collective, with the aid of online tools to enable and increase participation. This, however, does not erase the problems attached to politics as a form of collective decision-making, and history has shown that such models have led to our current situation, in which political polarization due to echo chambers and filter bubbles is endemic. This gives us further reason to look elsewhere for solutions to collective governance problems.

Using AI humanistically

One way could be to use AI to find better laws or even better forms of political organization. Why not let AI study laws and actual law-making to see where the best legal incentives for tackling climate change lie? Or let it study which decision-making procedure (democracy, epistocracy, panarchy, self-governance) is best at empowering people to exert real control over collectively relevant decision-making bodies? This would be a positive use of AI in the political context, perhaps even a form of humanistic AI. But since, for the manifesto’s authors, bringing technology and politics together appears to be wrong, this path is not viable for them.
What should have become clear is that politics is different from markets and that the agents in these fields exhibit different normative features. That should give rise to a more nuanced picture and judgment of digital futures and of the main drivers and agents in them.

What about autonomy?

Finally, a big problem with the manifesto lies in its notion of autonomy. On several occasions the authors complain that algorithmic decision-making reduces people’s autonomy because the decisions made by individuals are no longer truly theirs. They claim that recommendation algorithms present choice options that lead to decisions that are no longer those of the decider, even though it feels as if they were. If an algorithm knows more about me than I do (provided we can say that an algorithm “knows” anything in the first place) and presents me with options for what I could do based on this picture of me, the resulting action cannot be considered my action, the manifesto argues. Similarly, the practice of individualized prices faces ethical and legal challenges, since it is thought to be incompatible with anti-discrimination norms.
However, the problem with such algorithms is that they blur the line between autonomy and heteronomy, between self-determination and other-determination. So either the traditional distinction no longer holds, or it must be worked out in a different way. This is because what algorithms find out about me appears to be some form of super-me. Even if I did not know that I have this or that preference, once I act upon this algorithmically discovered preference, and it feels like my decision, it is hard to deny that it has something to do with me, perhaps even in a very deep sense. The same holds for individualized prices. We might indeed find it problematic if a seller offers me a good at the highest price the algorithm says I am willing to pay. But is this price not very obviously connected to my identity if I am willing to pay it? It certainly is, since it is obviously more closely related to my personality, history, and psychology than I could have imagined. Therefore, it is very much an autonomous act to act on these preferences or to accept individualized prices.
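The pricing practice at issue can be made concrete with a small sketch. Everything here is an assumption for illustration: the base price, the profile features, and the linear willingness-to-pay estimate are invented, not taken from any real system.

```python
# Toy sketch of individualized pricing: charge each buyer the highest
# price an algorithm estimates they are willing to pay. The features
# and the linear markup model are invented for illustration.

BASE_PRICE = 10.0

def estimated_willingness_to_pay(profile):
    """Hypothetical linear estimate from two invented profile features."""
    markup = (0.05 * profile.get("past_purchases", 0)
              + 0.5 * profile.get("income_proxy", 0.0))
    return BASE_PRICE * (1 + markup)

def personalized_price(profile):
    """Offer the estimated willingness to pay, but never below the base price."""
    return max(BASE_PRICE, round(estimated_willingness_to_pay(profile), 2))
```

On these invented numbers, a profile with `past_purchases=4` and `income_proxy=0.6` would be quoted 15.0 instead of the base 10.0: the “super-me” is, in effect, just such an estimated preference profile.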
I do not deny that some might find algorithmically based decision-making problematic, as the authors appear to do. But if so, they need a better argument than merely invoking autonomy. If we want to criticize individualized pricing methods, we had better find improved arguments; merely referring to autonomy does not suffice. And if no such argument can be made, we should let consumers decide what works best for them. A precautionary approach might just cause too much damage in terms of innovation and growth.
The authors repeatedly warn against manipulative technologies that reduce freedom and autonomy. But either (1) they have a set of technologies in mind that is too heterogeneous to compare normatively (see above), or (2) they use ill-fitting concepts and distinctions to make their case (autonomy, heteronomy).
Where, then, do we locate autonomy? And how do we protect it? A good start is to look at the structure of decision-making concerning the use and regulation of these technologies, not merely at the situation of human interaction with algorithms itself. In this respect, the manifesto lists several helpful features, encompassing transparency, participation, and education. If people can know the algorithms and their working mechanisms, and if they can choose between them as well as choose which data to give to which algorithm, autonomy will be enhanced. But more needs to be done to work out the details and, for instance, to show that what is currently encoded in our laws represents a philosophically and scientifically viable way to shape our digital future.

Non-domination, not autonomy

I would like to end this comment by suggesting that we should not focus so much on autonomy, since this concept does not really fit algorithmic decision-making. Rather, we should use (non-)domination as the guiding concept. Domination is a feature of power relations in which the freedom of person A depends on the goodwill of person B. In other words, relations are such that whether one is free is arbitrary and depends on something beyond one’s reach. To shift from autonomy to domination is also to shift the perspective from individual encounters with algorithms to political constellations concerning the level of control societal actors exert on each other (e.g. consumers on businesses, citizens on governments). Together with a more detailed differentiation between the ways big data is used and, most importantly, by whom, using domination rather than autonomy as the leading normative concept promises to improve our digital future. We will then prevent citizen scores from becoming a reality in our lives, while at the same time avoiding the condemnation of those whom we can control simply by purchasing the right things.
