Notes on an Algorithmic Faculty of the Imagination

Authors: C. Celis & M. Schultz

Abstract

Following Bernard Stiegler’s perspective on an ‘originary technicity’, this article explores the relationship between imagination and politics in light of recent developments in neural network technologies (also known as machine learning algorithms). It examines how this new technology is reshaping the political role and place of human imagination. Furthermore, it uses Vilém Flusser’s terminology to examine to what extent this technology can be understood as a new ‘technical faculty of the imagination’. The first part will argue, following Stiegler and Flusser, for an approach to the notion of imagination that challenges the human-technology opposition. The second part will introduce the topic of neural network technologies using the specific example of algorithmic image recognition systems and then, through the prism of the Kant-Hume debate on the foundations of universal knowledge, it will set out three possible perspectives on the question of an algorithmic imagination. The third and final section will return to Flusser to see how the relation between imagination and politics is shifting from a modern and human-centred perspective to a post-historical and post-anthropocentric one.

Keywords: machine learning, Immanuel Kant, David Hume, technology, post-history, imagination

How to Cite: Celis, C. & Schultz, M. (2021) “Notes on an Algorithmic Faculty of the Imagination”, Anthropocenes – Human, Inhuman, Posthuman. 2(1). doi: https://doi.org/10.16997/ahip.1016

Introduction

In Capitalist Realism, Mark Fisher builds on the idea, usually attributed to Slavoj Žižek or Fredric Jameson, ‘that it is easier to imagine the end of the world than it is to imagine the end of capitalism’ (2009: 2). For Fisher, capitalism has become so naturalised that a non-capitalist way of life becomes ‘unimaginable’. The term ‘capitalist realism’ refers to the ‘widespread sense that not only is capitalism the only viable political and economic system, but also that it is now impossible even to imagine a coherent alternative to it’ (Fisher 2009: 2). Once, cinema and literature were exercises of the imagination aimed at describing such coherent alternatives (Jameson 2005). Today, these media have been reduced to the endless repetition of social and ecological catastrophe as if it were the inevitable fate of human nature (Fisher 2009: 2).

In a similar argument, Berardi (2011) explores how the idea of ‘no future’ has become commonplace in contemporary culture. During the 20th century, he claims, we went from the ‘enthusiastic expectations and proclamations of the Futurists’, to the ‘no future’ of punk culture, and finally to the ‘there is no alternative’ of Thatcherism and Reaganomics (Berardi 2011). This process belongs to a long history of Western civilisation, from the opening up of the ‘New World’ to ‘Spanish colonisation up to the Hollywood colonisation of the planetary mind’, in which our imagination has become the main gate through which capitalism has penetrated the ‘collective unconscious’ (Berardi 2014: 98). Nowadays, the radical proliferation of images and information is exposing the imagination to a process of ‘vertiginous acceleration’ (Berardi 2014: 34). In this context, human imagination is struggling more and more to associate significant pieces of data in order to imagine a coherent alternative future (Berardi 2014: 195).1 Both Fisher and Berardi present radical responses to a scenario that had once been defined with enthusiasm by Fukuyama (1989). What Fukuyama addressed in terms of the ‘end of history’ (and hence as the end of conflict between different ideologies), Fisher and Berardi interpret as the triumph of a hegemonic social order that has undermined every possibility of imagining otherwise.

Around the same time that Fisher and Berardi were diagnosing this profound crisis of imagination, Bernard Stiegler published a short essay titled For a New Critique of Political Economy (2010). In it, Stiegler called for a renewal of the critique of contemporary societies that could satisfactorily integrate the question of technology. As he put it, ‘I would like to demonstrate here that the question of tertiary retention opens up a new perspective on political economy and its critique, and, now more than ever, that it makes a new critique of political economy the essential task of philosophy’ (2010: 8). The question of ‘tertiary retention’ is the question of how technical (external) objects define and shape the internal faculties of human subjectivity (perception, memory, imagination, desire, etc.). In doing so, technics play a fundamental role in shaping and reshaping the conditions of possibility of intersubjective (‘transindividual’) relations. Stiegler calls this ‘originary technicity’, an approach that conceives technics as actively and constantly redefining the link between imagination and politics. From the perspective of this originary technicity, the current impossibility of imagining a post-capitalist future, diagnosed both by Fisher and Berardi, could not be properly addressed without an analysis of the technical objects (tertiary memory) that shape our present.

Following Stiegler, this article explores the relationship between imagination and politics in light of recent developments in neural network technologies (also known as machine learning algorithms). It examines how this new technology is reshaping the political role and place of human imagination. Furthermore, it uses Flusser’s (2002) terminology in order to examine to what extent this technology can be understood as a new ‘technical faculty of the imagination’. The first part will argue, following Stiegler and Flusser, for an approach to the notion of imagination that challenges the human-technology opposition. The second part will introduce the topic of neural network technologies using the specific example of algorithmic image recognition systems and then, through the prism of the Kant-Hume debate, it will set out three possible perspectives on the question of an algorithmic imagination. The third and final section will return to Flusser to see how the relation between imagination and politics is shifting from a modern and human-centred perspective to a post-historical and post-anthropocentric one.

Imagination and Technology

Writing in the 1940s, Adorno and Horkheimer claimed that the ‘industrialisation of culture’ was ‘infecting everything with sameness’ (2002: 94). The authors refer to Kant’s concept of schematism to explain the catastrophic consequences brought forward by this industrialisation. In Kant’s (1998) philosophical system, schematism is defined as a specific function of the faculty of the imagination that allows subjects to bridge the manifold given to the senses with the unity of a concept of the understanding. Schematism hence requires an active contribution from the subject’s imagination (Kant 1998: 271). In the culture industry, however, this active contribution is ‘denied to the subject’ (Adorno and Horkheimer 2002: 98). The industrialisation of culture implies that the consumer is no longer required to use his or her imagination in order to subsume sensible data under a concept of the understanding. ‘For the consumer’, Adorno and Horkheimer state, ‘there is nothing left to classify since the classification has been pre-empted by the schematism of production’ (2002: 98). If we walk into a supermarket and buy a can of fruit, for example, we are not required to choose an individual fruit that fits a general rule or schema (i.e., check for its ripeness, look for flaws, etc.). The promise of industrialisation is that all cans of fruit are, and will remain, the same. The faculty of imagination, responsible for the active task of individualising the right fruit, is no longer necessary since this operation has already been carried out on the production line. In a world of sameness, imagination becomes ‘atrophied’. For Adorno and Horkheimer this phenomenon is not restricted to the consumption of basic commodities but is also taking place within the sphere of culture itself. By building on ‘ready-made clichés’, the culture industry is putting an end to the unexpected and, hence, denying its audience ‘any dimension in which they might roam freely in imagination’ (Adorno and Horkheimer 2002: 100). The atrophy of imagination, these authors suggest, ‘need not be traced back to psychological mechanisms’ but to the ‘objective makeup’ of cultural products themselves (Adorno and Horkheimer 2002: 100). From their perspective, culture should have a critical role, challenging the ideological premises that guide social reproduction. In their modernist vision, this is achieved not through the representation of specific political or ideological contents, but through the artwork’s formal potential to construct an internal logic different to the one ‘guiding social reproduction’ (Adorno and Horkheimer 2002: 95). With the industrialisation of culture, however, the artwork adopts the same logic as the production line: that of instrumental reason. Hence, the world of culture is not only unable to offer an alternative logic from which to imagine new forms of social reproduction but also becomes an active agent on behalf of that reproduction.

Following Adorno and Horkheimer’s (2002) analysis, the current impossibility of imagining a world beyond capitalism identified by both Fisher (2009) and Berardi (2011, 2014) could be read as the logical consequence of the ‘atrophy of the faculty of imagination’ set in motion by the ‘industrialisation of culture’. Nonetheless, as Stiegler (2011a: 40) warns us, Adorno and Horkheimer’s explanation has one key oversight: by opposing an innate form of schematism to an industrial one, these authors present imagination as a natural and ahistorical faculty which functions as a measuring rod against which they evaluate the dehumanising process put forth by industrial capitalism. In other words, Adorno and Horkheimer reproduce a humanist and anthropocentric opposition between the purity and spontaneity of human imagination and the mechanistic nature of technology and industry. Human imagination is thus referred to as ‘a secret mechanism within the psyche’ that has now been replaced by the inhuman logic of industry (Adorno and Horkheimer 2002: 98). By presenting the issue in these terms, Adorno and Horkheimer reproduce an anthropocentric conception of imagination as that which ensures the singularity of humans as opposed to both machines and animals.2

From this modern, humanist, and anthropocentric perspective, the relation between imagination and politics can be said to respond to at least these two premises: first, imagination is a transcendental faculty that constitutes the common ground (the sensus communis) which unifies the community of humans as a universal species, separating them from both animals and machines.3 Second, imagination is that a priori principle that allows the new to be created out of the given, hence making progress and social transformation possible. The ‘atrophy of imagination’ brought about by the culture industry and denounced by Adorno and Horkheimer could hence be seen as the decline of these two modern and humanist premises: the weakening of the common ground that ensures a universal community of human beings, and the impossibility of imagining other possible futures for those beings beyond the sameness of the present.

As noted above, however, the problem with this line of thought is that it fails to address how human imagination is intertwined with the historical, social and technical dimensions. As a response, Stiegler (2010, 2011a, 2011b) has addressed the issue of the atrophy of imagination and the impossibility of imagining a future beyond capitalism without falling back upon an ahistorical and transcendental notion of imagination. Instead of opposing imagination to technology, Stiegler develops a critique of contemporary capitalism from the perspective of how recent technological and social transformations have remodelled our faculty of imagination and are hence creating a short circuit between the acceleration of the flows of information on the one hand, and the limits of human subjectivity on the other. As mentioned above, Stiegler defines a theory of ‘originary technicity’ according to which human imagination and technical objects form a hybrid and intertwined notion of subjectivity (Bradley 2011: 102). From this perspective, it would still be correct to say that imagination is that unique faculty that defines us as human beings. That faculty, however, is not an ahistorical trait hidden in the depths of our psyche, but the result of our technical (hybrid) exchange with the world. This not only highlights the historical, social and technical nature of the faculty of imagination but also blurs the limit between interiority and exteriority, rendering human beings as the outcome of a hybrid intertwining with technics that precedes and exceeds that limit. Stiegler’s theory of originary technicity hence conceives humans not as pure natural beings opposed to technology, but rather as hybrid entities composed of biological, social and technical components.4

This puts into question the two modern premises regarding human imagination mentioned above. If the imagination is exposed to technical, historical and social mutations, then it can no longer function as the common ground for the universal community of human beings (if the imagination changes historically, then it cannot be posed as the sensus communis that holds the human community together; or at least this community cannot be posed as universal, but rather as situated). Moreover, if the imagination is the result of a constant exchange with technics, then the possibility of imagining an alternative future will always depend on the interplay between the available technical surfaces of information and the available capacity to process this information. By challenging these two premises, Stiegler offers an alternative explanation of the current crisis of imagination beyond the framework of Adorno and Horkheimer. The problem of contemporary capitalism, Stiegler (2010: 107) claimed, is not that it replaces human (pure and natural) schematism with an industrial (technical and impure) one, but rather that it is creating a process of acceleration that is ‘intrinsically self-destructive’. The current acceleration of capitalist production is causing an ‘annihilation of time’, that is, an acceleration of the flows of information beyond the limits of individual subjectivity. This has two consequences. First, by accelerating time and destroying the temporal experience of individual subjects, capitalism is undermining the time necessary for ‘human desire’ (i.e., ‘the gap between the drive and its satisfaction’), which is the engine that drives capitalist consumption (Stiegler 2011b). Second, the acceleration of information is destroying individual imagination (i.e., the gap between different elements that makes association possible in order to anticipate the future and invent the new). Thus, the current impossibility of imagining the future would be the result of the ‘annihilation of time’ and the ‘withering of desire’ put forth by capitalism’s intrinsic need for constant acceleration (Stiegler 2010, 2011b).

Another way of understanding the relation between imagination and technics is found in the work of Flusser (2002, 2011). Motivated by the new image-production technologies of the 19th and 20th centuries, Flusser (2002: 114) argued for the differentiation between two ages of the imagination: an age in which image production was entirely dependent on human agency, and an age of technical images in which image production depends more and more on the workings of an apparatus. In the first case, imagination appears as a unique faculty of human beings: the ‘ability to step back from the objective world into one’s own subjectivity’ (2002: 111). In the second case, apparatuses replace human imagination in the production of images (Flusser 2000: 14). In both cases imagination appears as having the structure of a ‘black box’, that is, a closed system that conceals its operation from the ‘user’. Hence, just as the critique of aesthetic judgement in the age of human-produced images required a critique of the faculty of imagination, the critique of technical images requires a critique of the technical imagination which must begin by elucidating the inner workings of the black box (Flusser 2000: 16). Flusser defines the apparatus as a black box that carries out the tasks programmed in it. The privileged position of the apparatus resides in the fact that it can carry out these operations faster and with far fewer mistakes than human beings (Flusser 2000: 32). For this reason, humans are becoming ‘less and less competent’ to deal with these complex programmes and are hence having ‘to rely more and more on apparatuses’ (Flusser 2000: 32). Put differently, humans are becoming less dependent on human imagination and more dependent on technical imagination. For Flusser, however, the technophobic responses to the current crisis of political imagination are the result of assessing the new technical imagination using normative categories inherited from a previous (humanist) framework.5 In a world governed by apparatuses, programmes, and technical images, human imagination is no longer sufficient for offering a suitable idea of the future. Hence, instead of continuing to oppose humans and machines, we need to consider the idea that the future can no longer be imagined in anthropocentric terms but needs to be ‘projected’ by ‘operators’ using the potentialities of the new technical imagination (Flusser 2002: 115).

Following Flusser’s reflections, the next section will explore how the recent development of machine learning algorithms (sometimes also referred to as neural networks) can be understood as the emergence of a new technical faculty of the imagination.6 This requires redefining the limit between humans and machines (overcoming the anthropocentric opposition between imagination and technology), as well as re-evaluating the relation between imagination and politics. Special attention will be given to neural networks that have been trained for object recognition, also known as computer vision systems.7 As will be argued, these systems seem to challenge the anthropocentric notion of imagination outlined above, posing new questions for our definition of imagination as a strictly human faculty.

Towards an Algorithmic Imagination

In their 2019 report Excavating AI, Kate Crawford and Trevor Paglen develop an extensive analysis of the political dimension of computer vision, paying particular attention to the training process behind this technology. Crawford and Paglen (2019) write:

to build a computer vision system that can, for example, recognise the difference between pictures of apples and oranges, a developer has to collect, label, and train a neural network on thousands of labelled images of apples and oranges. On the software side, the algorithms conduct a statistical survey of the images, and develop a model to recognise differences between the two ‘classes’. If all goes according to plan, the trained model will be able to distinguish the difference between images of apples and oranges that it has never encountered before. Training sets, then, are the foundation on which contemporary machine-learning systems are built. They are central to how AI systems recognise and interpret the world.
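
To make this pipeline concrete, a minimal sketch is given below. It is our illustration rather than Crawford and Paglen’s code: it uses the PyTorch library, and randomly generated tensors stand in for the thousands of labelled photographs a real training set would contain, but it reproduces the train-then-generalise structure they describe.

```python
# A minimal sketch of the pipeline Crawford and Paglen describe, in
# PyTorch. Random tensors stand in for labelled photographs of apples
# (class 0) and oranges (class 1); a real system would load thousands
# of annotated images instead.
import torch
import torch.nn as nn

images = torch.randn(200, 3, 32, 32)   # toy 'dataset' of 200 fake RGB images
labels = torch.randint(0, 2, (200,))   # binary labels: apple or orange

# A small convolutional classifier (an illustrative architecture).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 2),  # two output 'classes'
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training: the 'statistical survey' of the labelled images.
for epoch in range(5):
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()

# Inference: the trained model classifies an image it has never seen.
unseen = torch.randn(1, 3, 32, 32)
print(model(unseen).argmax(dim=1).item())  # 0 = 'apple', 1 = 'orange'
```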

Crawford and Paglen go on to show that the datasets utilised in the training process of these algorithms are composed of skewed, shaky and biased elements. This means that in many cases, the outcome of the training process is a biased algorithm that reproduces social stereotypes and structural prejudices.8 In response to these critiques, some software developers have promised to improve their datasets to make them less biased and more representative. Despite these efforts, however, Crawford and Paglen (2019) insist that ‘the whole endeavour of collecting images, categorising them and labelling them is itself a form of politics, filled with questions about who gets to decide what images mean and what kinds of social and political work those representations perform’.

Crawford and Paglen’s publication represents a significant effort to expose the complex political dimension of training datasets in computer vision systems. While recognising its contribution, this article focuses on a more elemental question regarding these systems. How do these algorithms identify an object? How do they connect a singular image with a general category or class? These questions lead to a more general reflection on the issue of judgement, that is, on the issue of how a series of particular objects, despite their individual differences, can be subsumed under a general rule.

In rule-based algorithms, the programmer designs a general rule that must fit all possible individual inputs. This means that all individual differences have to be anticipated by the human programmer (Fry 2018: 11). This is why rule-based algorithms can hardly be used for computer vision systems (unless they are restricted to extremely controlled environments and very specific tasks). The complexity of human vision entails that it is practically impossible to write a general rule that can include and anticipate all singular cases. As Dan McQuillan (2018: 256) illustrates, ‘faces or handwritten letters come in many different forms; while humans learn from an early age to recognise them, it is tricky to write a specification that is precise enough for a machine yet flexible enough to deal with all the natural variations’. The fact that humans can incorporate this pattern-recognition ability so ‘naturally’ reinforces the idea that imagination is both a ‘deep mystery of the human psyche’ (a black box) and a defining trait that separates us from animals and machines. Put in these terms, object recognition algorithms can be said to achieve no small task. Through the training of neural networks, these systems manage to formulate a specific set of rules that successfully automate the complexities of visual perception. Thanks to machine learning technologies, computer vision is becoming a concrete technical imagination, a new black box responsible for the subsumption and classification of the multiplicity that defines human vision.
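
McQuillan’s point about the brittleness of hand-written specifications can be illustrated with a short, deliberately naive sketch of our own: a hard-coded rule for recognising a handwritten ‘0’ is set against a small neural network trained on scikit-learn’s bundled digit images.

```python
# An illustration (ours, not McQuillan's) of why hand-written rules
# struggle with vision. The rule below is deliberately naive; the
# network learns its own 'specification' from labelled examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grey-scale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

def rule_based_is_zero(image):
    # Naive hand-written rule: a '0' has a dark centre inside a bright ring.
    pixels = image.reshape(8, 8)
    return pixels[3:5, 3:5].mean() < pixels.mean()

# The rule misfires on natural variations of handwriting...
rule_accuracy = sum(rule_based_is_zero(x) == (y == 0)
                    for x, y in zip(X_test, y_test)) / len(y_test)

# ...while a small neural network learns a flexible specification.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X_train, y_train)
print(f"hand-written rule ('is it a zero?'): {rule_accuracy:.2f}")
print(f"learned model (all ten digits):      {clf.score(X_test, y_test):.2f}")
```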

According to Google’s software engineers Mordvintsev, Olah and Tyka (2015), computer vision is the result of a training process that is capable of ‘extracting the essence’ of a specific object. As they put it:

we train networks by simply showing them many examples of what we want them to learn, hoping they extract the essence of the matter at hand (e.g., a fork needs a handle and 2–4 tines), and learn to ignore what does not matter (a fork can be any shape, size, colour or orientation) (Mordvintsev, Olah and Tyka 2015).

This brief and straightforward account of the training process behind computer vision has far-reaching philosophical implications. When an algorithm is trained to distinguish between the images of an apple and those of an orange, could it be said that the algorithm has constructed a concept of ‘apple’ and ‘orange’, or is its classification simply the result of the repetition of unrelated accidental traits? To frame the issue within the history of philosophy, the training process behind computer vision can be said to revive the debate between David Hume and Immanuel Kant regarding the conditions of possibility of universal knowledge (i.e., the possibility of science).

In the Introduction to the Critique of Pure Reason, Kant (1998: 138) argues that the possibility of a universal rule ‘would be entirely lost if one sought, as Hume did, to derive it from a frequent association of that which happens with that which precedes and a habit (thus a merely subjective necessity) of connecting representations arising from that association’. For Hume, there are only individual objects and individual experiences of these objects (1960: 20). It is only through the association of these individual experiences in his or her faculty of the imagination that the subject forms a representation of an abstract idea. There are only individual apples. By associating the representations of many individual apples, the faculty of the imagination produces the abstract idea ‘apple’. This abstract idea is itself individual, although its application in our reasoning works ‘as if it were universal’ (Hume 1960: 20). For Kant, this explanation of the origins of abstract ideas is unacceptable because it undermines the principle of necessity behind universal rules and makes science (e.g., mathematics and theoretical physics) impossible. Since mathematics and theoretical physics are not just possible but actually exist, he contends, a series of transcendental (a priori) principles that guarantee the relation between individual objects and universal rules must be in place (Kant 1998: 147). The task of his critical (transcendental) philosophy is precisely the unearthing and systematisation of these transcendental principles (Kant 1998: 149).

One of these transcendental principles is that of ‘schematism’. In Kant’s philosophy, the role of the faculty of understanding is to identify the rules that govern nature. At the same time, the faculty of judgement is ‘the faculty of subsuming under rules, i.e., of determining whether something stands under a given rule or not’ (Kant 1998: 268). For this subsumption of an object under a concept to be possible, ‘the representations of the former must be homogeneous with the latter’ (Kant 1998: 271). The problem is that empirical intuitions and pure concepts of the understanding are heterogeneous. Hence, there must be a ‘third thing’ that stands in homogeneity both with the rule as well as with the empirical appearance in order for judgement to be able to subsume the latter under the former. Kant (1998: 272) suggests that this ‘third thing’ which makes all judgement possible is the ‘transcendental schema’. The schema is a product of the transcendental faculty of the imagination and as such must be distinguished from an image (Kant 1998: 273). An image is a product of the empirical (reproductive) faculty of imagination. Hence, an image is always particular. The schema, on the other hand, is the product of an a priori (productive) imagination. The schema is the condition of possibility that allows connecting an individual (empirical) image to a general concept of the understanding. Kant gives two examples of the schema: one belonging to a pure concept of understanding (a triangle), and one belonging to an empirical one (a dog). Kant (1998: 273) writes:

No image of a triangle would ever be adequate to the concept of it. For it would not attain the generality of the concept, which makes this valid for all triangles, right or acute, etc. … The schema of the triangle can never exist anywhere except in thought, and signifies a rule of the synthesis of the imagination.

And then:

The concept of a dog signifies a rule in accordance with which my imagination can specify the shape of a four-footed animal in general, without being restricted to any single particular shape that experience offers me or any possible image that I can exhibit in concreto. (Kant 1998: 273)

Both Hume (1960: 24) and Kant (1998: 273) considered the faculty of imagination as a crucial mechanism of the faculty of understanding and a ‘deep mystery of the human psyche’ that can only be unravelled through rigorous analysis. The main difference is that while for Hume abstract ideas are simply the result of habit (a repetition of associations in the imagination), for Kant there must be a transcendental principle (schematism) that guarantees the subsumption of empirical objects under universal (necessary) concepts. Kant (1998: 146) considered that in Hume’s philosophy habit took ‘the appearance of necessity’. To safeguard the possibility of universal science, he suggested, empirical objects had to be organised by our faculty of judgement under a priori concepts. The schema is the transcendental principle that guarantees this operation.

If we now return to the topic of machine learning as a new form of (algorithmic) imagination, two different interpretations can be given. From a Humean perspective, the classification of an image under a given category (‘apple’ or ‘dog’) can be seen as the result of habit. This means that during the training process, the algorithm associates thousands of images in order to produce a statistical model, an abstract idea of a given object. This abstract idea is totally contingent. It does not correspond to any general rule or essence. This perspective matches what Matteo Pasquinelli and Vladan Joler refer to as the ‘brute force approach’ of machine learning algorithms. According to these authors, machine learning ‘is not driven by exact formulas of mathematical analysis, but by algorithms of brute force approximation’ (Pasquinelli and Joler 2020). The reason why these algorithms are so efficient is not because they distil an essence or abstract idea out of the training set, but simply because they ‘can approximate the shape of any function given enough layers of neurons and abundant computer resources’ (Pasquinelli and Joler 2020). For Pasquinelli and Joler (2020), this is a key aspect for understanding the potentialities and the limitations of today’s algorithmic technologies (including their escalating carbon footprint).
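
The ‘brute force’ character of this approximation can be rendered in a short sketch of our own (not Pasquinelli and Joler’s): a small network fits an arbitrary curve without ever being given, or deriving, its formula; it simply adjusts thousands of weights until its outputs match the samples.

```python
# A toy illustration of 'brute force approximation' in PyTorch: the
# network never sees the formula of the curve (a sine), only samples,
# and approximates its shape by iterative weight adjustment.
import torch
import torch.nn as nn

x = torch.linspace(-3, 3, 200).unsqueeze(1)
y = torch.sin(2 * x)  # the 'unknown' function, known here only as data

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
optimiser = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    optimiser.step()

# The fit is close, yet no 'essence' of the sine has been extracted:
# only thousands of tuned weights.
print(f"final approximation error: {loss.item():.5f}")
```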

From a Kantian perspective, on the contrary, machine learning algorithms can be read as a form of technical schematism. The training process would thus consist in extracting from the data a schema for each given object (a fork is ‘a handle and 2–4 tines’, a dog is ‘a four-footed furry animal’, an apple is ‘a round, green or red fruit’, etc.). As McQuillan (2018: 256) puts it, the training process behind neural networks seems to ‘distil’ a specific ‘set of features’ from the training data, identifying hidden patterns that make object recognition possible. Until now, this pattern recognition capacity was thought of as taking place exclusively in the transcendental schema of human imagination. In computer vision technologies, however, this pattern recognition capacity seems to become automated. McQuillan (2018: 257) notes that these new pattern recognition technologies are so powerful that they make it possible to identify ‘schemata’ even where human judgement would see only noise and randomness. This, he warns us, may create the impression that these patterns ‘pre-exist’ an observer’s empirical experience, as some sort of Platonic idea: a true ‘mathematical order’ concealed behind the world of ‘visible evidence’ (McQuillan 2018: 261).9 To avoid this pitfall, a clear distinction between schema (in the Kantian sense) and idea (in the Platonic sense) must be established. While the Platonic idea refers to an essence that exists outside time and space (and is thus immutable), the schema refers to a set of rules that bridge a concept from the understanding with a particular spatiotemporal object. As Deleuze tells us in his 1978 course Sur Kant, the schema is a ‘rule of production’, that is, a rule that allows us to produce ‘in space and time’ the ‘experience of an object conforming to a concept’:

Consider the two following judgements: ‘the straight line is a line equal in all its points’; there you have a logical or conceptual definition, you have the concept of the straight line. If you say ‘the straight line is black’, you have an encounter in experience; not all straight lines are black. ‘The straight line is the shortest path from one point to another’, it is a type of judgment, a quite extraordinary one according to Kant, and why? Because it cannot be reduced to either of the two extremes that we have just seen. What is the shortest path? Kant tells us that the shortest path is the rule of production of a straight line. If you want to produce a straight line, you take the shortest path … The shortest path is the rule of production of a straight line in space and time. (Deleuze 1978)

Returning to Mordvintsev, Olah and Tyka’s description of the training process involved in computer vision, it could be said that through this process the neural network extracts an algorithmic schema from the training datasets, that is, a rule of production of a given object that will later be utilised to identify that object in new images. Neural networks would hence produce not a Platonic idea (a mathematical order outside spatiotemporal empirical experience), but an algorithmic schema in the Kantian sense (a spatiotemporal rule of production). For Kant, schematism is an a priori principle of human reason that ensures the subsumption of spatiotemporal objects under the pure concepts of the understanding. Equivalently, algorithmic schematism would appear as an a priori faculty that allows individual images to be subsumed under a mathematical (formal) abstraction. From a Kantian perspective, the ‘brute force approximation’ thesis would be insufficient because it would not be able to explain how the associations are produced in the first place. In this sense, the optimisation equations behind the training process of neural networks operate as a form of a priori principles that make the association between individual images possible.
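
To give the notion of an ‘algorithmic schema’ a rough visual correlate, consider the following sketch of our own, under the strong (and simplifying) assumption that a linear classifier is a fair stand-in for a neural network: each class the model learns is encoded as a weight vector that can be reshaped back into a small image, a crude, inspectable ‘rule of production’ extracted from the training data.

```python
# An illustrative sketch (not the authors' procedure): for a linear
# classifier trained on handwritten digits, each class's learned weight
# vector can be reshaped into an 8x8 'template', a crude analogue of a
# schema extracted from the data.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
clf = LogisticRegression(max_iter=2000).fit(digits.data, digits.target)

# One 64-dimensional weight vector per class; reshape the one for '0'.
schema_for_zero = clf.coef_[0].reshape(8, 8)

# Crude text rendering: '#' marks pixels whose brightness argues for '0'.
for row in schema_for_zero:
    print("".join("#" if w > 0 else "." for w in row))
```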

If we now return to the debate on the relation between imagination and technology sketched above, we could outline three different responses to the novelty and challenges posed by machine learning algorithms:

  1. First, we could identify a series of approaches that reproduce the difference between humans and technology, establishing a radical separation between human thought and machine learning algorithms. This is the most widespread and accepted approach among both computer engineers and cultural critics. It conceives machine learning algorithms as pure statistical approximation based on habit and association. As such, machine learning algorithms appear as essentially different from human thought (which, unlike algorithms, is based on the free play of the imagination). This view ensures a strict separation between the mechanism of algorithmic processes and the spontaneity of human imagination.10 Some examples of this approach are Pasquinelli and Joler’s (2020) account of artificial intelligence as ‘brute force approximation’ and Finn’s (2017) appeal for an ‘augmented imagination’ (a combination of the speed and scale of algorithmic processing and the creativity and spontaneity of human schematism). These approaches reproduce Adorno and Horkheimer’s (2002) distinction between a pure, transcendental schematism, and its technological standardisation. They also repeat Marx’s (1976: 283–84) definition of labour as a strictly human activity grounded on the singularity of human imagination: what distinguishes the ‘worst architect’ from the ‘best of bees’, Marx tells us, is that the architect first defines in his or her imagination the object to be built.11 In all these approaches, imagination is the key aspect separating human enterprises from the merely instinctive existence of animals and the mechanical repetition of machines (including that of algorithms and neural networks).

  2. Second, we could outline an approach which, following Hume’s perspective, posits an analogy between human imagination and machine learning algorithms. According to this approach, human knowledge is possible thanks to a process of habit (association) that takes place in human imagination in order to produce an abstract idea that will later function as if it were universal. Likewise, machine learning algorithms operate by approximating an immense amount of individual data to extract a pattern that can later be used to identify new elements. From this perspective, then, there would be no radical difference between the inner workings of human cognition and those of machine learning: they both operate as machines of ‘brute force approximation’. In Hume’s time, one might assume, it was inconceivable that a machine could execute tasks involving innate pattern-recognition abilities. Hence, Hume could define imagination as an ‘associating machine’ without threatening the singularity of human understanding and human nature. Today, however, in light of the technical revolution put forth by neural networks, it would no longer be possible to establish a difference between these two machinic forms of pattern recognition. Both humans and machine learning algorithms appear as machines that relate to the world by approximating the sum of individual experiences in order to produce an abstract idea. Hui’s (2019) latest book, Recursivity and Contingency, could be placed under this second category.

  3. Third, we could define both human imagination and neural networks as pattern recognition machines capable not only of subsuming the particular under universal rules (determinative judgement) but also of extracting universal rules from the particular (reflective judgement).12 Like human imagination, algorithmic imagination could be said to function by producing schemata that allow particular experiences to be linked with general rules. From this perspective, machine learning algorithms could be said to constitute a proper faculty of the (technical) imagination. Bernard Stiegler and Vilém Flusser advance radical theses that could help develop this third approach.13 As mentioned above, Stiegler (2011a: 53) contended that schematism is not an a priori principle, but a concrete result of the technical surfaces of inscription that shape empirical experience. For him, there could be no mental image (schema) without an objective, external, surface of inscription. Hence, the internal faculty of schematism will be, in each specific context, the result of the available external memory supports. For Kant, the condition of possibility of an individual image is a transcendental schema. For Stiegler (2011a: 53), instead, the possibility of the schema as the bridge between an individual image and a general rule is always the external technical surface on which that individual image is inscribed. Alternatively, Flusser (2002: 115) defines the technical imagination as the automation of abstract calculation that allows ‘projecting’ new possibilities into the future. This new technical imagination is not based on any sort of subjective interiority, but rather on the potentialities inscribed in the programme itself. The future, then, is no longer the realisation of a specific set of human values, but the execution of a ‘calculated game of chance’ (Flusser 2002: 119).

Post-historical Imagination

In 2015, Google engineers Alexander Mordvintsev, Christopher Olah and Mike Tyka designed Google’s Deep Dream project, a piece of software that ‘inverted’ Google’s object recognition algorithm in a Flusserian attempt to visualise what was taking place inside the programme’s black box (Mordvintsev, Olah and Tyka 2015). As mentioned above, neural networks contain hidden layers that conceal from the human programmer the abstract features and patterns that have been extracted from the training data. As McQuillan (2018: 257) puts it, ‘by definition, no human software engineer defines what these abstracted features are, and even if the contents of the hidden layer are examined, it is not necessarily possible to translate that back into comprehensible reasoning’. Once again, we find a parallel between neural networks and imagination. In both cases, their internal operation remains a hidden mystery: a black box of the human psyche in the one case, and of the inner workings of the programme in the other. Hence, Google’s Deep Dream project can be seen as an effort to open this black box and try to visualise the internal processes that make machinic schematism possible. Steyerl describes this project as ‘a feat of genius’ that

manages to visualise the unconscious of prosumer networks: images surveilling users, constantly registering their eye movements, behaviour, preferences … Walter Benjamin’s ‘optical unconscious’ has been upgraded to the unconscious of computational image divination. (2017: 56–57)
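
The mechanism of this ‘inversion’ can be sketched in a few lines. The code below is our illustration, not Google’s: it uses a small, randomly initialised network instead of the trained Inception model behind Deep Dream, so its output is mere noise, but the procedure is the one Mordvintsev, Olah and Tyka describe, namely gradient ascent on the input image so as to maximise the activation of a hidden layer.

```python
# A hedged sketch of the Deep Dream mechanism: instead of adjusting the
# network's weights to fit an image, the *image* is adjusted to excite
# a hidden layer. An untrained toy CNN stands in for Google's trained
# Inception network, so the result is noise, but the gradient-ascent
# procedure is the same.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)

for step in range(20):
    activation = model(image)   # the hidden layer's response to the image
    score = activation.norm()   # how strongly the layer 'sees' its patterns
    score.backward()
    with torch.no_grad():
        image += 0.05 * image.grad / (image.grad.norm() + 1e-8)
        image.grad.zero_()
        image.clamp_(0, 1)      # keep pixel values in a valid range

# 'image' now exaggerates whatever patterns the hidden layer responds to.
```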

That same year, Chilean visual artist Felipe Rivas San Martín employed Google’s Deep Dream software to create a series of 17 images titled El sueño neoliberal [The Neoliberal Dream] (Figure 1).

Figure 1: Felipe Rivas San Martín, El sueño neoliberal, 2015.

He began with a well-known photograph of the 11 September 1973 bombing of La Moneda, Chile’s presidential palace (Figure 2). This picture has become a symbol of Pinochet’s military coup against Salvador Allende’s government. It also symbolises the end of the country’s socialist project, interrupted by an orchestration of reactionary forces comprising Chile’s economic elite and the United States’ foreign policy apparatus. This coup led to 17 years of bloody dictatorship and to the establishment of a true neoliberal experiment in Chilean economic, political and social relations. Felipe Rivas San Martín fed this photograph into the Deep Dream algorithm and the output, besides adding colour to the original black and white image, highlighted some new features: dogs, pagodas, buildings, cars, etc. (Figure 3).

Figure 2: Bombing of La Moneda, 11 September 1973, Chile.

Figure 3: Felipe Rivas San Martín, El sueño neoliberal, 2015.

The artist then fed the new image back to the algorithm, repeating this procedure until he had a total of 17 images, one for each year of Pinochet’s dictatorship. Each time he fed the image to the algorithm, the features that had been highlighted became intensified through a process of positive feedback (Rivas San Martín 2019: 257). The seventeenth image thus offers a defined and detailed version of the features first highlighted by the algorithm, creating a sharp contrast with the original photograph of the bombing of La Moneda (Figure 4). This contrast between the first and the last image in Felipe Rivas San Martín’s artwork makes it possible to illustrate some of the key transformations of the relationship between imagination and politics in a context in which the automation of schematism has become a technical possibility. Most significantly, this artwork unveils a tension between two ages of the relation between imagination, politics and historical time.

Figure 4: Felipe Rivas San Martín, El sueño neoliberal, 2015.
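
Procedurally, the positive-feedback loop of El sueño neoliberal reduces to a few lines. In the sketch below, deep_dream is a hypothetical stand-in for one pass of the software; what matters is the recursion, in which each output becomes the next input and faint features are progressively amplified.

```python
# The artwork's feedback procedure in schematic form. 'deep_dream' is a
# hypothetical stand-in for one Deep Dream pass: here it merely amplifies
# deviations from the mean, mimicking positive feedback.
import torch

def deep_dream(img):
    return (img + 0.3 * (img - img.mean())).clamp(0, 1)

image = torch.rand(3, 64, 64)   # stand-in for the 1973 photograph
series = []
for year in range(17):          # one image per year of the dictatorship
    image = deep_dream(image)   # each output becomes the next input
    series.append(image)
# Features faintly present after the first pass dominate by the 17th.
```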

First, we can identify an age of history, grounded on human imagination and defined by notions such as progress and emancipation. This was an age in which the present was still open to the future, in which human imagination still had the potential (and the responsibility) to delineate new forms of political, economic and social arrangements. Salvador Allende’s socialist project belongs to that age in which history was the realisation of a human ideal. As Flusser (2002: 118) puts it, history was a humanist and anthropocentric project. As such, it embraced an attitude of ‘engagement in world changes’, of exploiting a natural world ‘devoid of value’ in order to achieve the ‘realisation of human values’ (Flusser 2002: 118). History then is a strictly human affair. It entails distinguishing historical time (impregnated with meaning and value) from a natural or astronomical time (as a meaningless movement of bodies).14 From this perspective, imagination is that peculiar faculty that allows the human animal to exit the natural realm of astronomical time and enter the meaningful and symbolic realm of historical time. The photograph of the bombing of La Moneda chosen by Felipe Rivas San Martín belongs to this context. More precisely, it could be argued that the bombing of La Moneda marks the interruption of historical time, an interruption that made way for 17 years in which a new relation to time was forced upon Chile’s social, political and economic relations: a post-historical time.

On the contrary, the images produced by Google’s Deep Dream in Rivas San Martín’s artwork represent that post-historical time: an age that has been brought forward precisely through the interruption of Allende’s historical project and the subsequent development of a neoliberal model. Chile’s neoliberal landscape has effectively replaced an age of history (in which imagination and politics were deeply interconnected) with a post-historical age in which politics has been reduced to the algorithmic administration of data (Rouvroy and Berns 2013). Writing in 1985, Flusser stated: ‘According to the suggested model of cultural history, we are about to leave the one-dimensionality of history for a new, dimensionless level, one to be called, for lack of a more positive designation, post-history’ (2011: 15). Furthermore, he argued that there is a strong link between the emergence of a technical imagination and the transition from history to post-history (Flusser 2011: 57). In historical time, human imagination was responsible for mediating between the present and the unpredictability of the future, establishing a sharp distinction between a nature void of meaning and the historical realisation of human (anthropocentric) values. In post-historical time, instead, the anticipation of the future appears as a mere ‘calculated game of chance’ (Flusser 2002: 119). This means that according to the ‘post-historical world picture’ suggested by Flusser, the future is reduced to a ‘field of possibilities inscribed in a program’ (2002: 119). Furthermore, the passage from history to post-history appears as a crucial aspect of the current crisis of (political) imagination. For Flusser the future is a specific experience of time that belongs to the age of history. In the post-historical age, this experience of the future is replaced by predictability. Hence, in post-history the future is no longer imagined. It is calculated. Human imagination then loses its privileged ground for outlining future political projects:

If society’s behaviour is progressively experienced and interpreted as absurdly programmed by programmes without aim and purpose, the problem of freedom, which is the problem of politics, becomes inconceivable. From a programmatic perspective, politics, and therefore history, comes to an end. (Flusser 2013: 24)

In this context, McQuillan (2018) and Mackenzie (2015) have both referred to the ‘performativity’ of algorithmic prediction. For these authors, predictive algorithms do not simply anticipate a ‘natural behaviour’, but in many cases they themselves ‘change the people’s behaviour in ways that the model did not learn about when it was trained, leading to a recursive reinforcement as actual social practice’ (McQuillan 2018: 258).
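
This ‘recursive reinforcement’ can be simulated in a toy model of our own (not McQuillan’s or Mackenzie’s): two districts share an identical underlying incident rate, but incidents are recorded only where the predictor sends observers, so an initial fluctuation hardens into apparent fact.

```python
# A toy simulation (our illustration) of performative prediction: both
# districts have the *same* underlying incident rate, but incidents are
# recorded only where the predictor directs observation, so early noise
# is recursively amplified into seemingly objective data.
import random

random.seed(1)
true_rate = 0.3      # identical in both districts
counts = [1, 0]      # a small initial fluctuation

for day in range(200):
    # Prediction: expect more incidents where more were recorded before;
    # observation follows the prediction.
    watched = 0 if counts[0] >= counts[1] else 1
    if random.random() < true_rate:  # an incident occurs...
        counts[watched] += 1         # ...but is recorded only if watched

print(counts)  # the watched district's record grows; the other stays at zero
```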

Rivas San Martín’s appropriation of Google’s Deep Dream illustrates the difficulties of outlining a political project in a context in which human imagination is being replaced by algorithmic calculability. What becomes clear is that a critique of this technology can no longer come from a humanist standpoint in which imagination appears as a strictly human faculty that ensures the realisation of an anthropocentric political project. Hence, the greatest challenge of today’s political imagination is no longer related to key issues of modern political thought (agency, privacy, intentionality, etc.) but requires a new (post-anthropocentric) understanding of the relation between humans and machines.

Conclusion

This article began by referring to Fisher and Berardi’s theses regarding the present crisis of political imagination. We have contended, using mainly Stiegler and Flusser’s insights, that these diagnoses were grounded on an anthropocentric conception of imagination: a unique human faculty that allows projecting the new out of the given. In the emerging post-historical context governed by apparatuses, the future as a political (human) promise of emancipation is threatened by the inhuman calculation of probabilities enacted by a new faculty of algorithmic imagination. Faced with this, we have outlined three possible responses. Response one calls for a humanist political project that safeguards the singularity of human imagination against the inhuman calculation of algorithmic machines. Responses two and three go beyond the opposition between humans and technology in order to argue that either (2) humans operate like algorithms of ‘brute force approximation’, or that (3) neural networks operate as a technical faculty of the imagination capable of ‘pattern recognition’ and ‘reflective judgement’.

From the standpoint of the second and third responses, the faculty of the imagination appears not as a strictly human affair, separating us from animals and machines, but rather as a transversal capability through which an organism regulates its permanent exchange with an outside (an environment in the wider sense of the term).15 We believe that as long as we continue to assume an anthropocentric concept of imagination (response one), technological automation will continue to appear antagonistic to autonomy (as the core normative value that grounds the humanist definition of political emancipation). On the contrary, if we assume the perspective of responses two or three, we could eventually overcome the opposition between technics and politics and, with it, overcome the current crises of the imagination. Put differently, a new understanding of the relation between human and machinic imaginations is needed to offer a more sustainable, non-anthropocentric idea of the future beyond the current stalemate of economic, social and ecological crises.

Notes

  1. For a thorough critique of Berardi and the alleged ‘crisis of the future’, see Osborne (2015).
  2. In Modern Western philosophy, imagination is connected to a humanist definition of the human as that particular animal caught between the finitude of material existence and the infinitude of reason. In the specific case of Kantian philosophy, imagination appears as that distinctive human faculty that allows bridging the particular to the universal, the realm of need to the realm of freedom (Matherne 2016: 66).
  3. For an analysis of the political dimension of the faculty of imagination as a sensus communis, see Hannah Arendt, Lectures on Kant’s Political Philosophy (1992: 71). See also Georges Didi-Huberman (2019).
  4. Haraway (2016) is probably the author who has contributed the most to popularising the political dimension of this idea of the human as a hybrid being.
  5. A similar argument is put forth by Sloterdijk (2017: 235) for whom the source of ‘anti-technological ressentiments’ is the ‘double-morality […] of thinking pre-technologically and living technologically’. Furthermore, current technical developments are forcing us into a paradoxical situation in which ‘classical humanism […] is practically exhausted’ and where ‘one must become a cyberneticist to be able to remain a humanist’ (Sloterdijk 2017: 236).
  6. For a detailed introduction to machine learning and neural networks see Kurenkov (2015) and Greenfield (2017).
  7. For an introduction to the specific topic of computer vision and object recognition algorithms, see Crawford and Paglen (2019).
  8. The issue of algorithmic bias has been one of the most explored topics within critical algorithmic studies. Some key references addressing this issue are Angwin et al. (2016); O’Neil (2016); Buolamwini and Gebru (2018); and Noble (2018).
  9. One example of this ‘neo-platonism’ can be found in Anderson’s (2008) piece in Wired Magazine ‘The End of Theory: The Data Deluge Makes the Scientific Method Obsolete’.
  10. This radical distinction between the mechanisms of computer processes and the spontaneity of human thought is also found in Searle’s (1980) critique of the Turing Test and Dreyfus’ (1999) critique of artificial reason.
  11. For a critical analysis of the issue of the relation between labour and imagination in machine learning algorithms from a Marxist perspective, see Dyer-Witheford, Kjosen and Steinhoff (2019: 120–124).
  12. For the distinction between determinative and reflective judgment, see Kant (1987: 18–19).
  13. Beyond the work of Stiegler and Flusser, we could also mention here the thinking of Simondon (2018) and Kittler (1997). Simondon (2018: 172) contends that Kant’s Critique of Judgement represents the starting point for a new cybernetic understanding of reality. This is so because of the category of ‘reflective judgement’, which begins to explore the relation between ‘operations and structures’ from a processual perspective (Simondon 2018: 172). Similarly, Kittler (1997: 130) conceives Kant’s treatment of ‘reflective judgement’ as a mechanism of pattern recognition in the ‘second degree’, that is, as a mechanism aimed at optimising the ‘mechanism of recognition in general’. Given his historical context, Kant was incapable of imagining that the human ability for reflective judgment could be transferred to a machine, hence safeguarding the singularity of human imagination (Kittler 1997: 131). After the invention of the Turing machine and the rise of cybernetic theory, however, it becomes possible to imagine a pattern recognition machine capable of perceiving, remembering and processing data automatically (Kittler 1997: 135). With the recent development of neural networks and machine learning algorithms, this pattern recognition machine could be said to reach new heights, executing concrete processes of ‘reflective judgment’ in which general rules are effectively induced from particular data (see Greenfield 2017: 220–222; and Dyer-Witheford, Kjosen and Steinhoff 2019: 120–124).
  14. For a discussion on the difference between historical and astronomical time from a humanist and anthropocentric perspective, see Panofsky (2004). For a critical and posthumanist reflection on this distinction, see Celis (2020).
  15. For an analysis of the notions of imagination and creativity as the informational exchange between an organism and the environment, see Zylinska (2020: 67–68).

Competing Interests

The authors have no competing interests to declare.

References

1 Adorno, T., & Horkheimer, M. (2002). Dialectic of Enlightenment: Philosophical Fragments. Stanford, CA: Stanford University Press.

2 Anderson, C. (2008). The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Wired. Available at: https://www.wired.com/2008/06/pb-theory/

3 Angwin, J., et al. (2016). Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica.Org. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

4 Arendt, H. (1992). Lectures on Kant’s Political Philosophy. Chicago, IL: The University of Chicago Press.

5 Berardi, F. (2011). After the Future. Edinburgh: AK Press.

6 Berardi, F. (2014). And: Phenomenology of the End. Helsinki: Aalto Books.

7 Bradley, A. (2011). Originary Technicity: The Theory of Technology from Marx to Derrida. Basingstoke: Palgrave Macmillan. DOI:  http://doi.org/10.1007/978-0-230-30765-0

8 Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Paper presented at the 1st Conference on Fairness, Accountability and Transparency, New York.

9 Celis, C. (2020). La Allagmática En Cuanto Disciplina Poshumanista: Nuevas Metodologías Para El Estudio De Las Imágenes En El Contexto De Las Máquinas De Visión Algorítmica. Revista 180, 46, 26–37.

10 Crawford, K., & Paglen, T. (2019). Excavating AI: The Politics of Training Sets for Machine Learning. Available at: https://www.excavating.ai/ (retrieved May 3, 2020). DOI:  http://doi.org/10.1007/s00146-021-01162-8

11 Deleuze, G. (1978). Sur Kant (translated by Melissa McMahon). Available at: https://www.webdeleuze.com/textes/65

12 Didi-Huberman, G. (2019). L’imagination, notre Commune. Available at: https://2019.lhistoireavenir.eu/evt/191/

13 Dreyfus, H. (1999). What Computers Still Can’t Do: A Critique of Artificial Reason. London: The MIT Press.

14 Dyer-Witheford, N., Mikkola Kjosen, A., & Steinhoff, J. (2019). Inhuman Power. London: Pluto Press. DOI:  http://doi.org/10.2307/j.ctvj4sxc6

15 Finn, E. (2017). What Algorithms Want: Imagination in the Age of Computing. Cambridge, MA: MIT Press. DOI:  http://doi.org/10.7551/mitpress/9780262035927.001.0001

16 Fisher, M. (2009). Capitalist Realism: Is There No Alternative? London: Zero Books.

17 Flusser, V. (2000). Towards a Philosophy of Photography. London: Reaktion Books.

18 Flusser, V. (2002). Writings. Minneapolis, MN: University of Minnesota Press.

19 Flusser, V. (2011). Into the Universe of Technical Images. Minneapolis, MN: University of Minnesota Press. DOI:  http://doi.org/10.5749/minnesota/9780816670208.001.0001

20 Flusser, V. (2013). Post-History. Minneapolis, MN: Univocal.

21 Fry, H. (2018). Hello World: How to Be Human in the Age of the Machine. New York: W. W. Norton & Company.

22 Fukuyama, F. (1989). The End of History? The National Interest, 16, 3–18.

23 Greenfield, A. (2017). Radical Technologies: The Design of Everyday Life. London: Verso.

24 Haraway, D. J. (2016). Staying with the Trouble: Making Kin in the Chthulucene. Durham, NC: Duke University Press. DOI:  http://doi.org/10.2307/j.ctv11cw25q

25 Hui, Y. (2019). Recursivity and Contingency. London: Rowman & Littlefield.

26 Hume, D. (1960). A Treatise of Human Nature. Oxford: Clarendon Press.

27 Jameson, F. (2005). Archaeologies of the Future. New York: Verso.

28 Kant, I. (1987). Critique of Judgement. Cambridge: Hackett Publishing Company.

29 Kant, I. (1998). Critique of Pure Reason. Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9780511804649

30 Kittler, F. (1997). The World of the Symbolic – A World of the Machine. In J. Johnston (Ed.). Literature, Media, Information Systems (pp. 130–146). Amsterdam: G+B Arts.

31 Kurenkov, A. (2015). A ‘Brief’ History of Neural Nets and Deep Learning. Available at: http://www.andreykurenkov.com/writing/ai/a-brief-history-of-neural-nets-and-deep-learning/

32 Mackenzie, A. (2015). The Production of Prediction: What does Machine Learning Want? European Journal of Cultural Studies, 18(4–5), 429–445. DOI:  http://doi.org/10.1177/1367549415577384

33 Marx, K. (1976). Capital: A Critique of Political Economy, Volume 1. Harmondsworth: Penguin/New Left Review.

34 Matherne, S. (2016). Kant’s Theory of the Imagination. In A. Kind (Ed.), The Routledge Handbook of Philosophy of Imagination (pp. 55–68). London: Routledge.

35 McQuillan, D. (2018). Data Science as Machinic Neoplatonism. Philosophy & Technology, 31, 253–272. DOI:  http://doi.org/10.1007/s13347-017-0273-3

36 Mordvintsev, A., Olah, C., & Tyka, M. (2015). Inceptionism: Going Deeper into Neural Networks. Google AI Blog. Available at: https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html

37 Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press. DOI:  http://doi.org/10.2307/j.ctt1pwt9w5

38 O’Neil, C. (2016). Weapons of Math Destruction. New York: Crown.

39 Osborne, P. (2015). Future Present: Lite, Dark, and Missing. Radical Philosophy, 191, 39–46.

40 Panofsky, E. (2004). Reflections on Historical Time. Critical Inquiry, 30(4), 691–701. DOI:  http://doi.org/10.1086/423768

41 Pasquinelli, M., & Joler, V. (2020). The Nooscope Manifested: AI as Instrument of Knowledge Extractivism. Available at: https://nooscope.ai/. DOI:  http://doi.org/10.1007/s00146-020-01097-6

42 Rivas San Martín, F. (2019). Internet, Mon Amour. Santiago: Écfrasis.

43 Rouvroy, A., & Berns, T. (2013). Algorithmic Governmentality and the Prospects of Emancipation. Réseaux, 177(1), 163–196. DOI:  http://doi.org/10.3917/res.177.0163

44 Searle, J. (1980). Minds, Brains and Programs. Behavioral and Brain Sciences, 3(3), 417–457. DOI:  http://doi.org/10.1017/S0140525X00005756

45 Sloterdijk, P. (2017). Wounded by Machines. In Not Saved: Essays After Heidegger (pp. 217–236). Cambridge: Polity.

46 Simondon, G. (2018). On the Mode of Existence of Technical Objects. Minneapolis, MN: University of Minnesota Press.

47 Steyerl, H. (2017). A Sea of Data: Apophenia and Pattern (Mis)Recognition. In Duty Free Art: Art in the Age of Planetary Civil War. London: Verso.

48 Stiegler, B. (2010). For a New Critique of Political Economy. Malden, MA: Polity.

49 Stiegler, B. (2011a). Technics and Time, 3: Cinematic Time and the Question of Malaise. Stanford, CA: Stanford University Press.

50 Stiegler, B. (2011b). Suffocated Desire or How the Cultural Industry Destroys the Individual: Contribution to a Theory of Mass Consumption. Parrhesia, 13, 52–61.

51 Zylinska, J. (2020). AI Art: Machine Visions and Warped Dreams. London: Open Humanities Press.