<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<!--<?xml-stylesheet type="text/xsl" href="article.xsl"?>-->
<article article-type="research-article" dtd-version="1.2" xml:lang="en" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id journal-id-type="issn">2633-4321</journal-id>
<journal-title-group>
<journal-title>Anthropocenes &#8211; Human, Inhuman, Posthuman</journal-title>
</journal-title-group>
<issn pub-type="epub">2633-4321</issn>
<publisher>
<publisher-name>University of Westminster Press</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.16997/ahip.1016</article-id>
<article-categories>
<subj-group>
<subject>Research</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Notes on an Algorithmic Faculty of the Imagination</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Celis</surname>
<given-names>Claudio</given-names>
</name>
<email>claudiocelisbueno@gmail.com</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Schultz</surname>
<given-names>Mar&#237;a Jes&#250;s</given-names>
</name>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
</contrib-group>
<aff id="aff-1"><label>1</label>Sant&#8217;Anna School of Advanced Studies, IT</aff>
<aff id="aff-2"><label>2</label>Independent Artist, CL</aff>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2021-12-16">
<day>16</day>
<month>12</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>2</volume>
<issue>1</issue>
<elocation-id>12</elocation-id>
<history>
<date date-type="received" iso-8601-date="2021-03-01">
<day>01</day>
<month>03</month>
<year>2021</year>
</date>
<date date-type="accepted" iso-8601-date="2021-03-01">
<day>01</day>
<month>03</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright: &#x00A9; 2021 The Author(s)</copyright-statement>
<copyright-year>2021</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See <uri xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</uri>.</license-p>
</license>
</permissions>
<self-uri xlink:href="http://www.anthropocenes.net/articles/10.16997/ahip.1016/"/>
<abstract>
<p>Following Bernard Stiegler&#8217;s perspective on an &#8216;originary technicity&#8217;, this article explores the relationship between imagination and politics in light of recent developments in neural network technologies (also known as machine learning algorithms). It examines how this new technology is reshaping the political role and place of human imagination. Furthermore, it uses Vil&#233;m Flusser&#8217;s terminology to examine to what extent this technology can be understood as a new &#8216;technical faculty of the imagination&#8217;. The first part will argue, following Stiegler and Flusser, for an approach to the notion of imagination that challenges the human-technology opposition. The second part will introduce the topic of neural network technologies using the specific example of algorithmic image recognition systems and then, through the prism of the Kant-Hume debate on the foundations of universal knowledge, it will set out three possible perspectives on the question of an algorithmic imagination. The third and final section will return to Flusser to see how the relation between imagination and politics is shifting from a modern and human-centred perspective to a post-historical and post-anthropocentric one.</p>
</abstract>
<kwd-group>
<kwd>Machine learning</kwd>
<kwd>Immanuel Kant</kwd>
<kwd>David Hume</kwd>
<kwd>technology</kwd>
<kwd>post-history</kwd>
<kwd>imagination</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec>
<title>Introduction</title>
<p>In <italic>Capitalist Realism</italic>, Mark Fisher builds on the idea, usually attributed to Slavoj &#381;i&#382;ek or Fredric Jameson, &#8216;that it is easier to imagine the end of the world than it is to imagine the end of capitalism&#8217; (<xref ref-type="bibr" rid="B16">2009: 2</xref>). For Fisher, capitalism has become so naturalised that a non-capitalist way of life becomes &#8216;unimaginable&#8217;. The term &#8216;capitalist realism&#8217; refers to the &#8216;widespread sense that not only is capitalism the only viable political and economic system, but also that it is now impossible even to imagine a coherent alternative to it&#8217; (<xref ref-type="bibr" rid="B16">Fisher 2009: 2</xref>). Once, cinema and literature were exercises of the imagination aimed at describing such coherent alternatives (<xref ref-type="bibr" rid="B27">Jameson 2005</xref>). Today, these media have been reduced to the endless repetition of social and ecological catastrophe as if it were the inevitable fate of human nature (<xref ref-type="bibr" rid="B16">Fisher 2009: 2</xref>).</p>
<p>In a similar argument, Berardi (<xref ref-type="bibr" rid="B5">2011</xref>) explores how the idea of &#8216;no future&#8217; has become a commonplace in contemporary culture. During the 20th century, he claims, we went from the &#8216;enthusiastic expectations and proclamations of the Futurists&#8217;, to the &#8216;no future&#8217; of punk culture, and finally to the &#8216;there is no alternative&#8217; of Thatcherism and Reaganomics (<xref ref-type="bibr" rid="B5">Berardi 2011</xref>). This process belongs to a long history of Western civilisation, from the opening up of the &#8216;New World&#8217; to &#8216;Spanish colonisation up to the Hollywood colonisation of the planetary mind&#8217;, in which our imagination has become the main gate through which capitalism has penetrated the &#8216;collective unconscious&#8217; (<xref ref-type="bibr" rid="B6">Berardi 2014: 98</xref>). Nowadays, the radical proliferation of images and information is exposing the imagination to a process of &#8216;vertiginous acceleration&#8217; (<xref ref-type="bibr" rid="B6">Berardi 2014: 34</xref>). In this context, human imagination is struggling more and more to associate significant pieces of data in order to imagine a coherent alternative future (<xref ref-type="bibr" rid="B6">Berardi 2014: 195</xref>).<xref ref-type="fn" rid="n1">1</xref> Both Fisher and Berardi present radical responses to a scenario that had once been defined with enthusiasm by Fukuyama (<xref ref-type="bibr" rid="B22">1989</xref>). What Fukuyama addressed in terms of the &#8216;end of history&#8217; (and hence as the end of conflict between different ideologies), Fisher and Berardi interpret as the triumph of a hegemonic social order that has undermined every possibility of imagining otherwise.</p>
<p>Around the same years in which Fisher and Berardi were both diagnosing this profound crisis of imagination, Bernard Stiegler published a short essay titled <italic>For a New Critique of Political Economy</italic> (<xref ref-type="bibr" rid="B48">2010</xref>). In it, Stiegler called for a renewal of the critique of contemporary societies that could satisfactorily integrate the question of technology. As he put it, &#8216;I would like to demonstrate here that the question of tertiary retention opens up a new perspective on political economy and its critique, and, now more than ever, that it makes a new critique of political economy the essential task of philosophy&#8217; (<xref ref-type="bibr" rid="B48">2010: 8</xref>). The question of &#8216;tertiary retention&#8217; is the question of how technical (external) objects define and shape the internal faculties of human subjectivity (perception, memory, imagination, desire, etc.). In doing so, technics play a fundamental role in shaping and reshaping the conditions of possibility of intersubjective (&#8216;transindividual&#8217;) relations. Stiegler calls this &#8216;originary technicity&#8217;, an approach that conceives technics as actively and constantly redefining the link between imagination and politics. From the perspective of this originary technicity, the current impossibility of imagining a post-capitalist future, diagnosed both by Fisher and Berardi, could not be properly addressed without an analysis of the technical objects (tertiary memory) that shape our present.</p>
<p>Following Stiegler, this article explores the relationship between imagination and politics in light of recent developments in neural network technologies (also known as machine learning algorithms). It examines how this new technology is reshaping the political role and place of human imagination. Furthermore, it uses Flusser&#8217;s (<xref ref-type="bibr" rid="B18">2002</xref>) terminology in order to examine to what extent this technology can be understood as a new &#8216;technical faculty of the imagination&#8217;. The first part will argue, following Stiegler and Flusser, for an approach to the notion of imagination that challenges the human-technology opposition. The second part will introduce the topic of neural network technologies using the specific example of algorithmic image recognition systems and then, through the prism of the Kant-Hume debate, it will set out three possible perspectives on the question of an algorithmic imagination. The third and final section will return to Flusser to see how the relation between imagination and politics is shifting from a modern and human-centred perspective to a post-historical and post-anthropocentric one.</p>
</sec>
<sec>
<title>Imagination and Technology</title>
<p>Writing in the 1940s, Adorno and Horkheimer claimed that the &#8216;industrialisation of culture&#8217; was &#8216;infecting everything with sameness&#8217; (<xref ref-type="bibr" rid="B1">2002: 94</xref>). The authors refer to Kant&#8217;s concept of schematism to explain the catastrophic consequences brought forward by this industrialisation. In Kant&#8217;s (<xref ref-type="bibr" rid="B29">1998</xref>) philosophical system, schematism is defined as a specific function of the faculty of the imagination that allows subjects to bridge the manifold given to the senses with the unity of a concept of the understanding. Schematism hence requires an active contribution from the subject&#8217;s imagination (<xref ref-type="bibr" rid="B29">Kant 1998: 271</xref>). In the culture industry, however, this active contribution is &#8216;denied to the subject&#8217; (<xref ref-type="bibr" rid="B1">Adorno and Horkheimer 2002: 98</xref>). The industrialisation of culture implies that the consumer is no longer required to use his or her imagination in order to subsume sensible data under a concept of the understanding. &#8216;For the consumer&#8217;, Adorno and Horkheimer state, &#8216;there is nothing left to classify since the classification has been pre-empted by the schematism of production&#8217; (<xref ref-type="bibr" rid="B1">2002: 98</xref>). If we walk into a supermarket and buy a can of fruit, for example, we are not required to choose an individual fruit that fits a general rule or schema (i.e., check for its ripeness, look for flaws, etc.). The promise of industrialisation is that all cans of fruit are, and will remain, the same. The faculty of imagination, responsible for the active task of individualising the right fruit, is no longer necessary, since this operation has already been realised in the production line. In a world of sameness, imagination becomes &#8216;atrophied&#8217;. For Adorno and Horkheimer, this phenomenon is not restricted to the consumption of basic commodities; it is also taking place within the sphere of culture itself. By building on &#8216;ready-made clich&#233;s&#8217;, the culture industry is putting an end to the unexpected and, hence, denying its audience &#8216;any dimension in which they might roam freely in imagination&#8217; (<xref ref-type="bibr" rid="B1">Adorno and Horkheimer 2002: 100</xref>). The atrophy of imagination, these authors suggest, &#8216;needs not to be traced back to psychological mechanisms&#8217; but to the &#8216;objective makeup&#8217; of cultural products themselves (<xref ref-type="bibr" rid="B1">Adorno and Horkheimer 2002: 100</xref>). From their perspective, culture should have a critical role, challenging the ideological premises that guide social reproduction. In their modernist vision, this is achieved not through the representation of specific political or ideological contents, but through the artwork&#8217;s formal potential to construct an internal logic different to the one &#8216;guiding social reproduction&#8217; (<xref ref-type="bibr" rid="B1">Adorno and Horkheimer 2002: 95</xref>). With the industrialisation of culture, however, the artwork adopts the same logic as the production line: that of instrumental reason. Hence, the world of culture is not only unable to offer an alternative logic from where to imagine new forms of social reproduction but also becomes an active agent on behalf of that reproduction.</p>
<p>Following Adorno and Horkheimer&#8217;s (<xref ref-type="bibr" rid="B1">2002</xref>) analysis, the current impossibility of imagining a world beyond capitalism identified by both Fisher (<xref ref-type="bibr" rid="B16">2009</xref>) and Berardi (<xref ref-type="bibr" rid="B5">2011</xref>, <xref ref-type="bibr" rid="B6">2014</xref>) could be read as the logical consequence of the &#8216;atrophy of the faculty of imagination&#8217; set in motion by the &#8216;industrialisation of culture&#8217;. Nonetheless, as Stiegler (<xref ref-type="bibr" rid="B49">2011a: 40</xref>) warns us, Adorno and Horkheimer&#8217;s explanation has one key oversight: by opposing an innate form of schematism to an industrial one, these authors present imagination as a natural and ahistorical faculty which functions as a measuring rod against which they evaluate the dehumanising process put forth by industrial capitalism. In other words, Adorno and Horkheimer reproduce a humanist and anthropocentric opposition between the purity and spontaneity of human imagination and the mechanistic nature of technology and industry. Human imagination is thus referred to as &#8216;a secret mechanism within the psyche&#8217; that has now been replaced by the inhuman logic of industry (<xref ref-type="bibr" rid="B1">Adorno and Horkheimer 2002: 98</xref>). By presenting the issue in these terms, Adorno and Horkheimer reproduce an anthropocentric conception of imagination as that which ensures the singularity of humans as opposed to both machines and animals.<xref ref-type="fn" rid="n2">2</xref></p>
<p>From this modern, humanist, and anthropocentric perspective, the relation between imagination and politics can be said to respond to at least these two premises: first, imagination is a transcendental faculty that constitutes the common ground (the <italic>sensus communis</italic>) which unifies the community of humans as a universal species, separating them from both animals and machines.<xref ref-type="fn" rid="n3">3</xref> Second, imagination is that <italic>a priori</italic> principle that allows the new to be created out of the given, hence making progress and social transformation possible. The &#8216;atrophy of imagination&#8217; brought forward by the culture industry denounced by Adorno and Horkheimer could hence be seen as the decline of these two modern and humanist premises: the weakening of the common ground that ensures a universal community of human beings, and the impossibility of imagining other possible futures for those beings beyond the sameness of the present.</p>
<p>As noted above, however, the problem with this line of thought is that it fails to address how human imagination is intertwined with the historical, social and technical dimensions. As a response, Stiegler (<xref ref-type="bibr" rid="B48">2010</xref>, <xref ref-type="bibr" rid="B49">2011a</xref>, <xref ref-type="bibr" rid="B50">2011b</xref>) has addressed the issue of the atrophy of imagination and the impossibility of imagining a future beyond capitalism without falling back upon an ahistorical and transcendental notion of imagination. Instead of opposing imagination to technology, Stiegler develops a critique of contemporary capitalism from the perspective of how recent technological and social transformations have remodelled our faculty of imagination and are hence creating a short circuit between the acceleration of the flows of information on the one hand, and the limits of human subjectivity on the other. As mentioned above, Stiegler defines a theory of &#8216;originary technicity&#8217; according to which human imagination and technical objects form a hybrid and intertwined notion of subjectivity (<xref ref-type="bibr" rid="B7">Bradley 2011: 102</xref>). From this perspective, it would still be correct to say that imagination is that unique faculty that defines us as human beings. That faculty, however, is not an ahistorical trait hidden in the depths of our psyche, but the result of our technical (hybrid) exchange with the world. This not only highlights the historical, social and technical nature of the faculty of imagination but also blurs the limit between interiority and exteriority, rendering human beings as the outcome of a hybrid intertwining with technics that precedes and exceeds that limit. Stiegler&#8217;s theory of originary technicity hence conceives humans not as pure natural beings opposed to technology, but rather as hybrid entities composed of biological, social and technical components.<xref ref-type="fn" rid="n4">4</xref></p>
<p>This puts into question the two modern premises regarding human imagination mentioned above. If the imagination is exposed to technical, historical and social mutations, then it can no longer function as the common ground for the universal community of human beings (if the imagination changes historically, then it cannot be posed as the <italic>sensus communis</italic> that holds the human community together; or at least this community cannot be posed as universal, but rather as situated). Moreover, if the imagination is the result of a constant exchange with technics, then the possibility of imagining an alternative future will always depend on the interplay between the available technical surfaces of information and the available capacity to process this information. By challenging these two premises, Stiegler offers an alternative explanation to the current crisis of imagination beyond the framework of Adorno and Horkheimer. The problem of contemporary capitalism, Stiegler (<xref ref-type="bibr" rid="B48">2010: 107</xref>) claims, is not that it replaces human (pure and natural) schematism with an industrial (technical and impure) one, but rather that it is creating a process of acceleration that is &#8216;intrinsically self-destructive&#8217;. The current acceleration of capitalist production is causing an &#8216;annihilation of time&#8217;, that is, an acceleration of the flows of information beyond the limits of individual subjectivity. This has two consequences. First, by accelerating time and destroying the temporal experience of individual subjects, capitalism is undermining the time necessary for &#8216;human desire&#8217; (i.e., &#8216;the gap between the drive and its satisfaction&#8217;), which is the engine that drives capitalist consumption (<xref ref-type="bibr" rid="B50">Stiegler 2011b</xref>). Second, the acceleration of information is destroying individual imagination (i.e., the gap between different elements that makes association possible in order to anticipate the future and invent the new). Thus, the current impossibility of imagining the future would be the result of the &#8216;annihilation of time&#8217; and the &#8216;withering of desire&#8217; put forth by capitalism&#8217;s intrinsic need for constant acceleration (<xref ref-type="bibr" rid="B48">Stiegler 2010</xref>, <xref ref-type="bibr" rid="B50">2011b</xref>).</p>
<p>Another way of understanding the relation between imagination and technics is found in the work of Flusser (<xref ref-type="bibr" rid="B18">2002</xref>, <xref ref-type="bibr" rid="B19">2011</xref>). Motivated by the new image-production technologies of the 19th and 20th centuries, Flusser (<xref ref-type="bibr" rid="B18">2002: 114</xref>) argued for the differentiation between two ages of the imagination: an age in which image production was entirely dependent on human agency, and an age of technical images in which image production depends more and more on the workings of an apparatus. In the first case, imagination appears as a unique faculty of human beings: the &#8216;ability to step back from the objective world into one&#8217;s own subjectivity&#8217; (<xref ref-type="bibr" rid="B18">2002: 111</xref>). In the second case, apparatuses replace human imagination in the production of images (<xref ref-type="bibr" rid="B17">Flusser 2000: 14</xref>). In both cases imagination appears as having the structure of a &#8216;black box&#8217;, that is, a closed system that conceals its operation from the &#8216;user&#8217;. Hence, just as the critique of aesthetic judgement in the age of human-produced images required a critique of the faculty of imagination, the critique of technical images requires a critique of the technical imagination which must begin by elucidating the inner workings of the black box (<xref ref-type="bibr" rid="B17">Flusser 2000: 16</xref>). Flusser defines the apparatus as a black box that carries out the tasks programmed in it. The privileged position of the apparatus resides in the fact that it can carry out these operations faster and with far fewer mistakes than human beings (<xref ref-type="bibr" rid="B17">Flusser 2000: 32</xref>).
For this reason, humans are becoming &#8216;less and less competent&#8217; to deal with these complex programmes and are hence having &#8216;to rely more and more on apparatuses&#8217; (<xref ref-type="bibr" rid="B17">Flusser 2000: 32</xref>). Put differently, humans are becoming less dependent on human imagination and more dependent on technical imagination. For Flusser, however, the technophobic responses to the current crisis of political imagination are the result of assessing the new technical imagination using normative categories inherited from a previous (humanist) framework.<xref ref-type="fn" rid="n5">5</xref> In a world governed by apparatuses, programmes, and technical images, human imagination is no longer sufficient for offering a suitable idea of the future. Hence, instead of continuing to oppose humans and machines, we need to consider the idea that the future can no longer be imagined in anthropocentric terms but needs to be &#8216;projected&#8217; by &#8216;operators&#8217; using the potentialities of the new technical imagination (<xref ref-type="bibr" rid="B18">Flusser 2002: 115</xref>).</p>
<p>Following Flusser&#8217;s reflections, the next section will explore how the recent development of machine learning algorithms (also sometimes referred to as neural networks) can be understood as the emergence of a new technical faculty of the imagination.<xref ref-type="fn" rid="n6">6</xref> This requires redefining the limit between humans and machines (overcoming the anthropocentric opposition between imagination and technology), as well as re-evaluating the relation between imagination and politics. Special attention will be given to neural networks that have been trained for object recognition, also known as computer vision systems.<xref ref-type="fn" rid="n7">7</xref> As will be argued, these systems seem to challenge the anthropocentric notion of imagination outlined above, posing new questions for our definition of imagination as a strictly human faculty.</p>
</sec>
<sec>
<title>Towards an Algorithmic Imagination</title>
<p>In their 2019 report <italic>Excavating AI</italic>, Kate Crawford and Trevor Paglen develop an extensive analysis of the political dimension of computer vision, paying particular attention to the training process behind this technology. Crawford and Paglen (<xref ref-type="bibr" rid="B10">2019</xref>) write:</p>
<disp-quote>
<p>to build a computer vision system that can, for example, recognise the difference between pictures of apples and oranges, a developer has to collect, label, and train a neural network on thousands of labelled images of apples and oranges. On the software side, the algorithms conduct a statistical survey of the images, and develop a model to recognise differences between the two &#8216;classes&#8217;. If all goes according to plan, the trained model will be able to distinguish the difference between images of apples and oranges that it has never encountered before. Training sets, then, are the foundation on which contemporary machine-learning systems are built. They are central to how AI systems recognise and interpret the world.</p>
</disp-quote>
<p>Crawford and Paglen go on to show that the datasets utilised in the training process of these algorithms are composed of skewed, shaky and biased elements. This means that in many cases, the outcome of the training process is a biased algorithm that reproduces social stereotypes and structural prejudices.<xref ref-type="fn" rid="n8">8</xref> In response to these critiques, some software developers have promised to improve their datasets to make them less biased and more representative. Despite these efforts, however, Crawford and Paglen (<xref ref-type="bibr" rid="B10">2019</xref>) insist that &#8216;the whole endeavour of collecting images, categorising them and labelling them is itself a form of politics, filled with questions about who gets to decide what images mean and what kinds of social and political work those representations perform&#8217;.</p>
<p>Crawford and Paglen&#8217;s publication represents a significant effort to denounce the complex political dimension of training datasets in computer vision systems. While recognising its contribution, this article focuses on a more elemental question regarding these systems. How do these algorithms identify an object? How do they connect a singular image with a general category or class? These questions lead to a more general reflection on the issue of judgement, that is, on the issue of how a series of particular objects, despite their individual differences, can be subsumed under a general rule.</p>
<p>In rule-based algorithms, the programmer designs a general rule that must fit all possible individual inputs. This means that all individual differences have to be anticipated by the human programmer (<xref ref-type="bibr" rid="B21">Fry 2018: 11</xref>). This is why rule-based algorithms can hardly be used for computer vision systems (unless they are restricted to extremely controlled environments and very specific tasks). The complexity of human vision entails that it is practically impossible to write a general rule that can include and anticipate all singular cases. As Dan McQuillan (<xref ref-type="bibr" rid="B35">2018: 256</xref>) illustrates, &#8216;faces or handwritten letters come in many different forms; while humans learn from an early age to recognise them, it is tricky to write a specification that is precise enough for a machine yet flexible enough to deal with all the natural variations&#8217;. The fact that humans can incorporate this pattern-recognition ability so &#8216;naturally&#8217; reinforces the idea that imagination is both a &#8216;deep mystery of the human psyche&#8217; (a black box) and a defining trait that separates us from animals and machines. Put in these terms, object recognition algorithms can be said to achieve no small task. Through the training of neural networks, these systems manage to formulate a specific set of rules that successfully automates the complexities of visual perception. Thanks to machine learning technologies, computer vision is becoming a concrete technical imagination, a new black box responsible for the subsumption and classification of the multiplicity that defines human vision.</p>
<p>According to Google&#8217;s software engineers Mordvintsev, Olah and Tyka (<xref ref-type="bibr" rid="B36">2015</xref>), computer vision is the result of a training process that is capable of &#8216;extracting the essence&#8217; of a specific object. As they put it:</p>
<disp-quote>
<p>we train networks by simply showing them many examples of what we want them to learn, hoping they extract the essence of the matter at hand (e.g., a fork needs a handle and 2&#8211;4 tines), and learn to ignore what does not matter (a fork can be any shape, size, colour or orientation) (<xref ref-type="bibr" rid="B36">Mordvintsev, Olah and Tyka 2015</xref>).</p>
</disp-quote>
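<p>The quoted account can be made concrete with a deliberately toy sketch. The following Python is emphatically not how convolutional neural networks work; it stands in for them with the simplest possible learner (a nearest-centroid classifier), and every feature name and number is invented for illustration. What it shares with the quoted description is the essential point: the rule that separates &#8216;apple&#8217; from &#8216;orange&#8217; is derived from labelled examples rather than written in advance by a programmer, and the accidental variation among individual examples is averaged away into a learned &#8216;essence&#8217;.</p>

```python
# Toy sketch: learning a classification rule from labelled examples.
# Each 'image' is reduced to two invented features: (redness, roundness).
# Training averages the features of each class; this mean is the learned
# 'essence', in which accidental variation between examples cancels out.

def train(examples):
    """examples: list of (features, label); returns one centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def classify(model, features):
    """Assign the label whose learned 'essence' (centroid) is nearest."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: distance(model[label], features))

# Hypothetical training set: (redness, roundness) per labelled image.
training_set = [
    ([0.9, 0.6], 'apple'), ([0.8, 0.7], 'apple'), ([0.85, 0.65], 'apple'),
    ([0.3, 0.9], 'orange'), ([0.2, 0.95], 'orange'), ([0.25, 0.85], 'orange'),
]
model = train(training_set)
print(classify(model, [0.88, 0.62]))  # an unseen 'image' near the apples -> apple
```

<p>The philosophical question raised below applies even to this miniature case: the model contains no concept of &#8216;apple&#8217;, only an average of accidental traits that happens to generalise to unseen examples.</p>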
<p>This brief and straightforward account of the training process behind computer vision has far-reaching philosophical implications. When an algorithm is trained to distinguish between the images of an apple and those of an orange, could it be said that the algorithm has constructed a concept of &#8216;apple&#8217; and &#8216;orange&#8217;, or is its performance simply the result of the repetition of unrelated accidental traits? To frame the issue within the history of philosophy, the training process behind computer vision can be said to revive the debate between David Hume and Immanuel Kant regarding the conditions of possibility of universal knowledge (i.e., the possibility of science).</p>
<p>In the <italic>Introduction</italic> to the <italic>Critique of Pure Reason</italic>, Kant (<xref ref-type="bibr" rid="B29">1998: 138</xref>) argues that the possibility of a universal rule &#8216;would be entirely lost if one sought, as Hume did, to derive it from a frequent association of that which happens with that which precedes and a habit (thus a merely subjective necessity) of connecting representations arising from that association&#8217;. For Hume, there are only individual objects and individual experiences of these objects (<xref ref-type="bibr" rid="B26">1960: 20</xref>). It is only through the association of these individual experiences in his or her faculty of the imagination that the subject forms a representation of an abstract idea. There are only individual apples. By associating the representations of many individual apples, the faculty of the imagination produces the abstract idea &#8216;apple&#8217;. This abstract idea is itself individual, although its application in our reasoning works &#8216;as if it were universal&#8217; (<xref ref-type="bibr" rid="B26">Hume 1960: 20</xref>). For Kant, this explanation of the origins of abstract ideas is unacceptable because it undermines the principle of necessity behind universal rules and makes science (e.g., mathematics and theoretical physics) impossible. Since mathematics and theoretical physics are not just possible but actually exist, he contends, a series of transcendental (<italic>a priori</italic>) principles that guarantee the relation between individual objects and universal rules must be in place (<xref ref-type="bibr" rid="B29">Kant 1998: 147</xref>). The task of his critical (transcendental) philosophy is precisely the unearthing and systematisation of these transcendental principles (<xref ref-type="bibr" rid="B29">Kant 1998: 149</xref>).</p>
<p>One of these transcendental principles is that of &#8216;schematism&#8217;. In Kant&#8217;s philosophy, the role of the faculty of understanding is to identify the rules that govern nature. At the same time, the faculty of judgement is &#8216;the faculty of subsuming under rules, i.e., of determining whether something stands under a given rule or not&#8217; (<xref ref-type="bibr" rid="B29">Kant 1998: 268</xref>). For this subsumption of an object under a concept to be possible, &#8216;the representations of the former must be homogeneous with the latter&#8217; (<xref ref-type="bibr" rid="B29">Kant 1998: 271</xref>). The problem is that empirical intuitions and pure concepts of the understanding are heterogeneous. Hence, there must be a &#8216;third thing&#8217; that stands in homogeneity both with the rule as well as with the empirical appearance in order for judgement to be able to subsume the latter under the former. Kant (<xref ref-type="bibr" rid="B29">1998: 272</xref>) suggests that this &#8216;third thing&#8217; which makes all judgement possible is the &#8216;transcendental schema&#8217;. The schema is a product of the transcendental faculty of the imagination and as such must be distinguished from an image (<xref ref-type="bibr" rid="B29">Kant 1998: 273</xref>). An image is a product of the empirical (reproductive) faculty of imagination. Hence, an image is always particular. The schema, on the other hand, is the product of an <italic>a priori</italic> (productive) imagination. The schema is the condition of possibility that allows connecting an individual (empirical) image to a general concept of the understanding. Kant gives two examples of the schema: one belonging to a pure concept of understanding (a triangle), and one belonging to an empirical one (a dog). Kant (<xref ref-type="bibr" rid="B29">1998: 273</xref>) writes:</p>
<disp-quote>
<p>No image of a triangle would ever be adequate to the concept of it. For it would not attain the generality of the concept, which makes this valid for all triangles, right or acute, etc. &#8230; The schema of the triangle can never exist anywhere except in thought, and signifies a rule of the synthesis of the imagination.</p>
</disp-quote>
<p>And then:</p>
<disp-quote>
<p>The concept of a dog signifies a rule in accordance with which my imagination can specify the shape of a four-footed animal in general, without being restricted to any single particular shape that experience offers me or any possible image that I can exhibit <italic>in concreto</italic>. (<xref ref-type="bibr" rid="B29">Kant 1998: 273</xref>)</p>
</disp-quote>
<p>Both Hume (<xref ref-type="bibr" rid="B26">1960: 24</xref>) and Kant (<xref ref-type="bibr" rid="B29">1998: 273</xref>) considered the faculty of imagination as a crucial mechanism of the faculty of understanding and a &#8216;deep mystery of the human psyche&#8217; that can only be unravelled through rigorous analysis. The main difference is that while for Hume abstract ideas are simply the result of habit (a repetition of associations in the imagination), for Kant there must be a transcendental principle (schematism) that guarantees the subsumption of empirical objects under universal (necessary) concepts. Kant (<xref ref-type="bibr" rid="B29">1998: 146</xref>) considered that in Hume&#8217;s philosophy habit took &#8216;the appearance of necessity&#8217;. To safeguard the possibility of universal science, he suggested, empirical objects had to be organised by our faculty of judgement under <italic>a priori</italic> concepts. The schema is the transcendental principle that guarantees this operation.</p>
<p>If we now return to the topic of machine learning as a new form of (algorithmic) imagination, two different interpretations can be given. From a Humean perspective, the classification of an image under a given category (&#8216;apple&#8217; or &#8216;dog&#8217;) can be seen as the result of habit. This means that during the training process, the algorithm associates thousands of images in order to produce a statistical model, an abstract idea of a given object. This abstract idea is totally contingent. It does not correspond to any general rule or essence. This perspective matches what Matteo Pasquinelli and Vladan Joler refer to as the &#8216;brute force approach&#8217; of machine learning algorithms. According to these authors, machine learning &#8216;is not driven by exact formulas of mathematical analysis, but by algorithms of brute force approximation&#8217; (<xref ref-type="bibr" rid="B41">Pasquinelli and Joler 2020</xref>). The reason why these algorithms are so efficient is not because they distil an essence or abstract idea out of the training set, but simply because they &#8216;can approximate the shape of any function given enough layers of neurons and abundant computer resources&#8217; (<xref ref-type="bibr" rid="B41">Pasquinelli and Joler 2020</xref>). For Pasquinelli and Joler (<xref ref-type="bibr" rid="B41">2020</xref>), this is a key aspect for understanding the potentialities and the limitations of today&#8217;s algorithmic technologies (including their escalating carbon footprint).</p>
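This Humean reading can be made concrete with a minimal sketch, in which an &#8216;abstract idea&#8217; is nothing but a statistical aggregate of individual examples, and classification nothing but resemblance to that aggregate. All names and feature values below are hypothetical illustrations, not the systems Pasquinelli and Joler analyse.

```python
# A minimal sketch of the Humean reading of machine learning: an 'abstract
# idea' as a mere statistical aggregate of individual examples. All names
# and feature values are hypothetical illustrations.
import statistics

def build_prototype(examples):
    """Associate many individual observations into one 'abstract idea':
    here, simply the mean of each feature."""
    return [statistics.mean(feature) for feature in zip(*examples)]

def classify(item, prototypes):
    """Subsume a new item under whichever abstract idea it most resembles."""
    def distance(label):
        return sum((x - y) ** 2 for x, y in zip(item, prototypes[label]))
    return min(prototypes, key=distance)

# Toy 'training set': [roundness, redness] of individual fruits.
apples  = [[0.90, 0.80], [0.80, 0.90], [0.85, 0.70]]
oranges = [[0.95, 0.20], [0.90, 0.10], [0.92, 0.15]]
prototypes = {"apple": build_prototype(apples),
              "orange": build_prototype(oranges)}
```

On this account, a new image is subsumed by resemblance alone: the prototype corresponds to no rule or essence, only to the contingent sum of the examples that happened to be associated.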
<p>From a Kantian perspective, on the contrary, machine learning algorithms can be read as a form of technical schematism. The training process would thus consist of extracting from the data a schema for each given object (a fork is &#8216;a handle and 2&#8211;4 tines&#8217;, a dog is &#8216;a four-footed furry animal&#8217;, an apple is &#8216;a round, green or red fruit&#8217;, etc.). As McQuillan (<xref ref-type="bibr" rid="B35">2018: 256</xref>) puts it, the training process behind neural networks seems to &#8216;distil&#8217; a specific &#8216;set of features&#8217; from the training data, identifying hidden patterns that make object recognition possible. Until now, this pattern recognition capacity was thought of as taking place exclusively in the transcendental schema of human imagination. In computer vision technologies, however, this capacity seems to become automated. McQuillan (<xref ref-type="bibr" rid="B35">2018: 257</xref>) notes that these new pattern recognition technologies are so powerful that they make it possible to identify &#8216;schemata&#8217; even where human judgement would only see noise and randomness. This, he warns us, may create the impression that these patterns &#8216;pre-exist&#8217; an observer&#8217;s empirical experience, as some sort of Platonic idea: a true &#8216;mathematical order&#8217; concealed behind the world of &#8216;visible evidence&#8217; (<xref ref-type="bibr" rid="B35">McQuillan 2018: 261</xref>).<xref ref-type="fn" rid="n9">9</xref> To avoid this pitfall, a clear distinction between schema (in the Kantian sense) and idea (in the Platonic sense) must be established. While the Platonic idea refers to an essence that exists outside time and space (and is thus immutable), the schema refers to a set of rules that bridge a concept from the understanding with a particular spatiotemporal object. 
As Deleuze tells us in his 1978 course <italic>Sur Kant</italic>, the schema is a &#8216;rule of production&#8217;, that is, a rule that allows us to produce &#8216;in space and time&#8217; the &#8216;experience of an object conforming to a concept&#8217;:</p>
<disp-quote>
<p>Consider the two following judgements: &#8216;the straight line is a line equal in all its points&#8217;; there you have a logical or conceptual definition, you have the concept of the straight line. If you say &#8216;the straight line is black&#8217;, you have an encounter in experience; not all straight lines are black. &#8216;The straight line is the shortest path from one point to another&#8217;, it is a type of judgment, a quite extraordinary one according to Kant, and why? Because it cannot be reduced to either of the two extremes that we have just seen. What is the shortest path? Kant tells us that the shortest path is the rule of production of a straight line. If you want to produce a straight line, you take the shortest path &#8230; The shortest path is the rule of production of a straight line in space and time. (<xref ref-type="bibr" rid="B11">Deleuze 1978</xref>)</p>
</disp-quote>
<p>Returning to Mordvintsev and Tyka&#8217;s description of the training process involved in computer vision, it could be said that through this process the neural network extracts an algorithmic schema from the training datasets, that is, a rule of production of a given object that will later be utilised to identify that object in new images. Neural networks would hence produce not a Platonic idea (a mathematical order outside spatiotemporal empirical experience), but an algorithmic schema in the Kantian sense (a spatiotemporal rule of production). For Kant, schematism is an <italic>a priori</italic> principle of human reason that ensures the subsumption of spatiotemporal objects under the pure concepts of the understanding. Equivalently, algorithmic schematism would appear as an <italic>a priori</italic> faculty that allows subsuming individual images under a mathematical (formal) abstraction. From a Kantian perspective, the &#8216;brute force approximation&#8217; thesis would be insufficient because it would not be able to explain how the associations are being produced in the first place. In this sense, the optimisation equations behind the training process of neural networks operate as a form of <italic>a priori</italic> principles that make the association between individual images possible.</p>
<p>If we now return to the debate on the relation between imagination and technology sketched above, we could outline three different responses to the novelty and challenges posed by machine learning algorithms:</p>
<list list-type="order">
<list-item><p>First, we could identify a series of approaches that reproduce the difference between humans and technology, establishing a radical separation between human thought and machine learning algorithms. This is the most widespread and accepted approach among both computer engineers and cultural critics. It conceives machine learning algorithms as pure statistical approximation based on habit and association. As such, machine learning algorithms appear as essentially different from human thought (which, unlike algorithms, is based on the free play of the imagination). This view ensures a strict separation between the mechanism of algorithmic processes and the spontaneity of human imagination.<xref ref-type="fn" rid="n10">10</xref> Some examples of this approach are Pasquinelli and Joler&#8217;s (<xref ref-type="bibr" rid="B41">2020</xref>) account of artificial intelligence as &#8216;brute force approximation&#8217; and Finn&#8217;s (<xref ref-type="bibr" rid="B15">2017</xref>) appeal for an &#8216;augmented imagination&#8217; (a combination of the speed and scale of algorithmic processing and the creativity and spontaneity of human schematism). These approaches reproduce Adorno and Horkheimer&#8217;s (<xref ref-type="bibr" rid="B1">2002</xref>) distinction between a pure, transcendental schematism, and its technological standardisation. 
They also repeat Marx&#8217;s (<xref ref-type="bibr" rid="B33">1976: 283&#8211;84</xref>) definition of labour as a strictly human activity grounded on the singularity of human imagination: what distinguishes the &#8216;worst architect&#8217; from the &#8216;best of bees&#8217;, Marx tells us, is that the architect first defines in his or her imagination the object to be built.<xref ref-type="fn" rid="n11">11</xref> In all these approaches, imagination is the key aspect separating human enterprises from the merely instinctive existence of animals and the mechanical repetition of machines (including that of algorithms and neural networks).</p></list-item>
<list-item><p>Second, we could outline an approach which, following Hume&#8217;s perspective, posits an analogy between human imagination and machine learning algorithms. According to this approach, human knowledge is possible thanks to a process of habit (association) that takes place in human imagination in order to produce an abstract idea that will later function as if it were universal. Likewise, machine learning algorithms operate by approximating an immense amount of individual data to extract a pattern that can later be used to identify new elements. From this perspective, then, there would be no radical difference between the inner workings of human cognition and those of machine learning: they both operate as machines of &#8216;brute force approximation&#8217;. In Hume&#8217;s time, one might assume, it was inconceivable that a machine could execute tasks involving innate pattern-recognition abilities. Hence, Hume could define imagination as an &#8216;associating machine&#8217; without threatening the singularity of human understanding and human nature. Today, however, in light of the technical revolution brought about by neural networks, it would no longer be possible to establish a difference between these two machinic forms of pattern recognition. Both humans and machine learning algorithms appear as machines that relate to the world by approximating the sum of individual experiences in order to produce an abstract idea. Hui&#8217;s (<xref ref-type="bibr" rid="B25">2019</xref>) latest book, <italic>Recursivity and Contingency</italic>, could be placed under this second category.</p></list-item>
<list-item><p>Third, we could define both human imagination and neural networks as pattern recognition machines capable not only of subsuming the particular under universal rules (determinative judgement) but also of extracting universal rules from the particular (reflective judgement).<xref ref-type="fn" rid="n12">12</xref> Like human imagination, algorithmic imagination could be said to function by producing schemata that allow linking particular experiences with general rules. From this perspective, machine learning algorithms could be said to constitute a proper faculty of the (technical) imagination. Bernard Stiegler and Vil&#233;m Flusser advance radical theses that could help develop this third approach.<xref ref-type="fn" rid="n13">13</xref> As mentioned above, Stiegler (<xref ref-type="bibr" rid="B49">2011a: 53</xref>) contended that schematism is not an <italic>a priori</italic> principle, but a concrete result of the technical surfaces of inscription that shape empirical experience. For him, there could be no mental image (schema) without an objective, external surface of inscription. Hence, the internal faculty of schematism will be, in each specific context, the result of the available external memory supports. For Kant, the condition of possibility of an individual image is a transcendental schema. For Stiegler (<xref ref-type="bibr" rid="B49">2011a: 53</xref>), instead, the possibility of the schema as the bridge between an individual image and a general rule is always the external technical surface on which that individual image is inscribed. Alternatively, Flusser (<xref ref-type="bibr" rid="B18">2002: 115</xref>) defines the technical imagination as the automation of abstract calculation that allows &#8216;projecting&#8217; new possibilities into the future. This new technical imagination is not based on any sort of subjective interiority, but rather on the potentialities inscribed on the programme itself. 
The future, then, is no longer the realisation of a specific set of human values, but the execution of a &#8216;calculated game of chance&#8217; (<xref ref-type="bibr" rid="B18">Flusser 2002: 119</xref>).</p></list-item>
</list>
</sec>
<sec>
<title>Post-historical Imagination</title>
<p>In 2015, Google engineers Alexander Mordvintsev, Christopher Olah and Mike Tyka designed Google&#8217;s <italic>Deep Dream</italic> project, a piece of software that &#8216;inverted&#8217; Google&#8217;s object recognition algorithm in a Flusserian attempt to visualise what was taking place inside the programme&#8217;s black box (<xref ref-type="bibr" rid="B36">Mordvintsev, Olah and Tyka 2015</xref>). As mentioned above, neural networks contain hidden layers that conceal from the human programmer the abstract features and patterns that have been extracted from the training data. As McQuillan (<xref ref-type="bibr" rid="B35">2018: 257</xref>) puts it, &#8216;by definition, no human software engineer defines what these abstracted features are, and even if the contents of the hidden layer are examined, it is not necessarily possible to translate that back into comprehensible reasoning&#8217;. Once again, we find a parallel between neural networks and imagination. In both cases, their internal operation remains a hidden mystery: a black box of the human psyche in the one case, and of the inner workings of the programme in the other. Hence, Google&#8217;s <italic>Deep Dream</italic> project can be seen as an effort to open this black box and try to visualise the internal processes that make machinic schematism possible. Steyerl describes this project as &#8216;a feat of genius&#8217; that</p>
<disp-quote>
<p>manages to visualise the unconscious of prosumer networks: images surveilling users, constantly registering their eye movements, behaviour, preferences &#8230; Walter Benjamin&#8217;s &#8216;optical unconscious&#8217; has been upgraded to the unconscious of computational image divination. (<xref ref-type="bibr" rid="B47">2017: 56&#8211;57</xref>)</p>
</disp-quote>
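The &#8216;inversion&#8217; at stake here can be caricatured in a few lines: instead of adjusting the model so that it fits an image, one adjusts the image until a learned feature detector fires strongly. The one-feature &#8216;network&#8217; below is a hypothetical toy, not Google&#8217;s implementation, but it follows the same gradient-ascent-on-the-input logic.

```python
# A toy caricature of Deep Dream's 'inversion': adjust the image, not the
# model, so that a learned feature detector responds more strongly. The
# one-feature 'network' is a hypothetical illustration.

def activation(image, feature):
    """How strongly the (toy) learned feature responds to the image:
    highest when the image matches the feature exactly."""
    return -sum((p - f) ** 2 for p, f in zip(image, feature))

def dream_step(image, feature, rate=0.1):
    """Gradient ascent on the input: nudge each pixel in the direction
    that increases the activation."""
    return [p + rate * 2 * (f - p) for p, f in zip(image, feature)]

feature = [0.0, 1.0, 0.5]   # what the hidden layer has 'learned'
image   = [0.9, 0.1, 0.2]   # the starting photograph

for _ in range(50):
    image = dream_step(image, feature)
# After many steps the image has drifted toward the pattern the network
# expects to see: the hidden feature is projected onto the input.
```

In this sense the procedure visualises the hidden layer by making the input itself converge on what the network has abstracted from its training data.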
<p>That same year, Chilean visual artist Felipe Rivas San Mart&#237;n employed Google&#8217;s <italic>Deep Dream</italic> software to create a series of 17 images titled <italic>El sue&#241;o neoliberal</italic> [<italic>The Neoliberal Dream</italic>] (Figure <xref ref-type="fig" rid="F1">1</xref>).</p>
<fig id="F1">
<label>Figure 1</label>
<caption>
<p>Felipe Rivas San Mart&#237;n, El sue&#241;o neoliberal, 2015.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="ahip-2-1-1016-g1.jpg"/>
</fig>
<p>He began with a well-known photograph of the 11 September 1973 bombing of <italic>La Moneda</italic>, Chile&#8217;s presidential palace (Figure <xref ref-type="fig" rid="F2">2</xref>). This picture has become a symbol of Pinochet&#8217;s military coup against Salvador Allende&#8217;s government. It also symbolises the end of the country&#8217;s socialist project, interrupted by an orchestration of reactionary forces comprising Chile&#8217;s economic elite and the United States State Department. This coup led to 17 years of a bloody dictatorship and to the establishment of a true neoliberal experiment in Chilean economic, political and social relations. Felipe Rivas San Mart&#237;n fed this photograph into the <italic>Deep Dream</italic> algorithm and the output, besides adding colour to the original black and white image, highlighted some new features: dogs, pagodas, buildings, cars, etc. (Figure <xref ref-type="fig" rid="F3">3</xref>).</p>
<fig id="F2">
<label>Figure 2</label>
<caption>
<p>Bombing of La Moneda, 11 September 1973, Chile.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="ahip-2-1-1016-g2.jpg"/>
</fig>
<fig id="F3">
<label>Figure 3</label>
<caption>
<p>Felipe Rivas San Mart&#237;n, El sue&#241;o neoliberal, 2015.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="ahip-2-1-1016-g3.jpg"/>
</fig>
<p>The artist then fed the new image back into the algorithm, repeating this procedure until he had a total of 17 images, one for each year of Pinochet&#8217;s dictatorship. Each time he fed the image into the algorithm, the features that had been highlighted became intensified through a process of positive feedback (<xref ref-type="bibr" rid="B42">Rivas San Mart&#237;n 2019: 257</xref>). The seventeenth image, then, offers a defined and detailed version of the features in that first image produced by the algorithm, creating a sharp contrast with the original photograph of the bombing of <italic>La Moneda</italic> (Figure <xref ref-type="fig" rid="F4">4</xref>). This sharp contrast between the first and the last image in Felipe Rivas San Mart&#237;n&#8217;s artwork makes it possible to illustrate some of the key transformations of the relationship between imagination and politics in a context in which the automation of schematism has become a technical possibility. Most significantly, this artwork unveils a tension between two ages of the relation between imagination, politics and historical time.</p>
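The positive-feedback structure of this procedure can be sketched schematically: the output is fed back in, and whatever the model faintly &#8216;saw&#8217; becomes ever more pronounced with each pass. The `amplify()` rule below is a hypothetical stand-in for Deep Dream itself, and the feature names and strengths are invented for illustration.

```python
# A schematic sketch of the feedback procedure: each pass nonlinearly
# boosts what the model already detects, so faint features snowball.
# amplify() is a hypothetical stand-in for Deep Dream itself.

def amplify(features, gain=1.3):
    """One pass: boost every detected feature nonlinearly, then
    renormalise, so already-dominant features gain ground."""
    boosted = {name: strength ** gain for name, strength in features.items()}
    total = sum(boosted.values())
    return {name: strength / total for name, strength in boosted.items()}

# Faint detections in the original photograph (hypothetical values).
image = {"dog": 0.30, "pagoda": 0.25, "building": 0.25, "smoke": 0.20}

for _ in range(17):   # one pass per image in the series
    image = amplify(image)
# After 17 passes the strongest initial detection has all but taken over
# the image, just as the hallucinated features come to dominate the final
# picture in the series.
```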
<fig id="F4">
<label>Figure 4</label>
<caption>
<p>Felipe Rivas San Mart&#237;n, El sue&#241;o neoliberal, 2015.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="ahip-2-1-1016-g4.jpg"/>
</fig>
<p>First, we can identify an age of history, grounded on human imagination and defined by notions such as progress and emancipation. This was an age in which the present was still open to the future, in which human imagination still had the potential (and the responsibility) to delineate new forms of political, economic and social arrangements. Salvador Allende&#8217;s socialist project belongs to that age in which history was the realisation of a human ideal. As Flusser (<xref ref-type="bibr" rid="B18">2002: 118</xref>) puts it, history was a humanist and anthropocentric project. As such, it embraced an attitude of &#8216;engagement in world changes&#8217;, of exploiting a natural world &#8216;devoid of value&#8217; in order to achieve the &#8216;realisation of human values&#8217; (<xref ref-type="bibr" rid="B18">Flusser 2002: 118</xref>). History then is a strictly human affair. It entails distinguishing historical time (impregnated with meaning and value) from a natural or astronomical time (as a meaningless movement of bodies).<xref ref-type="fn" rid="n14">14</xref> From this perspective, imagination is that peculiar faculty that allows the human animal to exit the natural realm of astronomical time and enter the meaningful and symbolic realm of historical time. The photograph of the bombing of <italic>La Moneda</italic> chosen by Felipe Rivas San Mart&#237;n belongs to this context. More precisely, it could be argued that the bombing of <italic>La Moneda</italic> marks the interruption of historical time, an interruption that made way for 17 years in which a new relation to time was forced upon Chile&#8217;s social, political and economic relations: a post-historical time.</p>
<p>On the contrary, the images produced by Google&#8217;s <italic>Deep Dream</italic> in Rivas&#8217; artwork represent that post-historical time: an age that has been brought about precisely through the interruption of Allende&#8217;s historical project and the subsequent development of a neoliberal model. Chile&#8217;s neoliberal landscape has effectively replaced an age of history (in which imagination and politics were deeply interconnected) with a post-historical age in which politics has been reduced to the algorithmic administration of data (<xref ref-type="bibr" rid="B43">Rouvroy and Berns 2013</xref>). Writing in 1985, Flusser stated: &#8216;According to the suggested model of cultural history, we are about to leave the one-dimensionality of history for a new, dimensionless level, one to be called, for lack of a more positive designation, post-history&#8217; (<xref ref-type="bibr" rid="B19">2011: 15</xref>). Furthermore, he argued that there is a strong link between the rise of a technical imagination and the transition from history to post-history (<xref ref-type="bibr" rid="B19">Flusser 2011: 57</xref>). In historical time, human imagination was responsible for mediating between the present and the unpredictability of the future, establishing a sharp distinction between a nature void of meaning and the historical realisation of human (anthropocentric) values. In post-historical time, instead, the anticipation of the future appears as a mere &#8216;calculated game of chance&#8217; (<xref ref-type="bibr" rid="B18">Flusser 2002: 119</xref>). This means that according to the &#8216;post-historical world picture&#8217; suggested by Flusser, the future is reduced to a &#8216;field of possibilities inscribed in a program&#8217; (<xref ref-type="bibr" rid="B18">2002: 119</xref>). Furthermore, the passage from history to post-history appears as a crucial aspect of the current crisis of (political) imagination. 
For Flusser the future is a specific experience of time that belongs to the age of history. In the post-historical age, this experience of the future is replaced by predictability. Hence, in post-history the future is no longer imagined. It is calculated. Human imagination then loses its privileged ground for outlining future political projects:</p>
<disp-quote>
<p>If society&#8217;s behaviour is progressively experienced and interpreted as absurdly programmed by programmes without aim and purpose, the problem of freedom, which is the problem of politics, becomes inconceivable. From a programmatic perspective, politics, and therefore history, comes to an end. (<xref ref-type="bibr" rid="B20">Flusser 2013: 24</xref>)</p>
</disp-quote>
<p>In this context, McQuillan (<xref ref-type="bibr" rid="B35">2018</xref>) and Mackenzie (<xref ref-type="bibr" rid="B32">2015</xref>) have both referred to the &#8216;performativity&#8217; of algorithmic prediction. For these authors, predictive algorithms do not simply anticipate a &#8216;natural behaviour&#8217;, but in many cases they themselves &#8216;change the people&#8217;s behaviour in ways that the model did not learn about when it was trained, leading to a recursive reinforcement as actual social practice&#8217; (<xref ref-type="bibr" rid="B35">McQuillan 2018: 258</xref>).</p>
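This recursive reinforcement can be rendered as a toy simulation: the model&#8217;s forecast nudges behaviour, the nudged behaviour becomes new training data, and the loop closes on itself. All coefficients below are hypothetical, chosen only to make the feedback visible; they do not model any system discussed by McQuillan or Mackenzie.

```python
# A toy simulation of 'performative' prediction: the forecast shapes the
# behaviour it claims merely to anticipate. Coefficients are hypothetical.

behaviour = 0.50   # the population's actual tendency
model     = 0.60   # the model's (slightly biased) initial estimate

for _ in range(30):
    # The prediction is acted upon: recommendations pull behaviour toward it.
    behaviour += 0.2 * (model - behaviour)
    # The model retrains on behaviour it has itself helped to shape.
    model = 0.5 * model + 0.5 * behaviour
# The forecast has not converged on reality; rather, reality has converged
# on a value displaced toward the model's initial bias.
```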
<p>Rivas San Mart&#237;n&#8217;s appropriation of Google&#8217;s <italic>Deep Dream</italic> illustrates the difficulties of outlining a political project in a context in which human imagination is being replaced by algorithmic calculability. What becomes clear is that a critique of this technology can no longer come from a humanist standpoint in which imagination appears as a strictly human faculty that ensures the realisation of an anthropocentric political project. Hence, the greatest challenge of today&#8217;s political imagination is no longer related to key issues of modern political thought (agency, privacy, intentionality, etc.) but requires a new (post-anthropocentric) understanding of the relation between humans and machines.</p>
</sec>
<sec>
<title>Conclusion</title>
<p>This article began by referring to Fisher and Berardi&#8217;s theses regarding the present crisis of political imagination. We have contended, using mainly Stiegler and Flusser&#8217;s insights, that these diagnoses were grounded on an anthropocentric conception of imagination: a unique human faculty that allows projecting the new out of the given. In the emerging post-historical context governed by apparatuses, the future as a political (human) promise of emancipation is threatened by the inhuman calculation of probabilities enacted by a new faculty of algorithmic imagination. Faced with this, we have outlined three possible responses. Response one calls for a humanist political project that safeguards the singularity of human imagination against the inhuman calculation of algorithmic machines. Responses two and three go beyond the opposition between humans and technology in order to argue that either (2) humans operate like algorithms of &#8216;brute force approximation&#8217;, or that (3) neural networks operate as a technical faculty of the imagination capable of &#8216;pattern recognition&#8217; and &#8216;reflective judgement&#8217;.</p>
<p>From the standpoint of the second and third responses, the faculty of the imagination appears not as a strictly human affair, separating us from animals and machines, but rather as a transversal capability through which an organism regulates its permanent exchange with an outside (an environment in the wider sense of the term).<xref ref-type="fn" rid="n15">15</xref> We believe that as long as we continue to assume an anthropocentric concept of imagination (response one), technological automation will continue to appear antagonistic to autonomy (as the core normative value that grounds the humanist definition of political emancipation). On the contrary, if we assume the perspective of responses two or three, we could eventually overcome the opposition between technics and politics and, with it, overcome the current crises of the imagination. Put differently, a new understanding of the relation between human and machinic imaginations is needed to offer a more sustainable, non-anthropocentric idea of the future beyond the current stalemate of economic, social and ecological crises.</p>
</sec>
</body>
<back>
<fn-group>
<fn id="n1"><p>For a thorough critique of Berardi and the alleged &#8216;crisis of the future&#8217;, see Osborne (<xref ref-type="bibr" rid="B39">2015</xref>).</p></fn>
<fn id="n2"><p>In Modern Western philosophy, imagination is connected to a humanist definition of the human as that particular animal caught between the finitude of material existence and the infinitude of reason. In the specific case of Kantian philosophy, imagination appears as that distinctive human faculty that allows bridging the particular to the universal, the realm of need to the realm of freedom (<xref ref-type="bibr" rid="B34">Matherne 2016: 66</xref>).</p></fn>
<fn id="n3"><p>For an analysis of the political dimension of the faculty of imagination as a <italic>sensus communis</italic>, see Hannah Arendt, <italic>Lectures on Kant&#8217;s Political Philosophy</italic> (<xref ref-type="bibr" rid="B4">1992: 71</xref>). See also George Didi-Huberman (<xref ref-type="bibr" rid="B12">2019</xref>).</p></fn>
<fn id="n4"><p>Haraway (<xref ref-type="bibr" rid="B24">2016</xref>) is probably the author who has contributed the most to popularising the political dimension of this idea of the human as a hybrid being.</p></fn>
<fn id="n5"><p>A similar argument is put forth by Sloterdijk (<xref ref-type="bibr" rid="B45">2017: 235</xref>) for whom the source of &#8216;anti-technological ressentiments&#8217; is the &#8216;double-morality [&#8230;] of thinking pre-technologically and living technologically&#8217;. Furthermore, current technical developments are forcing us into a paradoxical situation in which &#8216;classical humanism [&#8230;] is practically exhausted&#8217; and where &#8216;one must become a cyberneticist to be able to remain a humanist&#8217; (<xref ref-type="bibr" rid="B45">Sloterdijk 2017: 236</xref>).</p></fn>
<fn id="n6"><p>For a detailed introduction to machine learning and neural networks see Kurenkov (<xref ref-type="bibr" rid="B31">2015</xref>) and Greenfield (<xref ref-type="bibr" rid="B23">2017</xref>).</p></fn>
<fn id="n7"><p>For an introduction to the specific topic of computer vision and object recognition algorithms, see Crawford and Paglen (<xref ref-type="bibr" rid="B10">2019</xref>).</p></fn>
<fn id="n8"><p>The issue of algorithmic bias has been one of the most explored topics within critical algorithmic studies. Some key references addressing this issue are Angwin et al. (<xref ref-type="bibr" rid="B3">2016</xref>); O&#8217;Neil (<xref ref-type="bibr" rid="B38">2016</xref>); Buolamwini and Gebru (<xref ref-type="bibr" rid="B8">2018</xref>); and Noble (<xref ref-type="bibr" rid="B37">2018</xref>).</p></fn>
<fn id="n9"><p>One example of this &#8216;neo-platonism&#8217; can be found in Anderson&#8217;s (<xref ref-type="bibr" rid="B2">2008</xref>) piece in <italic>Wired Magazine</italic> &#8216;The End of Theory: The Data Deluge Makes the Scientific Method Obsolete&#8217;.</p></fn>
<fn id="n10"><p>This radical distinction between the mechanisms of computer processes and the spontaneity of human thought is also found in Searle&#8217;s (<xref ref-type="bibr" rid="B44">1980</xref>) critique of the Turing Test and Dreyfus&#8217; (<xref ref-type="bibr" rid="B13">1999</xref>) critique of artificial reason.</p></fn>
<fn id="n11"><p>For a critical analysis of the relation between labour and imagination in machine learning algorithms from a Marxist perspective, see Dyer-Witheford, Kjosen and Steinhoff (<xref ref-type="bibr" rid="B14">2019: 120&#8211;124</xref>).</p></fn>
<fn id="n12"><p>For the distinction between determinative and reflective judgment, see Kant (<xref ref-type="bibr" rid="B28">1987: 18&#8211;19</xref>).</p></fn>
<fn id="n13"><p>Beyond the work of Stiegler and Flusser, we could also mention here the thinking of Simondon (<xref ref-type="bibr" rid="B46">2018</xref>) and Kittler (<xref ref-type="bibr" rid="B30">1997</xref>). Simondon (<xref ref-type="bibr" rid="B46">2018: 172</xref>) contends that Kant&#8217;s <italic>Critique of Judgement</italic> represents the starting point for a new cybernetic understanding of reality. This is so because of the category of &#8216;reflective judgement&#8217;, which begins to explore the relation between &#8216;operations and structures&#8217; from a processual perspective (<xref ref-type="bibr" rid="B46">Simondon 2018: 172</xref>). Similarly, Kittler (<xref ref-type="bibr" rid="B30">1997: 130</xref>) conceives Kant&#8217;s treatment of &#8216;reflective judgement&#8217; as a mechanism of pattern recognition in the &#8216;second degree&#8217;, that is, as a mechanism aimed at optimising the &#8216;mechanism of recognition in general&#8217;. Given his historical context, Kant was incapable of imagining that the human ability for reflective judgment could be transferred to a machine, hence safeguarding the singularity of human imagination (<xref ref-type="bibr" rid="B30">Kittler 1997: 131</xref>). After the invention of the Turing machine and the rise of cybernetic theory, however, it becomes possible to imagine a pattern recognition machine capable of perceiving, remembering and processing data automatically (<xref ref-type="bibr" rid="B30">Kittler 1997: 135</xref>). With the recent development of neural networks and machine learning algorithms, this pattern recognition machine could be said to reach new heights, executing concrete processes of &#8216;reflective judgment&#8217; in which general rules are effectively induced from particular data (see <xref ref-type="bibr" rid="B23">Greenfield 2017: 220&#8211;222</xref>; and <xref ref-type="bibr" rid="B14">Dyer-Witheford, Kjosen and Steinhoff 2019: 120&#8211;124</xref>).</p></fn>
<fn id="n14"><p>For a discussion on the difference between historical and astronomical time from a humanist and anthropocentric perspective, see Panofsky (<xref ref-type="bibr" rid="B40">2004</xref>). For a critical and posthumanist reflection on this distinction, see Celis (<xref ref-type="bibr" rid="B9">2020</xref>).</p></fn>
<fn id="n15"><p>For an analysis of the notions of imagination and creativity as the informational exchange between an organism and the environment, see Zylinska (<xref ref-type="bibr" rid="B51">2020: 67&#8211;68</xref>).</p></fn>
</fn-group>
<sec>
<title>Competing Interests</title>
<p>The authors have no competing interests to declare.</p>
</sec>
<ref-list>
<ref id="B1"><label>1</label><mixed-citation publication-type="book"><string-name><surname>Adorno</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Horkheimer</surname>, <given-names>M</given-names></string-name>. (<year>2002</year>). <source>Dialectic of Enlightenment: Philosophical Fragments</source>. <publisher-loc>Stanford, CA</publisher-loc>: <publisher-name>Stanford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B2"><label>2</label><mixed-citation publication-type="webpage"><string-name><surname>Anderson</surname>, <given-names>C.</given-names></string-name> (<year>2008</year>). <article-title>The End of Theory: The Data Deluge Makes the Scientific Method Obsolete</article-title>. <source>Wired</source>. Available at: <uri>https://www.wired.com/2008/06/pb-theory/</uri></mixed-citation></ref>
<ref id="B3"><label>3</label><mixed-citation publication-type="webpage"><string-name><surname>Angwin</surname>, <given-names>J.</given-names></string-name>, et al. (<year>2016</year>). <article-title>Machine Bias: There&#8217;s software used across the country to predict future criminals. And it&#8217;s biased against blacks</article-title>. <source><ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://propublica.org/">ProPublica.Org</ext-link></source>. Available at: <uri>https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing</uri></mixed-citation></ref>
<ref id="B4"><label>4</label><mixed-citation publication-type="book"><string-name><surname>Arendt</surname>, <given-names>H.</given-names></string-name> (<year>1992</year>). <source>Lectures on Kant&#8217;s Political Philosophy</source>. <publisher-loc>Chicago, IL</publisher-loc>: <publisher-name>The University of Chicago Press</publisher-name>.</mixed-citation></ref>
<ref id="B5"><label>5</label><mixed-citation publication-type="book"><string-name><surname>Berardi</surname>, <given-names>F.</given-names></string-name> (<year>2011</year>). <source>After the Future</source>. <publisher-loc>Edinburgh</publisher-loc>: <publisher-name>AK Press</publisher-name>.</mixed-citation></ref>
<ref id="B6"><label>6</label><mixed-citation publication-type="book"><string-name><surname>Berardi</surname>, <given-names>F.</given-names></string-name> (<year>2014</year>). <source>And: Phenomenology of the End</source>. <publisher-loc>Helsinki</publisher-loc>: <publisher-name>Aalto Books</publisher-name>.</mixed-citation></ref>
<ref id="B7"><label>7</label><mixed-citation publication-type="book"><string-name><surname>Bradley</surname>, <given-names>A.</given-names></string-name> (<year>2011</year>). <source>Originary Technicity: The Theory of Technology from Marx to Derrida</source>. <publisher-loc>Basingstoke</publisher-loc>: <publisher-name>Palgrave Macmillan</publisher-name>. DOI: <pub-id pub-id-type="doi">10.1007/978-0-230-30765-0</pub-id></mixed-citation></ref>
<ref id="B8"><label>8</label><mixed-citation publication-type="confproc"><string-name><surname>Buolamwini</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Gebru</surname>, <given-names>T.</given-names></string-name> (<year>2018</year>). <source>Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification</source>. <conf-name>Paper presented at the 1st Conference on Fairness, Accountability and Transparency</conf-name>, <conf-loc>New York</conf-loc>.</mixed-citation></ref>
<ref id="B9"><label>9</label><mixed-citation publication-type="journal"><string-name><surname>Celis</surname>, <given-names>C.</given-names></string-name> (<year>2020</year>). <article-title>La Allagm&#225;tica En Cuanto Disciplina Poshumanista: Nuevas Metodolog&#237;as Para El Estudio De Las Im&#225;genes En El Contexto De Las M&#225;quinas De Visi&#243;n Algor&#237;tmica</article-title>. <source>Revista 180</source>, <issue>46</issue>, <fpage>26</fpage>&#8211;<lpage>37</lpage>.</mixed-citation></ref>
<ref id="B10"><label>10</label><mixed-citation publication-type="webpage"><string-name><surname>Crawford</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Paglen</surname>, <given-names>T.</given-names></string-name> (<year>2019</year>). <article-title>Excavating AI: The Politics of Training Sets for Machine Learning</article-title>. Available at: <uri>https://www.excavating.ai/</uri> (Retrieved May 3, 2020). DOI: <pub-id pub-id-type="doi">10.1007/s00146-021-01162-8</pub-id></mixed-citation></ref>
<ref id="B11"><label>11</label><mixed-citation publication-type="webpage"><string-name><surname>Deleuze</surname>, <given-names>G.</given-names></string-name> (<year>1978</year>). <article-title>Sur Kant (translated by Melissa McMahon)</article-title>. Available at: <uri>https://www.webdeleuze.com/textes/65</uri></mixed-citation></ref>
<ref id="B12"><label>12</label><mixed-citation publication-type="webpage"><string-name><surname>Didi-Huberman</surname>, <given-names>G.</given-names></string-name> (<year>2019</year>). <article-title>L&#8217;imagination, notre Commune</article-title>. Available at: <uri>https://2019.lhistoireavenir.eu/evt/191/</uri></mixed-citation></ref>
<ref id="B13"><label>13</label><mixed-citation publication-type="book"><string-name><surname>Dreyfus</surname>, <given-names>H.</given-names></string-name> (<year>1999</year>). <source>What Computers Still Can&#8217;t Do: A Critique of Artificial Reason</source>. <publisher-loc>London</publisher-loc>: <publisher-name>The MIT Press</publisher-name>.</mixed-citation></ref>
<ref id="B14"><label>14</label><mixed-citation publication-type="book"><string-name><surname>Dyer-Witheford</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Mikkola Kjosen</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Steinhoff</surname>, <given-names>J.</given-names></string-name> (<year>2019</year>). <source>Inhuman Power</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Pluto Press</publisher-name>. DOI: <pub-id pub-id-type="doi">10.2307/j.ctvj4sxc6</pub-id></mixed-citation></ref>
<ref id="B15"><label>15</label><mixed-citation publication-type="book"><string-name><surname>Finn</surname>, <given-names>E.</given-names></string-name> (<year>2017</year>). <source>What Algorithms Want: Imagination in the Age of Computing</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>. DOI: <pub-id pub-id-type="doi">10.7551/mitpress/9780262035927.001.0001</pub-id></mixed-citation></ref>
<ref id="B16"><label>16</label><mixed-citation publication-type="book"><string-name><surname>Fisher</surname>, <given-names>M.</given-names></string-name> (<year>2009</year>). <source>Capitalist Realism: Is There No Alternative?</source> <publisher-loc>London</publisher-loc>: <publisher-name>Zero Books</publisher-name>.</mixed-citation></ref>
<ref id="B17"><label>17</label><mixed-citation publication-type="book"><string-name><surname>Flusser</surname>, <given-names>V.</given-names></string-name> (<year>2000</year>). <source>Towards a Philosophy of Photography</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Reaktion Books</publisher-name>.</mixed-citation></ref>
<ref id="B18"><label>18</label><mixed-citation publication-type="book"><string-name><surname>Flusser</surname>, <given-names>V.</given-names></string-name> (<year>2002</year>). <source>Writings</source>. <publisher-loc>Minneapolis, MN</publisher-loc>: <publisher-name>University of Minnesota Press</publisher-name>.</mixed-citation></ref>
<ref id="B19"><label>19</label><mixed-citation publication-type="book"><string-name><surname>Flusser</surname>, <given-names>V.</given-names></string-name> (<year>2011</year>). <source>Into the Universe of Technical Images</source>. <publisher-loc>Minneapolis, MN</publisher-loc>: <publisher-name>University of Minnesota Press</publisher-name>. DOI: <pub-id pub-id-type="doi">10.5749/minnesota/9780816670208.001.0001</pub-id></mixed-citation></ref>
<ref id="B20"><label>20</label><mixed-citation publication-type="book"><string-name><surname>Flusser</surname>, <given-names>V.</given-names></string-name> (<year>2013</year>). <source>Post-History</source>. <publisher-loc>Minneapolis, MN</publisher-loc>: <publisher-name>Univocal</publisher-name>.</mixed-citation></ref>
<ref id="B21"><label>21</label><mixed-citation publication-type="book"><string-name><surname>Fry</surname>, <given-names>H.</given-names></string-name> (<year>2018</year>). <source>Hello World: How to Be Human in the Age of the Machine</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>W. W. Norton &amp; Company</publisher-name>.</mixed-citation></ref>
<ref id="B22"><label>22</label><mixed-citation publication-type="journal"><string-name><surname>Fukuyama</surname>, <given-names>F.</given-names></string-name> (<year>1989</year>). <article-title>The End of History?</article-title> <source>The National Interest</source>, <volume>16</volume>, <fpage>3</fpage>&#8211;<lpage>18</lpage>.</mixed-citation></ref>
<ref id="B23"><label>23</label><mixed-citation publication-type="book"><string-name><surname>Greenfield</surname>, <given-names>A.</given-names></string-name> (<year>2017</year>). <source>Radical Technologies: The Design of Everyday Life</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Verso</publisher-name>.</mixed-citation></ref>
<ref id="B24"><label>24</label><mixed-citation publication-type="book"><string-name><surname>Haraway</surname>, <given-names>D. J.</given-names></string-name> (<year>2016</year>). <source>Staying with the Trouble: Making Kin in the Chthulucene</source>. <publisher-loc>Durham, NC</publisher-loc>: <publisher-name>Duke University Press</publisher-name>. DOI: <pub-id pub-id-type="doi">10.2307/j.ctv11cw25q</pub-id></mixed-citation></ref>
<ref id="B25"><label>25</label><mixed-citation publication-type="book"><string-name><surname>Hui</surname>, <given-names>Y.</given-names></string-name> (<year>2019</year>). <source>Recursivity and Contingency</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Rowman &amp; Littlefield</publisher-name>.</mixed-citation></ref>
<ref id="B26"><label>26</label><mixed-citation publication-type="book"><string-name><surname>Hume</surname>, <given-names>D.</given-names></string-name> (<year>1960</year>). <source>A Treatise of Human Nature</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Clarendon Press</publisher-name>.</mixed-citation></ref>
<ref id="B27"><label>27</label><mixed-citation publication-type="book"><string-name><surname>Jameson</surname>, <given-names>F.</given-names></string-name> (<year>2005</year>). <source>Archaeologies of the Future</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Verso</publisher-name>.</mixed-citation></ref>
<ref id="B28"><label>28</label><mixed-citation publication-type="book"><string-name><surname>Kant</surname>, <given-names>I.</given-names></string-name> (<year>1987</year>). <source>Critique of Judgement</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Hackett Publishing Company</publisher-name>.</mixed-citation></ref>
<ref id="B29"><label>29</label><mixed-citation publication-type="book"><string-name><surname>Kant</surname>, <given-names>I.</given-names></string-name> (<year>1998</year>). <source>Critique of Pure Reason</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. DOI: <pub-id pub-id-type="doi">10.1017/CBO9780511804649</pub-id></mixed-citation></ref>
<ref id="B30"><label>30</label><mixed-citation publication-type="book"><string-name><surname>Kittler</surname>, <given-names>F.</given-names></string-name> (<year>1997</year>). <chapter-title>The World of the Symbolic &#8211; A World of the Machine</chapter-title>. In <string-name><given-names>J.</given-names> <surname>Johnston</surname></string-name> (Ed.). <source>Literature, Media, Information Systems</source> (pp. <fpage>130</fpage>&#8211;<lpage>146</lpage>). <publisher-loc>Amsterdam</publisher-loc>: <publisher-name>G+B Arts</publisher-name>.</mixed-citation></ref>
<ref id="B31"><label>31</label><mixed-citation publication-type="webpage"><string-name><surname>Kurenkov</surname>, <given-names>A.</given-names></string-name> (<year>2015</year>). <article-title>A &#8216;Brief&#8217; History of Neural Nets and Deep Learning</article-title>. Available at: <uri>http://www.andreykurenkov.com/writing/ai/a-brief-history-of-neural-nets-and-deep-learning/</uri></mixed-citation></ref>
<ref id="B32"><label>32</label><mixed-citation publication-type="journal"><string-name><surname>Mackenzie</surname>, <given-names>A.</given-names></string-name> (<year>2015</year>). <article-title>The Production of Prediction: What does Machine Learning Want?</article-title> <source>European Journal of Cultural Studies</source>, <volume>18</volume>(<issue>4&#8211;5</issue>), <fpage>429</fpage>&#8211;<lpage>445</lpage>. DOI: <pub-id pub-id-type="doi">10.1177/1367549415577384</pub-id></mixed-citation></ref>
<ref id="B33"><label>33</label><mixed-citation publication-type="book"><string-name><surname>Marx</surname>, <given-names>K.</given-names></string-name> (<year>1976</year>). <source>Capital: A Critique of Political Economy</source>, <volume>1</volume>. <publisher-loc>Harmondsworth</publisher-loc>: <publisher-name>Penguin/New Left Review</publisher-name>.</mixed-citation></ref>
<ref id="B34"><label>34</label><mixed-citation publication-type="book"><string-name><surname>Matherne</surname>, <given-names>S.</given-names></string-name> (<year>2016</year>). <chapter-title>Kant&#8217;s Theory of the Imagination</chapter-title>. In <string-name><given-names>A.</given-names> <surname>Kind</surname></string-name> (Ed.), <source>The Routledge Handbook of Philosophy of Imagination</source> (pp. <fpage>55</fpage>&#8211;<lpage>68</lpage>). <publisher-loc>London</publisher-loc>: <publisher-name>Routledge</publisher-name>.</mixed-citation></ref>
<ref id="B35"><label>35</label><mixed-citation publication-type="journal"><string-name><surname>McQuillan</surname>, <given-names>D.</given-names></string-name> (<year>2018</year>). <article-title>Data Science as Machinic Neoplatonism</article-title>. <source>Philosophy &amp; Technology</source>, <volume>31</volume>, <fpage>253</fpage>&#8211;<lpage>272</lpage>. DOI: <pub-id pub-id-type="doi">10.1007/s13347-017-0273-3</pub-id></mixed-citation></ref>
<ref id="B36"><label>36</label><mixed-citation publication-type="webpage"><string-name><surname>Mordvintsev</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Olah</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Tyka</surname>, <given-names>M.</given-names></string-name> (<year>2015</year>). <article-title>Inceptionism: Going Deeper into Neural Networks</article-title>. <source>Google AI Blog</source>. Available at: <uri>https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html</uri></mixed-citation></ref>
<ref id="B37"><label>37</label><mixed-citation publication-type="book"><string-name><surname>Noble</surname>, <given-names>S. U.</given-names></string-name> (<year>2018</year>). <source>Algorithms of Oppression: How Search Engines Reinforce Racism</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>New York University Press</publisher-name>. DOI: <pub-id pub-id-type="doi">10.2307/j.ctt1pwt9w5</pub-id></mixed-citation></ref>
<ref id="B38"><label>38</label><mixed-citation publication-type="book"><string-name><surname>O&#8217;Neil</surname>, <given-names>C.</given-names></string-name> (<year>2016</year>). <source>Weapons of Math Destruction</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Crown</publisher-name>.</mixed-citation></ref>
<ref id="B39"><label>39</label><mixed-citation publication-type="journal"><string-name><surname>Osborne</surname>, <given-names>P.</given-names></string-name> (<year>2015</year>). <article-title>Future Present: Lite, Dark, and Missing</article-title>. <source>Radical Philosophy</source>, <volume>191</volume>, <fpage>39</fpage>&#8211;<lpage>46</lpage>.</mixed-citation></ref>
<ref id="B40"><label>40</label><mixed-citation publication-type="journal"><string-name><surname>Panofsky</surname>, <given-names>E.</given-names></string-name> (<year>2004</year>). <article-title>Reflections on Historical Time</article-title>. <source>Critical Inquiry</source>, <volume>30</volume>(<issue>4</issue>), <fpage>691</fpage>&#8211;<lpage>701</lpage>. DOI: <pub-id pub-id-type="doi">10.1086/423768</pub-id></mixed-citation></ref>
<ref id="B41"><label>41</label><mixed-citation publication-type="webpage"><string-name><surname>Pasquinelli</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Joler</surname>, <given-names>V.</given-names></string-name> (<year>2020</year>). <article-title>The Nooscope Manifested: AI as Instrument of Knowledge Extractivism</article-title>. Available at: <uri>https://nooscope.ai/</uri>. DOI: <pub-id pub-id-type="doi">10.1007/s00146-020-01097-6</pub-id></mixed-citation></ref>
<ref id="B42"><label>42</label><mixed-citation publication-type="book"><string-name><surname>Rivas San Mart&#237;n</surname>, <given-names>F.</given-names></string-name> (<year>2019</year>). <source>Internet, Mon Amour</source>. <publisher-loc>Santiago</publisher-loc>: <publisher-name>&#201;cfrasis</publisher-name>.</mixed-citation></ref>
<ref id="B43"><label>43</label><mixed-citation publication-type="journal"><string-name><surname>Rouvroy</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Berns</surname>, <given-names>T.</given-names></string-name> (<year>2013</year>). <article-title>Algorithmic Governmentality and the Prospects of Emancipation</article-title>. <source>Reseaux</source>, <volume>177</volume>(<issue>1</issue>), <fpage>163</fpage>&#8211;<lpage>196</lpage>. DOI: <pub-id pub-id-type="doi">10.3917/res.177.0163</pub-id></mixed-citation></ref>
<ref id="B44"><label>44</label><mixed-citation publication-type="journal"><string-name><surname>Searle</surname>, <given-names>J.</given-names></string-name> (<year>1980</year>). <article-title>Minds, Brains and Programs</article-title>. <source>Behavioral and Brain Sciences</source>, <volume>3</volume>(<issue>3</issue>), <fpage>417</fpage>&#8211;<lpage>457</lpage>. DOI: <pub-id pub-id-type="doi">10.1017/S0140525X00005756</pub-id></mixed-citation></ref>
<ref id="B45"><label>45</label><mixed-citation publication-type="book"><string-name><surname>Sloterdijk</surname>, <given-names>P.</given-names></string-name> (<year>2017</year>). <chapter-title>Wounded by Machines</chapter-title>. In <source>Not Saved: Essays After Heidegger</source> (pp. <fpage>217</fpage>&#8211;<lpage>236</lpage>). <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Polity</publisher-name>.</mixed-citation></ref>
<ref id="B46"><label>46</label><mixed-citation publication-type="book"><string-name><surname>Simondon</surname>, <given-names>G.</given-names></string-name> (<year>2018</year>). <source>On the Mode of Existence of Technical Objects</source>. <publisher-loc>Minneapolis, MN</publisher-loc>: <publisher-name>University of Minnesota Press</publisher-name>.</mixed-citation></ref>
<ref id="B47"><label>47</label><mixed-citation publication-type="book"><string-name><surname>Steyerl</surname>, <given-names>H.</given-names></string-name> (<year>2017</year>). <chapter-title>A Sea of Data: Apophenia and Pattern (Mis)Recognition</chapter-title>. <source>Duty Free Art: Art in the Age of Planetary Civil War</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Verso</publisher-name>.</mixed-citation></ref>
<ref id="B48"><label>48</label><mixed-citation publication-type="book"><string-name><surname>Stiegler</surname>, <given-names>B.</given-names></string-name> (<year>2010</year>). <source>For a New Critique of Political Economy</source>. <publisher-loc>Malden, MA</publisher-loc>: <publisher-name>Polity</publisher-name>.</mixed-citation></ref>
<ref id="B49"><label>49</label><mixed-citation publication-type="book"><string-name><surname>Stiegler</surname>, <given-names>B.</given-names></string-name> (<year>2011a</year>). <source>Technics and Time, 3: Cinematic Time and the Question of Malaise</source>. <publisher-loc>Stanford, CA</publisher-loc>: <publisher-name>Stanford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B50"><label>50</label><mixed-citation publication-type="journal"><string-name><surname>Stiegler</surname>, <given-names>B.</given-names></string-name> (<year>2011b</year>). <article-title>Suffocated Desire or How the Cultural Industry Destroys the Individual: Contribution to a Theory of Mass Consumption</article-title>. <source>Parrhesia</source>, <volume>13</volume>, <fpage>52</fpage>&#8211;<lpage>61</lpage>.</mixed-citation></ref>
<ref id="B51"><label>51</label><mixed-citation publication-type="book"><string-name><surname>Zylinska</surname>, <given-names>J.</given-names></string-name> (<year>2020</year>). <source>AI Art: Machine Visions and Warped Dreams</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Open Humanities Press</publisher-name>.</mixed-citation></ref>
</ref-list>
</back>
</article>