Human computers, the laborious history of computing
Introduction
Human Computers is a media archaeology research project that aims to unravel the intricate entanglement between computing and capitalism through the prism of labor. It is an attempt to analyze the historical bonds that tie labor division together with mechanized computing and, by extension, with what is nowadays called "Artificial Intelligence".#media archeology, #labor
Human Computers does not unfold the whole colonial and extractivist logic behind computing – the extraction of rare minerals on which it is built, the full extent of the infrastructures it requires, its ecological footprint and its ever-growing need for water and energy, nor its end-of-life. We – the art collective RYBN – have narrowed our focus here to the human condition inside the loop of computational processes, defending the hypothesis that human workers are historically and structurally at the core of these processes, from Gaspard de Prony's 1793 calculation manufacture to the latest generative machine learning models. En passant, autonomous and self-driving systems reveal themselves to be a structural myth, both for technology and for liberalism, as they are indifferently invoked by the Big Tech behemoths, the car industry, or the financial markets – the famous invisible hand.
From 2005 to 2015, with the project Antidatamining,[1] we investigated the growing use of data on the financial markets. By extension, this led us to reverse-engineer the designs of some High-Frequency Trading [HFT] algorithms. We inspected the specificities of the technical infrastructures allowing those algorithms to run. Further, we researched the causes of some of the most iconic algorithmic market crashes – the original Flash Crash (May 6, 2010), the Knightmare (2012), and the Hash Crash (2013) – and learned about the bizarre folklore of the financial world, from the most eccentric pre-algorithmic signal-based trading methods (Bachelier's Brownian motion, Gann's esoterism, Malkiel's darts, etc.) to the computerized hacking practices used to beat the market (Front Running, Quote Stuffing, Momentum Ignition, etc.), by way of the strange menagerie of Wall Street animal-traders (blindfolded monkeys, circus cats, turtles, etc.).#finance, #algorithm
On September 11, 2011, we launched our home-made low-frequency kamikaze trading bot as an extra-disciplinary performance that involved buying and selling stock on the stock market. Our algorithm ADM8[2] was built on a multilayer perceptron, following Frank Rosenblatt's implementation (1958) of McCulloch and Pitts' model (1943) – the first neural network, and one of the most primitive forms of machine learning.#finance, #Frank Rosenblatt, #neuron
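For readers unfamiliar with the model, Rosenblatt's learning rule can be sketched in a few lines. This is a minimal single-layer illustration, not the actual ADM8 code; the training data (the logical AND function) and the parameters are our own toy choices:

```python
# Minimal single-layer perceptron in the spirit of Rosenblatt (1958):
# weights are nudged toward inputs that were misclassified.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Step activation: fire (1) if the weighted sum crosses the threshold
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            # Rosenblatt's update rule: w <- w + lr * error * input
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Learn the linearly separable AND function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on a correct separating line; the multilayer variants used later stack such units to handle non-separable problems.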

Two years later, a post on Twitter's engineering blog caught our attention. Improving Twitter Search with Real-Time Human Computation[3] explained how Twitter engineers were trying to correlate users' searches with commercial results with the help of workers hired on MTurk, the infamous crowdsourcing platform launched by Amazon in 2005. This post broke with the narrative of algorithmic autonomy, very popular at the time, and hinted at how algorithms were embedding human workers in their chain of operations. This concept, known in computer engineering as Human-in-the-loop, became the starting point of our investigation into digital labor, from which RYBN's Human Computers, a performative installation, emerged.[4]#amazon, #crowdsourcing
This text ties together different motifs of our research for Human Computers. Its structure follows Amazon's motto, written in capital letters on the walls of all its warehouses: "Work hard. Have fun. Make history". We take this imperative and split it into sections to provide a materialist re-reading of the slogan: 'Work hard' focuses on labor and its quantification, 'Have fun' discusses aspects of gamification, and 'Make history' suggests a materialist reading of historical events.#amazon
Human Computers is constructed upon multiple genealogies. We borrow the term genealogy from Michel Foucault, who suggested moving behind the views of established institutions "to discover in a wider and more overall perspective what we can broadly call a technology of power." He suggested replacing "a genetic analysis through filiation with a genealogical analysis", which includes the reconstruction of "a whole network of alliances, communications, and points of support."[5]#Michel Foucault
In short, the goal is not to provide an exhaustive historical account, but an analysis beyond the established canon. Ultimately, we weave our own artistic projects into the emerging narration. These demonstrate that reality cannot just be described: in an act of artistic and potentially social intervention, it can be altered.
Work Hard
The first genealogy stretches from Gaspard de Prony's calculation manufacture in 1793 to Amazon's algorithmic management in the 2010s.[6]#Gaspard de Prony
Manufacturing computation
Prony's calculation manufacture highlights the influence of Adam Smith's division of labor over computation and offers an interesting matrix for understanding the computer's origins and effects.[7] In 1791, under the French Revolution, Gaspard de Prony was appointed director of the Bureau du Cadastre, charged with establishing a national cadaster that would enable the emerging civic state to collect land taxes. The cadaster brought a centralized and uniform vision of the whole territory according to the needs of the state. It also generalized the model of individual land property at the expense of a diversity of customs, rights of use, and common lands. The stakes were high. The task was huge – inhuman, one might say. To facilitate land measurements, Prony wanted to assemble a set of printed logarithmic tables in the new decimal metric system, which afforded the centesimal division of the circle into grades using logarithms. That became the main motivation for his calculation manufacture, which one could describe as the very first mechanized computer.#Gaspard de Prony, #Adam Smith, #division of labor
The organizational layout was inspired by Adam Smith's theory of the division of labor. In The Wealth of Nations (1776), Smith included a famous description of the pin factory and demonstrated that instead of having one worker complete all production steps, it was more efficient to assign each worker to a specific task of pin production and move the parts between them. Prony knew about Adam Smith's theory and decided to organize the calculation process and teamwork as a modern manufacture, as he pointed out in his later memoirs: "One could manufacture logarithms as easily as one manufactures pins".[8]#division of labor, #Gaspard de Prony

The computation labor was strictly divided and hierarchized. At the top of the pyramid, a group of skilled mathematicians designed the logarithmic functions. A team of intermediary mathematicians then transposed the main formulas into a step-by-step process of simple calculations and controlled the results. At the bottom of the pyramid, a team of up to seventy-five human computers executed basic and repetitive calculations all day long to compute all the intermediary steps, writing down the results by hand on pages compiled in sixteen in-folio volumes.[9] Each human computer was able to execute an average of 1,000 additions and subtractions, and to produce 200 results, per day. According to Lorraine Daston, the computing work was so repetitive and simplified that one can say it was truly executed mechanically.#division of labor, #computing, #human computers
The application of the division of labor to computation at such a scale brought new kinds of errors – errors that, later on, Charles Babbage and his followers attributed precisely to human computers. Prony's organization produced two specific types of errors: miscalculations, attributed to the fast and repetitive conditions of work, and typographical errors in the table printing process. The calculations were done in a first quick round, handwritten as tables, and the work was completed after a few months, in March 1795. To reduce the error rate, Prony decided to recompute everything, and all the calculations were reprocessed from scratch. The search for errors, using this new version as a comparator, took three years to complete. In the meantime, the typesetting of the still erroneous tables had already been undertaken by the publisher, Didot, as typesetting and layout took a long time in themselves. Only half of them were completed when money dried up, due to the financial crisis of the assignat, and the project ultimately failed.[10] With Bonaparte's coup d'État, the change of regime to the Empire marked the end of Prony's Bureau du Cadastre in 1802. The publication of Prony's logarithmic tables remained unfinished. However, the handwritten manuscripts were preserved in the archives of the Paris Astronomical Observatory, and they have since become an important legacy for mechanized computing. Despite this failure, if anything, Prony's manufacture marks the birth of the profession of the human computer.#Charles Babbage, #Gaspard de Prony, #algorithm

A few years later, in 1820, Charles Babbage, who was researching how to automate the calculation of logarithmic tables, visited Paris to consult Prony's folio manuscripts at the Paris Observatory. In 1832, he published On the Economy of Machinery and Manufactures, where he also discussed Prony's system. In a prior letter to Sir Humphry Davy, he proposed to replace the human workers with machines, to reduce the error rate of Prony's organization, which he attributed to the human factor. Babbage conceived his difference and analytical engines with the explicit goal of reducing the need for human labor and lowering overall costs: "I think I am justified in presuming that if engines were made purposely for this object, […] the tables could be produced at a much cheaper rate; and of their superior accuracy there could be no doubt".[11] However, without unfolding the whole Babbage story, his analytical engine, often presented as the very first computer, only existed in diagrams and writings, and was never constructed during Babbage's lifetime. Therefore, like Prony's manufacture, it joins the extensive list of 'white elephants'.#Charles Babbage, #algorithmic intensification

At the same time, amid intensive scientification, the demand for calculation was rising, and computing labor power was required by a growing number of disciplines. According to Grier, calculations were primarily required in astronomy and meteorology; from 1810 to 1950, human computers were working in observatories all over the world, with increasing manpower. Two notable projects exemplify the growing need for calculations: the Sky Map Project and the Astrographic Catalogue, both initiated in 1887 by the observatory of Paris – whose Bureau of Measurements was directed by Dorothea Klumpke – in collaboration with twenty observatories around the world.[12] The global ambition of these projects, to index and map the positions of millions of stars, as well as their internationalization, led to an explosion in computing power needs and, thus, in the demand for human computers.#human computers

Slowly, the gender balance of the computers shifted to almost exclusively female, as suggested by the short name given in 1895 to the Computing Division of the Toulouse Observatory, "Bureau des dames", or by the name given to the Harvard Observatory's computing department in 1876, informally known as 'Pickering's Harem'.[13] This shift was initially motivated by economic reasons – women were paid half as much as men – but also by deeply rooted misogynistic prejudices attributing to women the ability to work on small, painstaking, and unimportant tasks.[14] According to Lorraine Daston and her research on the women computers of the Greenwich Observatory, the profession became gender-exclusive around 1930.[15] In 1944, a unit of computing power was even proposed: the 'Kilogirl'.[16]#human computers, #algorithm
The need for more calculation continued to spread in the scientific world with the emergence of statistics at the end of the 19th century, following the works of Francis Galton and Karl Pearson – two notorious eugenicists[17] – and, for instance, the establishment of the Department of Applied Statistics at University College London in 1913, which employed around ten computers. Then, inevitably, the First World War generated a high demand for computing power for military needs, labor assumed in the US by a group of around sixty computers at the Aberdeen Proving Ground in Maryland, and by an additional group of ten at the Washington Experimental Ballistics Office – where Norbert Wiener, father of cybernetics, was working as a mathematician.#eugenics, #human computers

In 1938, due to the ever-growing need for calculation power, the US government opened an entire state department of calculation, known as the Mathematical Tables Project. Born during the economic crisis, it was a direct child of the New Deal's Works Progress Administration, under the direction of Gertrude Blanch, with, at some point, up to 450 computers. With the Second World War, the Mathematical Tables Project also responded to a growing demand for calculations from laboratories, the military, nuclear research, aeronautics, etc.[18] Obviously, the development of the nuclear bomb and the Manhattan Project contributed largely to this growing demand, even though Los Alamos had its own internal computing division, known as the T-5 group.[19]#human computers, #computing
After WWII, with the Cold War and the Space Race, Langley became the nodal center of the demand for calculations, from 1943 to 1953.[20] In 1948, John von Neumann conducted a comparative speed study on linear programming between the ENIAC electronic computer and the Mathematical Tables Project workers.[21] The calculation required "29,856 additions, 15,315 multiplications, and 1,243 divisions", and the study concluded that machines could achieve this type of calculation much faster than humans – in only nine hours. This study, according to D.A. Grier, marks the beginning of the decline of the profession. The cyberneticians began to build their glorious narrative of autonomous machines on von Neumann's study, a narrative that retroactively rendered the human computers invisible in the eyes of the general public for years.[22]#computing, #human computers


During a residency at PACT Zollverein, Essen, while working on the project Human Computers, RYBN subscribed to the Amazon MTurk platform and recorded the keyboard and mouse movements of several tasks, visualizing them using a method inspired by the Gilbreths' chronocyclegraph.#division of labor, #art
Computing the factory
A second genealogy of the Human Computers links together human metrics, the science of work and management, and the way computation has backpropagated into the factory. Measurement, quantification, and computation of workers' gestures and timings are constitutive of the whole process of labor rationalization that prefigures the invention of management. With the division of labor, information needs to transit horizontally and vertically, through the production line and through the chain of command, to organize and prioritize orders, tasks, production, control, and surveillance. According to the historian Anson Rabinbach, the optimization of work became the obsession of the modern period, as science was gradually placed at the service of industry and the productivist society.[23]#management, #division of labor
Human metrics were first designed in the laboratories of physiologists and ergographers. At the turn of the 20th century, they made their way into factories and became the vanguard of the scientific organization of work. From industry, metrics then spread to the whole of society – war, school, professional orientation – before being massively deployed in the first quarter of the 21st century by the gig economy platforms, as well as into our leisure time, our social relationships, our homes, and our intimacies. Thus, this genealogy integrates the increasing optimization and rationalization of labor up to its 'algorithmic' version. With the Industrial Revolution, the human being became a 'human motor', incorporated into the thermodynamic equation: "The expense of a motor is its consumption of energy, either in fuel for inanimate motors, or in food for animate motors".[24]#metrics, #algorithmic intensification
With the Industrial Revolution, technique became the almost exclusive interface for understanding and studying the world, with an asymmetrical distinction between an 'observing subject' on the one hand and an 'observed object' on the other. According to Hannah Arendt, modernity starts with the distancing from the world observed through the telescope,[25] and, according to Jonathan Sterne, with the distancing from the body through the stethoscope.[26] The metrics of the human body are accompanied by the fragmentation of bodies and the instrumentalization of beings.#Hannah Arendt, #Modernity
Human-motor thermodynamics
The measuring of human-motor thermodynamics began, so to speak, with the physiologist Hermann von Helmholtz, who transposed the principle of conservation of energy to humans at work. The mechanistic, thermodynamic conception of the human body became the norm, punctuated by the invention of all kinds of instruments and devices to measure human activity, quantified and translated into numerical terms. Following the first law of thermodynamics, metrics moved from organ to organ, exploring and isolating each organ under the conditions of labor.#thermodynamics
We will illustrate this point by focusing on the lungs, the heart, the metabolism, and the muscles. To measure lungs and breathing capacities, Edward Kentish's Pulmometer (1813) was followed by a long series of spirometers, including that of the English physician John Hutchinson (1846). The labor physiologist Jules Amar later used it to calculate the lowest energy expenditure required to achieve a simple industrial task involving work tools. The broader metabolism was studied and quantified with the Respiration Chamber, conceived by Max von Pettenkofer and Carl von Voit in 1866, and then with the Calorimeter, an 1889 invention of Max Rubner. In parallel, the heart and circulatory system became the focus of Carl Ludwig's Kymograph (1850), and the German physiologist Karl von Vierordt's Sphygmograph (1854) was able to measure blood pressure. That apparatus was improved by Étienne-Jules Marey in 1863, to graphically record pulse waves using pneumatic tubes. Even if Marey never worked directly on labor metrics, his Graphical Method (1878) marks a turning point in the development of labor-measuring instruments, by offering the possibility of recording data graphically.[27] In the field of labor and muscle studies, Helmholtz and Marey were also responsible for the Myograph, which provided the first graphical recordings of muscular fatigue and inspired Angelo Mosso to create his fatigue curves. Mosso, a physiologist and social educator from Turin, spent his life searching for a vaccine against fatigue. With the second law of thermodynamics and the discovery of entropy, fatigue was seen as a disease and a resistance to productivity. Mosso then developed the Ergograph in 1884, enabling muscular fatigue to be traced and measured.#measuring, #dead media
In parallel, these scientists realized that the ability to work was not just a muscular function, but also a question of will. This realization was followed by an avalanche of apparatuses, which the next section explores.


Mental effort metrics
In 1850, Helmholtz succeeded in measuring the speed of nerve impulses, which he called physiological time. In the same vein, Franciscus Donders, a Dutch ophthalmologist, developed the Noematachograph in 1865 – a derivative of the famous Phonautograph by Édouard-Léon Scott de Martinville. With this apparatus, Donders was able to measure and record the 'speed of thought', or more precisely, the reaction time to a given stimulus. Opening the field of mental chronometry, this apparatus is the ancestor of attention tests in cognitive neuroscience. In 1895, Hermann Griesbach developed the Esthesiometer, which measured mental fatigue through the tactile sensitivity of the skin. At the turn of the twentieth century, Alfred Binet, a French psychologist and hygienist, studied mental fatigue and carried out intelligence tests in schools to identify bad pupils. His efforts culminated in 1905 with the adoption of a metric intelligence scale (the precursor of IQ tests), designed to assess children's aptitudes in relation to the occupations they were destined to exercise. This also marks the emergence of professional orientation.
The German psychiatrist and eugenicist Emil Kraepelin counted the errors observed in a series of simple tasks to be performed within a given timeframe – such as memorizing a sequence of numbers. Errors were taken as the most obvious sign of mental fatigue. Kraepelin also invented a system for tracking students according to their work capacity, which went hand in hand with the needs of a civilization based on productivity. To complete the panel of mental effort metrics: in 1922, Jules Amar invented the Psychograph – a device largely inspired by Donders' Noematachograph – which he applied to workers and soldiers to measure and record their sensory acuity and attention.[28]#measuring, #neuroscience, #dead media


Out of the laboratory to infuse society
In 1913, Frederick Winslow Taylor published his Principles of Scientific Management. Task timing and the chronometer were at the heart of his method. One year later, Jules Amar suggested dividing individuals according to "their degree of intelligence and their personal characteristics, and to distinguish those who are good at fast work from those who work slowly." According to Amar, efficiency was imposed by industrial production in order for every worker to "bring out all his abilities, so that he increases his productive power".[30] The same argument was summed up by Frederick Taylor as 'the right man at the right place'. This utilitarian vision of society, mingled with eugenicist tendencies dictating the role of each individual, spread broadly at the beginning of the 20th century.#Taylorism, #division of labor
Frank Gilbreth, an American engineer, embraced the idea of scientific management and the project of optimizing industrial work by dividing up and simplifying the worker's movements. In 1913, Frank and Lillian Gilbreth developed the Chronocyclegraph, applying Marey's graphic method and chronophotography to study and optimize workers' movements in the factory. The rest of the story is a classic tale from economics courses. Taylor's principles radicalized into Fordism in the 1920s. With Henry Ford – a notorious antisemitic propagandist and Nazi regime sympathizer – the car industry and the assembly line itself became the laboratory of 'scientific management'. Then, after the Second World War, in the 1960s in Japan, the Toyota Production System, under the influence of the engineer Taiichi Ohno, refined labor organization with new instruments – the Just-In-Time production method.#Frank & Lilian Gilbreth, #Taylorism, #assembly line, #dead media

Algorithmic management
These two genealogies – that of manufactured computation and that of labor metrics – merge in the rise of a new type of management at the beginning of the 21st century: algorithmic management.#management
This data-driven type of management derives from the platforms' principles of disruption, market concentration, digital surveillance, and cybernetic feedback, applied both to labor and to market capitalization performance. Amazon incarnates this new type of extreme management so perfectly that some have proposed naming it Amazonification.[32] Several of Amazon's patents unveil the most dystopian aspects of this new management regime, as we explored in our collection of dystopian patents, titled IPPI/CC.[33] Patents like System And Method For Transporting Personnel Within An Active Workspace (2016) or Ultrasonic Bracelet And Receiver For Detecting Position In 2D Plane (2017) present carceral-type devices for workers.#amazon


The principles of algorithmic management can be summarized by taking a quick tour of one of Amazon's warehouses. Observing its inner mechanisms and daily routines casts a crude light on algorithmic governmentality[34] in several acts. First, enter any warehouse of the company and look around. On the wall, written in giant letters, you will find the motto of the company: Work Hard, Have Fun, Make History. Then observe the workers: they are guided to the items by an algorithm, through text-to-speech audio.#amazon
In the 2018–2023 version of the installation of the project Human Computers, called Zugzwang, visitors are guided through the installation by a synthetic voice, within a square of 12 tables. On each table, the history of digital workers and human computers is organized by topic, one per table. At each table, the audio guide asks the visitor to execute a task within a time limit. By the end of the tour, without knowing it, the visitors have executed the knight's tour problem. Through this process, visitors get a glimpse of the internal management methods of Amazon's warehouses.#art, #human computers
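The knight's tour – moving a chess knight so that it visits every square of a board exactly once – is a classic algorithmic puzzle, usually solved by backtracking combined with Warnsdorff's heuristic. A minimal sketch follows; the 5×5 board size is our own choice for illustration, unrelated to the installation's 12 tables:

```python
# The knight's tour: visit every cell of a board exactly once with knight moves.
# Backtracking with Warnsdorff's heuristic: always try first the move that
# leaves the fewest onward options, which prunes the search dramatically.
N = 5
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def onward(board, r, c):
    """Unvisited squares reachable by a knight from (r, c)."""
    return [(r + dr, c + dc) for dr, dc in MOVES
            if 0 <= r + dr < N and 0 <= c + dc < N and board[r + dr][c + dc] is None]

def tour(board, r, c, step, path):
    board[r][c] = step
    path.append((r, c))
    if step == N * N - 1:          # every square visited
        return True
    # Warnsdorff: prefer candidate squares with the fewest further options
    for nr, nc in sorted(onward(board, r, c), key=lambda p: len(onward(board, *p))):
        if tour(board, nr, nc, step + 1, path):
            return True
    board[r][c] = None             # backtrack
    path.pop()
    return False

board = [[None] * N for _ in range(N)]
path = []
tour(board, 0, 0, 0, path)
print(len(path))  # 25: a full open tour from the corner
```

An open 5×5 tour starting from a corner is known to exist, so the backtracking search is guaranteed to find one; with the Warnsdorff ordering it does so almost instantly.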
Instead of replacing workers with robots, workers are turned into robots and follow automatic instructions. The items on the shelves are not organized logically – or at least not in terms of human logic – but randomly. An Amazon manager said: "It is not about learning where things are, or having to memorize: we make the tasks as simple as possible" – a reminiscence of the calculation manufacture. For each worker, productivity is around 1,000 to 1,200 items a day, or two per minute.#amazon
The algorithmic system that runs the warehouse is called the Anytime Feedback Tool; it is driven by data and performance. It monitors and tracks employees' performance continuously and communicates the results to all employees through another management tool, the Rank and Yank system, in which employees are ranked in constant competition against each other, with annual performance reviews. Workers have no time for rest or social interactions. Overruling an instruction – for example, because a package would be too heavy – is not an option. The human is considered the weak link in the efficiency chain – but hasn't this always been the case since the rationalization of labor? In return, workers feel dehumanized, are stressed to their limit, and their work is unsustainable from a health-and-safety perspective.#algorithmic intensification
To summarize: under the algorithmic regime, humans remain cheaper than robots.[35] Humans are deemed small cogs in a larger algorithmic engine, where the algorithms direct and command them. Algorithmic decisions are not to be discussed or criticized by humans. Algorithmic routes and paths are inhumanly designed and can lead to inhuman trajectories to follow and inhuman timings to respect. These algorithmic management methods echo the warnings of Norbert Wiener, the father of cybernetics, stated both in his scientific writings – for instance, in The Human Use of Human Beings (1950) – and in his science fiction works.[36] The most direct connection, however, may be the letter he wrote in 1949 to Walter Reuther, the president of the United Automobile Workers, to warn him against automation and its social consequences.#cybernetics, #algorithmic intensification, #Norbert Wiener
Today, all the large transnational capitalist tech platforms use such managerial methods: Amazon, Uber, Deliveroo, etc. But more traditional companies and, even more concerning, state and public services have also introduced these measures. During the COVID-19 home-office expansion, for instance, the management regime imposed on workers and students evolved into dystopian surveillance, with more and more 'bossware' tools. Slowly but steadily, algorithmic management is becoming the norm.[37]#management

Have Fun
Simulacra
The third genealogy of Human Computers follows the history of automata. It is a history of hoaxes and illusionism, perfectly incarnated by two legendary automata of the 18th century: the Digesting Duck and the Mechanical Turk.
In 1739, Jacques de Vaucanson built the Digesting Duck. This automaton captured the spirit of its time, René Descartes having set philosophy on its modernist path just a century earlier, whereby animals were considered mere machines.[38] Vaucanson's machine marveled audiences during its demonstrations, thanks to the realism and perfection of its mechanisms. During these demonstrations, the machine was literally fed, and inside the Duck a mechanism, invisible to the audience's eyes, supposedly reproduced the digestive system. Food was then transformed and expelled from the machine as feces. However, upon closer analysis, the machine was revealed to be a fake. The magician and illusionist Jean-Eugène Robert-Houdin inspected the duck in 1844. He discovered that the duck's excreta consisted of "pre-prepared breadcrumb pellets, dyed green". He described the duck as "a piece of artifice I would happily have incorporated in a conjuring trick".[39]#Fake, #automation, #dead media
In 1770, Wolfgang von Kempelen constructed the infamous Mechanical Turk. This automaton was supposed to play chess and even to be able to complete the knight's tour problem. Like the Digesting Duck, the Mechanical Turk amazed audiences. However, the machine turned out to be a fake: inside, a human operator was hidden. The automaton was bought in 1805 by Johann Nepomuk Maelzel, who also patented the metronome (another disciplinary instrument of labor performance, used for learning stenography).#Fake, #automation, #artificial artificial intelligence, #dead media

The Turk toured the world and played chess games against celebrities of the time, such as Napoleon Bonaparte and Charles Babbage. It was examined by Edgar Allan Poe in his 1836 essay Maelzel's Chess Player, in which the author attempts to deduce its fraudulent modus operandi. Charles Babbage, again, was inspired by his experience against the Turk: after 1820, when he publicly demonstrated his calculating machinery, he opened the exhibits with a presentation of the Silver Lady, a ballerina automaton: "At these soirées, Babbage displayed a model of his early difference engine – a brass calculating machine capable of tabulating higher-order polynomial functions – alongside a silver automaton in the form of a dancing ballerina. Most guests were drawn to the ballerina."[40]#Fake, #Charles Babbage, #automation, #dead media
Illusionist tricks were not reserved for the automaton folklore of the 19th century; they were also at the heart of the pioneering research on Artificial Intelligence. Think, for example, of the Imitation Game, a test proposed by Alan Turing in his 1950 article Computing Machinery and Intelligence.[41] The setup is a blind conversation in which the tested subject has to distinguish whether their interlocutor is a human or a machine. Intelligence here is not reproduced through an analysis of its possible workings, but only through its appearance – a system not so different from the Duck or the Mechanical Turk. As of today, the Turing test is still in use to measure a chatbot's performance.#Alan Turing, #chat bot
We could further refer to ELIZA, another conversation-based model, or chatbot, proposed in 1966 by Joseph Weizenbaum. ELIZA was able to mimic human conversation based on a simple language trick, in which answers were returned as questions, inspired by the client-centered approach of Rogerian psychotherapy. ELIZA offers an interesting surface of projection: even when its human users know that the conversation is run by a computer, they still attribute feelings to ELIZA – a strong case of humanizing projection. Weizenbaum himself was surprised that such a simple trick managed to elicit such empathy. The trick is used nowadays to gather confidences in situations where talking to other humans remains an ordeal.[42]#chat bot
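The 'simple language trick' can be illustrated in a few lines: pattern matching plus pronoun 'reflection', so that a statement is echoed back as a question. The rules below are our own toy examples, far smaller than Weizenbaum's original DOCTOR script:

```python
import re

# ELIZA-style exchange: no understanding, just pattern matching plus
# first-/second-person pronoun swapping ("reflection").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r".*"), "Please tell me more."),   # catch-all fallback
]

def reflect(fragment):
    # Swap pronouns so the phrase can be echoed back at the speaker
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        m = pattern.match(sentence)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I need my rest"))  # Why do you need your rest?
print(respond("I am tired"))      # Why do you say you are tired?
```

The machine never models meaning; the user's own words, minimally transformed, do all the conversational work – which is precisely why the projection of empathy onto ELIZA was so striking.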

The suspicion of cheating continues to haunt all automated systems produced since the Duck and the Turk. In 1997, during the rematch between Deep Blue and Kasparov, a controversy arose over the 2nd game. Kasparov resigned and accused IBM of cheating, alleging that a human grandmaster had played certain moves (36.axb5! axb5 37.Be4!). Kasparov requested the logs of the machine, but IBM refused to provide them and dismantled the machine.[43]#automation, #Fake
Artificial Artificial Intelligence
So it is not by chance that Amazon's crowdsourcing platform, MTurk, refers to the fraudulent chess automaton. When the platform emerged, Amazon proposed the concept of Artificial Artificial Intelligence, or AAI. Jeff Bezos mentioned it in a 2007 interview: "Normally, a human makes a request to a computer, and the computer does the computation of the task. But artificial artificial intelligences like Mechanical Turk invert all that".[44] The same idea circulates under other names: Human-in-the-loop – as we have seen earlier – Pseudo-AI, Faux-AI, the Wizard of Oz technique, Potemkin-AI, or even, sometimes, Human-Powered AI. Over the years, many systems promoted as AI-powered have been disclosed as fraudulent, relying on the labor of hidden workers.#Fake, #artificial artificial intelligence
These are not isolated occurrences. A BBC investigation revealed that SpinVox, an app that was supposed to use AI to convert voicemails into text messages, relied on human labor, despite the company having won a 2006 innovation award: "In 2008, Spinvox (…), was accused of using humans in overseas call centres rather than machines to do its work".[45] In 2015, Facebook's digital assistant M was revealed to be largely human-powered. "It's primarily powered by people," said then Facebook CTO Mike Schroepfer, "but those people are effectively backed up by AIs." Facebook shut down M in 2018.[46] In 2016, another digital assistant, Clara, from X.ai, was accused of being human-powered: "Behind the artificial intelligence personal assistants and concierges are actual people, reading e-mails and ordering Chipotle".[47] In 2018, we learned that Google Duplex, another virtual assistant, also leaned on the human: "Google Duplex contacts hair salon and restaurant in demo, adding 'er' and 'mmm-hmm' so listeners think it's human".[48] The same year, the documentary The Cleaners investigated the platforms' content moderation, based on a large set of interviews with workers in Manila, shedding light on the extreme conditions of these workers, confronted with all kinds of visual and verbal violence. In 2019, an article detailed how, behind Alexa, thousands of workers listen to conversations, which they transcribe, annotate, and feed back into the system, simply to make it work.[49] In 2023, we learned that human workers, paid less than $2 per hour, were needed to classify and filter harmful content to make ChatGPT less toxic.[50] In 2024, Amazon's Just Walk Out system, an attempt at fully automating grocery stores, was disclosed to rely on thousands of workers in India.[51] The year 2025 started with a memo from DiPLab on the human labor behind DeepSeek, while everyone was celebrating the low costs of the disruptive Chinese AI engine.
And last but not least, over the years, many articles have described how self-driving cars rely on remote human drivers, employed by companies such as Zoox, a self-driving car company owned by Amazon; Waymo, owned by Google's parent company Alphabet; or Cruise, owned by General Motors. "After a series of high-profile accidents, they have started to acknowledge that the cars require human assistance".[52] These many examples reveal a pattern of making human labor invisible, fostering a narrative of the 'greatness' of full automation.#artificial artificial intelligence, #google, #openai, #facebook, #data
And these are only a few examples among many. These are not fraudulent systems set up by small startups trying to make a name for themselves, but by the GAFAM themselves: Amazon, Facebook, Google, etc. Looking behind the curtain of automation – of contemporary computing, of algorithms and AI – one can see the army of human workers hired to train the models, annotate the data, moderate the content, and even, sometimes, mimic the procedures.[53]#amazon, #google, #facebook
The GAFAM do not recruit these workers directly, but through various crowdsourcing platforms,[54] the most famous being Amazon MTurk. Multiple other platforms exist, each with its own specialty and its own working conditions. The most notable are: Figure Eight (now called Appen), which provides more than 1,000,000 flexible workers; Scale AI, started in 2016 and funded by Peter Thiel, with crowds of workers in Kenya, Nigeria, Pakistan, Thailand, Vietnam, and Poland; Sama, or Samasource, specialized in breaking down "complex data projects for large tech companies into small tasks" that can be completed "by women and youth in developing countries with basic English skills after a few weeks of training"; clickworker.com, which claims to "use the power of our global crowd of clickworkers to generate, validate and label data"; etc.[55]#amazon, #scaleai, #automation, #data, #labor
All these platforms are dedicated to annotating data – creating, correcting, and cleaning data sets for algorithms and GenAI models – but some also offer content moderation, audio processing, etc. They intervene at different steps of the process: in the training phase, to correct the output or the process, and so on.[56] Workers are usually assigned tasks that they do not – and are not supposed to – understand (vertical division), and which are fragmented (horizontal division). The labor is distributed, usually done remotely from home (here we come back to the cottage-work model of the first human computers working on the almanacs). Workers are outsourced to countries with lower wages, perpetuating the colonial logic, often under a neocolonial discourse claiming mutual development.#data, #division of labor
Nowadays, platforms, in their never-ending quest for the cheapest workers, are experimenting with workers hired in prisons, in China and in Finland, raising the prospect of a Becoming-Carceral for the Internet.[57]

Reacting to these developments, we set up another artistic system in 2019 that took advantage of Amazon Mechanical Turk: a reenactment of Kempelen's Mechanical Turk, in which human players were invited to play against a worker on Amazon MTurk. We called it AAI Chess.[58] When we made our MTurk Chronomatograph [Fig. 8, shown above], we used the platform as workers, executing tasks. For AAI Chess, we used it as a 'requester,' that is, as an employer.#art

Each time a new game is created, a channel is opened with Amazon MTurk, where the game is segmented into moves, which are proposed to the workers as 'HITs' (tasks). Each move opens a small auction in which the fastest 'Turker' gets the job, and the reward (salary) once the task is completed. Each auction has a time limit: if no one takes the job, it is re-proposed with a higher reward; if someone accepts the job quickly, the salary offered for the next task is lowered. As tasks are usually taken as fast as possible, this lowers the salary for most subsequent tasks, and thus the workers' income. This market-in-the-market scheme, through its price mechanics and its impact on the Turkers' rewards, underlines the limits the platform sets on workers' collective organization – and actually makes it possible to game the platform.[59]#art, #amazon, #division of labor
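The price mechanics described above can be sketched as a simple feedback rule. This is a simplified model of the AAI Chess auction, not its actual implementation; the step size and reward floor are assumptions:

```python
def next_reward(reward: float, accepted_quickly: bool,
                step: float = 0.01, floor: float = 0.01) -> float:
    """Simplified model of the AAI Chess auction pricing:
    an unclaimed HIT is re-posted at a higher reward, while a
    quickly accepted HIT lowers the reward offered for the next
    move. Step size and floor are illustrative assumptions."""
    if accepted_quickly:
        return max(floor, round(reward - step, 2))
    return round(reward + step, 2)

# A run of quickly accepted tasks drives the reward down to the floor:
reward = 0.05
for _ in range(6):
    reward = next_reward(reward, accepted_quickly=True)
print(reward)  # → 0.01
```

The asymmetry is the point: since Turkers compete to accept tasks as fast as possible, the rule systematically pushes rewards toward the floor, making the platform's downward pressure on wages visible.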
Toward Laborious Computing
In addition to these relatively specialized activities of annotating data and training models, which employ more than 150 million people across the globe,[60] digital labor studies have shown that all platform users have become potential workers.[61] By using a platform, we are all – willingly or unwillingly – working to train its algorithms. Every CAPTCHA solved, every like on Facebook or Instagram, every rating submitted for a product on Amazon, every note given to an Uber or Deliveroo driver, every movie watched on Netflix participates in training their algorithms. In the age of platforms, we are all human computers.#human computers, #division of labor, #data
That's why, in addition to the existing names for fraudulent systems based on human labor – Human-in-the-loop, AAI, Pseudo-AI, Faux-AI, Potemkin-AI, etc. – we propose a new term, Laborious Computing, meant to reconnect computing with its manufacture origins and to designate every computational process that requires human labor at some point – be it for training, cleaning, processing, etc. The term is proposed not only to de-invisibilize the workers, but to make visible the factory we are all taking part in.#computing, #labor

Make History
Silver Lady Syndrome
How do the techno-sciences 'make history'? Or rather, how do they stage themselves, how do they organize their storytelling, how does tech talk about itself? These questions may amount to the same thing, if we consider that 'making history' today relies mostly on imposing one's narrative and one's semantics. The tech milieu likes to stage its inventions, as can be seen each time the GAFAM introduce a new product or gadget, in events where they showcase their financial power. This might be an inheritance from the ancestral figures of technology, like Nikola Tesla, who staged legendary demonstrations, or Babbage and his Silver Lady – without forgetting the numerous International Exhibitions of the nineteenth and twentieth centuries and their imperial and colonial subtexts, starting with the Great Exhibition of the Works of Industry of all Nations, in London in 1851.#history, #Fake
But let's start from a concrete example: the TED Talks, Silicon Valley's nexus of tech newspeak.[62] At a TED conference, you have eighteen minutes to convince the audience – or, to put it bluntly, eighteen minutes for a scientist or a researcher to secure some funding. To do so, the format encourages speakers to present the most spectacular content possible, in a click-bait-like strategy. Among the most notable TED Talks are Bringing back the Woolly Mammoth to save the world (2017), where genetic engineering is called upon to fight climate change by reviving mammoths, or When Nature Becomes Architect: Growing our next generation of buildings (2017), where we are sold houses that will grow thanks to genetically modified organisms. Tech's wildest fantasies are spread through viral videos.#TED Talk
Another strategy for channeling the narrative and locking down the imaginary can be found in GAFAM patents. We mentioned earlier some of Amazon's mind-boggling patents, famous for their extravagant promises [Figure 14] – like the spectacular Method and System for Anticipatory Package Shipping (US8615473B2, 2013), which states, in a dystopian Minority Report style, that Amazon could send a drone to deliver a parcel to clients before they have even ordered it, simply by predicting the future purchase from their internet browsing activity. When minutely analyzing the GAFAM patents for our work IPPI/CC (2019), we realized that most of the proposed inventions rest not on credible scientific grounds, but on a sci-fi engineer's vision of the world, with very little regard for their social and ecological consequences. Many patents participate in the construction of a specific narrative, a PR operation whose role is to enlarge the Overton window of acceptance for the social uses of technology and to overwrite negative social and ecological consequences.#science-fiction, #Fake
The newspeak of Laborious Computing
In a similar fashion, the specific semantics that have emerged around Laborious Computing attempt to invisibilize human labor (and its costs), satisfy investors, and create general conditions of acceptance. The PR elements, the communication plans of tech companies, and the media coverage they receive are telling.
That's why, in 2020, we began to collect PR elements and news article titles, and to categorize the recurrent keywords displayed on startup websites, in the artistic project Laborious Computing. Retracing the evolution of these language elements between 2020 and the end of 2024, one can observe how the semantics have evolved, from vague promises to save everything (the economy, the world, the climate, etc.) to concrete solutions and pragmatic, affordable outputs in management and human resources. The tech discourse usually builds acceptance for a given technology, while that technology will likely end up as a new brick in the surveillance panopticon. A recent example is the appointment of Paul Nakasone, a former director of the NSA, to the board of OpenAI in June 2024 – a trajectory from techno-solutionism to disciplinary surveillance that Web 2.0, big data, blockchain, and many others have already followed.#discourse, #Fake
In 2020, technological progress was sometimes depicted in negative terms, either as an existential "threat" – invasion, replacement – or as a "risk". But in the world of capital investors, a risk always hides an opportunity. Or a solution. Laborious Computing was there to help with the COVID crisis, to save forests, to fix the climate, to fight discrimination, to prevent suicide, to avert a global economic crisis. Yet it ultimately evolved more pragmatically: Laborious Computing became marketed as a solution for business, for police, for banks, for public surveillance. Then the solution became the product, in which one needed to invest in order to become a leader and embrace the revolution. By the very end of 2024, it revolved around management: a responsible solution that shall help reduce costs, improve decision making, transform human resources, and offer more efficient customer service. In the capitalist's language, 'transform human resources', 'improve decision making', and 'efficient customer service' usually mean reducing overall human costs. As Antonio A. Casilli put it: "The boss always calls himself 'AI' when preparing a redundancy plan."[63]#computing, #AI, #management
A complete analysis of the Laborious Computing newspeak would require a dedicated study of its modalities, in the manner of Victor Klemperer's Lingua Tertii Imperii, the Language of the Third Reich. In 2023, Francis Hunger proposed a lexicon to "unhype" AI,[64] a first effort in this direction. The proposed alternative terminology includes terms such as Automated Pattern Recognition instead of Artificial Intelligence; Machine Conditioning instead of Machine Learning; Weighted Network instead of Neural Network; or Deep Conditioning instead of Deep Learning. Such terms aim to de-anthropomorphize the discussion and to drop the connotations attached to intelligence.#discourse
The term we propose on our side, Laborious Computing, aims at a similar goal – naming things with more precision – but from the opposite angle. It aims to re-anthropomorphize, re-incarnate, re-embody: to underline the human component at the heart of this technology's operational modalities, and to shed light on its extractive and exploitative nature.#discourse
Maslow's Hammer law
Human Computers, as a research project, seeks to understand the position of the human being within the algorithmic chain of command. More than that, we can reformulate the question that Human Computers addresses, following Maslow's hammer (1966): "Once you have a hammer in hand, everything looks like a nail". The question we ask with Human Computers could thus also be formulated as: "What is Laborious Computing's law?" – or even: "What does everything look like when you use a computer?" To answer this question properly, we felt the need to further analyze the hammer in question.#labor
In 2015, at the invitation of PACT Zollverein, we proposed the workshop "Human Perceptron",[65] aiming at this very purpose. We took a primitive form of neural network, a Perceptron – in this case a single-layer one, resembling the one we used for our 2011 trading-bot artwork ADM8 – and decided to make a miniature computing manufacture with the workshop participants.#perceptron, #art

Participants were to compute, by hand, all the single steps required by a Perceptron to complete its goal: identifying and separating two classes of objects. The participants embodied the neural network through all the steps necessary to process a single data point, and through the overall architecture. This provided an opportunity to understand and deconstruct the underlying logic of Laborious Computing. Participants used simple objects, for instance chairs and tables, decided collectively on the metrics used to elaborate the two distinct classes, calculated the separator until the two classes were separated, and then injected new objects with unconventional forms and dimensions to provoke errors in the system. During the process, participants understood the reductionism of the real that is at stake in computational logic, and the inherent biases that infuse the process: each object has to be classified using a very limited number of parameters, which does not do justice to its inherent complexity. The workshop also addresses the importance of training, annotating, and cleaning data. By going through all the steps, the possibility of an error that could challenge the process becomes obvious, given the world's greater complexity and unpredictability. Last, doing everything by hand underlines the laborious nature of the work, and the division of labor at stake.#perceptron, #computing, #human computers, #art
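For readers unfamiliar with the algorithm, the steps performed by hand boil down to the classical single-layer perceptron update rule. The sketch below is illustrative: the two features and the chair/table labels are assumptions, not the metrics chosen collectively in the workshop:

```python
# Minimal single-layer perceptron: the kind of update rule the
# workshop participants computed by hand. The two features and the
# 'chair vs. table' labels are illustrative assumptions.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per feature
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred           # 0 when already classified correctly
            w[0] += lr * err * x1    # each update corresponds to one
            w[1] += lr * err * x2    # round of hand calculation
            b += lr * err
    return w, b

# Two toy classes: 'chairs' (label 0) and 'tables' (label 1)
samples = [(0.4, 0.3), (0.5, 0.2), (0.9, 0.8), (1.0, 0.9)]
labels = [0, 0, 1, 1]
w, b = train_perceptron(samples, labels)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for x1, x2 in samples])  # → [0, 0, 1, 1]
```

Every pass of this loop is one round of collective hand calculation in the workshop – which is precisely what made the laborious nature of the process tangible.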
For us, from an artistic perspective, using the body as a recording medium was the most interesting part. Fatigue and repetition inscribed the experience directly onto the participants' bodies. The never-ending duration of a single classification operation made it even more memorable. Like the name Laborious Computing, the workshop aims to re-embody computational logic.#human computers, #art

Conclusion
Let's summarize. We have seen that computing originated in the division of labor; that factories developed human metrics for optimization purposes, and now rely on algorithmic management based on instant data feedback, in which humans are used as cogs in the machine, whether or not they are paid for it. We have seen that automata were historically indistinguishable from illusionism, and that behind the curtains of some of the latest high-tech systems, workers are exploited to maintain the illusion of automation – and that the whole language surrounding Laborious Computing is set up to preserve that narrative. Meanwhile, below the surface of the tech newspeak, numerous press articles point out, every day, the immense flaws of Laborious Computing as it is forced upon us, in all directions at once.#division of labor, #algorithmic intensification, #automation
First, the technology, as it is sold, is fundamentally flawed. The statistical nature of Large Language Models' inner mechanics, and the fact that they have no grasp of any possible truth, causes them to 'hallucinate' and produce false results (bullshit generators). This may occur because of the quality of the training data, or because their own lower-quality outputs are recursively re-injected into the systems as training data, producing even more bullshit. Their outputs have already polluted and damaged entire software ecosystems, under a flow of data one could consider spam. Some researchers have pointed out that basic LLMs cannot even do a simple addition, and several LLM benchmark tools regularly test them with basic logic problems they are unable to solve. Second, Laborious Computing is ecologically irresponsible. Its technical infrastructures require more data centers, and thus more rare minerals and more concrete. Their water consumption is at least four times higher than expected, and the pollution generated by the GAFAM is 600% greater than what they announced. A recent Microsoft PR campaign tried to prepare US opinion for the reopening of the Three Mile Island nuclear plant, on the very site of the 1979 nuclear incident. Third, all of this feeds a speculative bubble: Warren Buffett sold all his assets linked to Laborious Computing in August 2024, and investors have identified a bubble declared to be on the verge of imploding.#LLM, #Fake
Therefore, with all this in mind, if we consider Laborious Computing flawed, ecologically irresponsible, and breeding a possible world financial crisis, is it really the technology that hallucinates? Or aren't we, much as in our conversations with ELIZA, the victims of a collective hallucination?
This text is a transcript of the lecture given at the symposium If AI was the answer, what was the question, again?, Tuesday, November 5, 2024, organized by the Emergent Digital Media class at AdBK Munich.
[1] RYBN.ORG. 2005-2015. ANTIDATAMINING, http://www.antidatamining.net.
[2] RYBN.ORG. 2011. ADM8, https://adm8.rybn.org.
[3] Chen, Edwin and Alpa Jain, 2013. "Improving Twitter Search with Real-Time Human Computation | R-Bloggers." R Bloggers. January 8, 2013. https://www.r-bloggers.com/2013/01/improving-twitter-search-with-real-time-human-computation.
[4] RYBN.ORG, since 2016. Human Computers, https://rybn.org/human_computers/.
[5] Michel Foucault, 1978. Sécurité, territoire, population. Cours au Collège de France 1977–1978, Edition Michel Senellart. Paris, Leçon du 8 février 1978, p. 121.
[6] For Prony's calculation manufacture see: Peaucelle, Jean-Louis. 2012. "Le détail du calendrier de calcul des tables de Prony de 1791 à 1802", Preprint, The Loria Collection of Mathematical Tables, https://locomat.loria.fr/cadastre/docs/peaucelle2012prony-calendrier.pdf; Roegel, Denis. 2010. "The great logarithmic and trigonometric tables of the French Cadastre: a preliminary investigation", Preprint, The Loria Collection of Mathematical Tables, https://lru.praxis.dk/Lru/microsites/hvadermatematik/hem2download/kap4_projekt_4_2_ekstra_dennis_roegel_history_of_pronys_tables.pdf; Daston, Lorraine. 1994. "Enlightenment Calculations", Critical Inquiry, Vol. 21, No. 1 (Autumn 1994), p. 182–202, https://sites.tufts.edu/models/files/2019/03/daston-tables.pdf; Grier, David A. 2005. When Computers Were Human. Princeton, N.J., Princetown University Press; Laumonier, Alexandre. 2013. 6: Le soulèvement des machines. Zones Sensible, Bruxelles.
[7] According to Grier (2005, chap. 2), the division of labor method has already been applied to computing in 1757, by the astronomer Jean André Lalande. What Prony's calculation manufacture practically introduces as a precedent in the history of mechanized computation is both horizontal and vertical division of labor, 100 years before Taylor.
[8] Prony. 1824. "Notice sur les grandes tables logarithmiques, adaptées au nouveau système métrique décimal" Didot, Paris, p. 35, Quoted in: Peaucelle. 2012.
[9] Peaucelle 2012. p. 14; The group of skilled mathematicians designed the logarithmic functions, and some pivot results have been prepared every 200 steps, leaving intermediary results blank. The sheets which are given to computers contains 200 lines, with, filled at the top line, the pivot result. The computers only have to fill the rest of the pages, 200 results per page and per day (ibid.).
[10] Narron, James and David Skeie. 2014. "Crisis Chronicles: The Collapse of the French Assignat and Its Link to Virtual Currencies Today." Liberty Street Economics. Blog. Federal Reserve Bank of New York, July 11, 2014. https://libertystreeteconomics.newyorkfed.org/2014/07/crisis-chronicles-the-collapse-of-the-french-assignat-and-its-link-to-virtual-currencies-today/.
[11] Babbage also discussed Prony's results in a letter to Sir Humphry Davy, Bart. Babbage, Charles. 1822. On the application of Machinery to the Purpose of Calculating and Printing Mathematical Tables. Booth/Baldwin/Joy. London, July 3, 1822, p. 8–10. https://archive.org/details/TO0E039268_TO0324_PNI-1546_000000/.
[12] See also McCracken, H.J. 2022."How an enormous project attempted to map the sky without computers", Sep 13, 2022. Ars Technica, Online-Magazine. https://arstechnica.com/science/2022/09/how-an-enormous-project-attempted-to-map-the-sky-without-computers/.
[13] Lamy, Jérome. 2006. "The Sky map and the creation of the 'Bureau des dames' at the Toulouse observatory", January 2006, Nuncius. Vol. 21 No. 1, p. 101–120, http://dx.doi.org/10.1163/182539106X00041.
[14] Lechner, Marie and RYBN. 2018. "Human Computers, ou l'histoire laborieuse de la computation", Conference Proceedings of the 2018 Cerisy International Conference on Art, Littérature et Réseaux sociaux", Cerisy-la-Salle, https://art-et-reseaux.fr/human-computers-ou-lhistoire-laborieuse-de-la-computation/.
[15] Daston, Lorraine. 2012. "Calculation and the Division of Labor, 1750-1950", Bulletin of the German Historical Institute, Washington. p. 9–30. https://www.ghi-dc.org/fileadmin/publications/Bulletin/bu62.pdf.
[16] Grier 2005, p. 276.
[17] Galton was not only a statistician, but also a eugenicist, which emphasizes some reminiscence of eugenics within technologies, as shown by numerous studies, incl. Chan, Anita Say. 2025. Predatory Data – Eugenics in Big Tech and Our Fight for an Independent Future. University of California Press; or more specifically on AI, see Gebru, Timnit and Émile P. Torres. 2024. "The TESCREAL Bundle – Eugenics and the Promise of Utopia through Artificial General Intelligence." First Monday, April 2024. https://doi.org/10.5210/fm.v29i4.13636.
[18] Grier 2005, p. 198–219.
[19] Atomic Heritage Foundation. 2017. "The Human Computers of Los Alamos", May 26, 2017, Website. Albuquerque, NM. https://ahf.nuclearmuseum.org/ahf/history/human-computers-los-alamos/.
[20] The book Hidden Figures – The American Dream and the Untold Story of the Black Women Who Helped Win the Space Race (Shetterly, Margot Lee. 2016) established this story in popular culture and was adapted for cinema the same year.
[21] Grier 2005, p. 296, chapter seventeen.
[22] See for example: von Neumann, John, 1966. Theory of Self-Reproducing Automata. University of Illinois Press, Urbana, whose title already implies machines autonomy.
[23] Rabinbach, Anson. 1990. The Human Motor – Energy, Fatigue, and the Origins of Modernity, Basic Books, New York.
[24] Amar, Jules. 1914. Le Moteur humain et les bases scientifiques du travail professionnel, Dunod Editeur, Paris 1914 [1923 reprint], https://gallica.bnf.fr/ark:/12148/bpt6k930152b/f221.image.
[25] Arendt, Hannah. 1958 [1998 reprint]. The Human condition, 2nd ed., University of Chicago Press, p. 248.
[26] Sterne, Jonathan. 2003. The Audible Past, Cultural Origins of Sound Reproduction. Duke University Press, https://doi.org/10.2307/j.ctv11hpj6z.
[27] Marey, Étienne-Jules. 1878. La méthode graphique dans les sciences expérimentales et principalement en physiologie et en médecine, G. Masson Editeur, Paris, https://gallica.bnf.fr/ark:/12148/bpt6k6211376f.texteImage.
[28] Amar, Jules. 1922. "The Psychograph as an Instrument to Measure Working Capacity", Archives of Occupational Therapy. Vol. 1, No 4. p. 265–267, August 1922. https://babel.hathitrust.org/cgi/pt?id=uc1.b4796061&seq=291.
[29] Mosso Angelo. 1880. Sulla circolazione del sangue nel cervello dell' uomo. Coi tipi del Salviucci. Roma. https://archive.org/details/b2239347x/page/12/mode/1up.
[30] Amar 1914, p. 324.
[31] Quoted from Debatty, Regine. 2012. "The Chronocyclegraph". We Make Money Not Art. Blog, http://we-make-money-not-art.com/the_chronocyclegraph/; Frank Gilbreth's original films: https://youtu.be/xdnhEZ-tkOg.
[32] Also known as Uberisation. But the prevalence of Amazon, its market capitalization, and the extent of its empire over our lives make more sense as a concept. See Del Rey, Jason. 2022. "The Amazonification of the American workforce", Vox, online-magazine. April 21, https://www.vox.com/the-highlight/22977660/amazon-warehouses-work-injuries-retail-labor; and MacGillis, Alec. 2021. Fulfillment – Winning and Losing in One-Click America, Farrar, Straus and Giroux, 2021.
[33] RYBN. 2023. Industrial Property Curiosity Cabinet, http://www.rybn.org/IPPI/CC; RYBN. 2023. "Institute of Diagram Studies. Consulter les œuvres" and "Institute of Diagram Studies Dispositif critique de veille et de contrôle d'expansions vectorialistes (DCVCEV)", Multitudes/Icons, issue 93, December 2023, https://www.multitudes.net/institute-of-diagram-studies-consulter-les-oeuvres/.
[34] Rouvroy, Antoinette and Thomas Berns. 2013. "Algorithmic governmentality and prospects of emancipation" Réseaux, Vol 177, No. 1, 163–196, https://shs.cairn.info/journal-reseaux-2013-1-page-163?lang=en.
[35] See Casilli, Antonio A. 2025. Waiting for Robots – The Hired Hands of Automation. Chicago, The University of Chicago Press.
[36] Cassou-Noguès, Pierre. 2014. "Les rêves cybernétiques de Norbert Wiener", Paris, Seuil.
[37] See for example the recent controversy about French social allocations (CAF) in: 2023. "Notation des allocataires : l'indécence des pratiques de la CAF désormais indéniable", La Quadrature du Net. Blog, https://www.laquadrature.net/2023/11/27/notation-des-allocataires-lindecence-des-pratiques-de-la-caf-desormais-indeniable/; or the Olympics implementation of algorithmic surveillance in: 2025. "VSA jusqu'en 2027 : quand le gouvernement ose tout", La Quadrature du Net. Blog, https://www.laquadrature.net/2025/02/07/vsa-jusquen-2027-quand-le-gouvernement-ose-tout/; or in the US, Elon Musk DOGE's plans to use AI to target cuts, Natanson, Hannah, Gerrit De Vynck, Elizabeth Dwoskin and Danielle Douglas-Gabriel. 2025. "Elon Musk's DOGE is feeding sensitive federal data into AI to target cuts", Washington Post, February 6, 2025. https://www.washingtonpost.com/nation/2025/02/06/elon-musk-doge-ai-department-education/
[38] Descartes, René. 1637. Discours de la Méthode. Paris, L'imprimerie de Ian Maire, p. 56. https://archive.org/details/bub_gb_p6Uz87poRdIC/page/n59/.
[39] Wood, Gaby. 2002. "Living Dolls – A Magical History Of The Quest For Mechanical Life by Gaby Wood". The Guardian. February 15, 2002.
[40] Holmes, Richard. 2015. "Computer Science: Enchantress of Abstraction", Nature, vol. 525, p. 30–31, https://doi.org/10.1038/525030a
[41] Turing, Alan M. 1950. "Computing Machinery and Intelligence." Mind, New Series 59, 236: 433–460. https://doi.org/10.1093/mind/LIX.236.433.
[42] Cai, Alice. 2016. "Eliza Re-Examined". Confluence, Online Magazine. January 5, 2016. https://confluence.gallatin.nyu.edu/sections/research/eliza-re-examined
[43] See the documentary, where this controversy is discussed: Jayanti, Vikram. 2003. Game Over: Kasparov and the Machine; a settlement has occurred later between Kasparov and IBM, in 2011. After this, Kasparov did not mention this controversy anymore.
[44] Jeff Bezos in: Pontin, Jason. 2007. "Artificial Intelligence, With Help From the Humans". New York Times, March 25, 2007. https://www.nytimes.com/2007/03/25/business/yourmoney/25Stream.html.
[45] Wray, Richard. 2009. "SpinVox answers BBC allegations over use of humans rather than machines", The Guardian, July 23, 2009, https://www.theguardian.com/business/2009/jul/23/spinvox-answer-back.
[46] Wagner, Kurt. 2015. "Facebook's Virtual Assistant 'M' Is Super Smart. It's Also Probably a Human", Vox, Online Magazine, Nov 3, 2015, https://www.vox.com/2015/11/3/11620286/facebooks-virtual-assistant-m-is-super-smart-its-also-probably-a-human.
[47] Huet, Ellen 2016. "The Humans Hiding Behind the Chatbots", Bloomberg, April 18, 2016, https://www.bloomberg.com/news/articles/2016-04-18/the-humans-hiding-behind-the-chatbots.
[48] Solon, Olivia. 2018. "Google's robot assistant now makes eerily lifelike phone calls for you", The Guardian, May 8, 2018, https://www.theguardian.com/technology/2018/may/08/google-duplex-assistant-phone-calls-robot-human.
[49] Day, Matt, Giles Turner and Natalia Drozdiak. 2019. "Thousands of Amazon Workers Listen to Alexa Users' Conversations", Time. April 11, 2019, https://time.com/5568815/amazon-workers-listen-to-alexa/.
[50] Perrigo, Billy. 2023. "OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic", Time, January 18, 2023, https://time.com/6247678/openai-chatgpt-kenya-workers/.
[51] Wayt, Theo. 2024. "Amazon's Grocery Stores to Drop Just Walk Out Checkout Tech", The Information, https://www.theinformation.com/articles/amazons-grocery-stores-to-drop-just-walk-out-checkout-tech.
[52] Metz, Cade, Jason Henry, Ben Laffin, Rebecca Lieberman and Yiwen Lu. 2024. "How Self-Driving Cars Get Help From Humans Hundreds of Miles Away", September 3, 2024, New York Times, https://www.nytimes.com/interactive/2024/09/03/technology/zoox-self-driving-cars-remote-control.html.
[53] Tan, Rebecca and Regine Cabato. 2023. "Behind the AI boom, an army of overseas workers in 'digital sweatshops'", Washington Post, August 28, 2023, https://www.washingtonpost.com/world/2023/08/28/scale-ai-remotasks-philippines-artificial-intelligence/.
[54] These platforms built on the foundation of early internet astronomy initiatives and scientific programs, such as the NASA Clickworker program, which ran from November 17, 2000 to January 3, 2002, with "as many as 101,000 clickworkers volunteering 14,000 work hours, 612,832 sessions, and 2,378,820 entries" (NASA Clickworkers. 2012. Original Pilot Study (2000–2001), https://www.nasaclickworkers.com/classic.php). Earlier still, the 1999 Search for Extra-Terrestrial Intelligence at home (SETI@home) project at the University of California, Berkeley pioneered the use of distributed calculation power across the Internet.
[55] Amazon MTurk's website claim is: "Access a global, on-demand, 24×7 workforce", https://www.mturk.com/; the data annotation company Appen titles "Make data your differentiator", https://www.appen.com/; Scale AI's business proposition is to "Power Generative AI With Your Data", https://scale.com/; Sama claims to "Build Generative AI / Computer Vision Models Faster", https://www.sama.com/; and Clickworker advertises: "Make your AI System smarter with high quality, multi-faceted AI Training Data", https://www.clickworker.com/. All websites last accessed on March 27, 2025.
[56] Muldoon, James, Mark Graham and Callum Cant. 2024. Feeding the Machine. The Hidden Human Labour Powering AI. Edinburgh, Canongate.
[57] Buttice, Claudio. 2024. "The Invisible Human Prisoners Helping to Train AI", Techopedia, January 17, 2024, https://www.techopedia.com/the-invisible-human-prisoners-helping-to-train-ai; Meaker, Morgan. 2023. "These Prisoners Are Training AI", Wired, September 11, 2023, https://www.wired.com/story/prisoners-training-ai-finland/.
[58] RYBN. 2019. AAI Chess, http://aaichess.rybn.org/index.html; See also Miyazaki, Shintaro. 2020. "Critical re-modeling of algorithm-driven intelligence as commonist media practice", NECSUS. European Journal of Media Studies, Vol. 9, No. 1, June 7, 2020, p. 237–257, https://doi.org/10.25969/mediarep/14309.
[59] Compare the RYBN.ORG rating on Turkerview, https://turkerview.com/requesters/A12D4EQ1HM2NVV.
[60] Kässi, Otto, Vili Lehdonvirta and Fabian Stephany. 2021. "How many online workers are there in the world? A data-driven assessment", Open Res Europe, 1:53, https://open-research-europe.ec.europa.eu/articles/1-53.
[61] See Graham, Mark, Fabian Ferrari (eds.). 2022. Digital Work in the Planetary Market, The MIT Press; Cardon, Dominique. Antonio A. Casilli. 2015. Qu'est-ce que le Digital Labor? Bry-sur-Marne, INA Éditions.
[62] French scientist Emmanuel Ferrand debunks this pseudo-science newspeak through a practice of what we could call 'Ted Talk Karaoke', where he first re-enacts a talk, before rewinding and debunking the lecture slide after slide. See, for example, http://tvbruits.org/spip.php?article2381.
[63] Casilli, Antonio. 2024. "Le patron se nomme toujours 'I.A.' lorsqu'il prépare un plan social", Mastodon post, https://post.lurk.org/@casilli@mamot.fr/112110546312033733.
[64] Hunger, Francis. 2023. "Unhype Artificial 'Intelligence'! A proposal to replace the deceiving terminology of AI". Pre-print. Zenodo, https://zenodo.org/records/7524493.
[65] RYBN. Human Perceptron. The entire workshop process is documented, open sourced and reproducible; see the documentation here: https://rybn.org/human_computers/humanperceptron.php; See also RYBN. 2021. "Human Perceptron", Interactions, Vol. 28, No. 1, p. 16–20, https://doi.org/10.1145/3442196.