Friday, 27 May 2011
These kinds of challenges to cinema- and human-sensation-based media theories relate to developments that have in part been accelerated by software-embedded media cultures, and to what Katherine Hayles (following Nigel Thrift’s lead) has formulated as the technological non-conscious: the fact that
"[h]uman cognition increasingly takes place within environments where human behaviour is entrained by intelligent machines through such everyday activities as cursor movement and scrolling, interacting with computerized voice trees, talking and text messaging on cell phones, and searching the Web to find whatever information is needed at the moment. As computation moves out of the desktop into the environment with embedded sensors, smart coatings on walls, fabrics, and appliances, and radio frequency ID (RFID) tags, the cognitive systems entraining human behaviour become even more pervasive, flexible and powerful in their effects on human conscious and non-conscious cognition." (Hayles 2008:27-28).
Thursday, 26 May 2011
"How a media archaeology can constitute itself against self-legitimation or self-reflexivity is crucial as if it is to circumvent the reinvention of unifying, progressive, or “anticipatory” history—even as it is challenged to constitute these very vague histories as an antidote to the gaping lapses in traditional historiography. Indeed it is this very problem that afflicts media archaeology. The mere rediscovery of the forgotten, the establishment of oddball paleontologies, of idiosyncratic genealogies, uncertain lineages, the excavation of antique technologies or images, the account of erratic technical developments, are, in themselves, insufficient to the building of a coherent discursive methodology.” (Druckrey 2006: ix).
Wednesday, 25 May 2011
Rather than being just part of discourses of artificial intelligence, many of these projects are more accurately understood under the heading of augmented intelligence, as Douglas Engelbart underlined. The question was hence not posthumanism in the sense of removing Man from the picture altogether, but a new ecosystem of sorts in which humans and machines were synchronized through various devices and input/output procedures. This is how Pias (2002: 92-98) sees this culture of interface development, in which the pedagogy of the non-human algorithmic world was to be fine-tuned to the possibilities and speeds of the human one. This involved a perspective on hardware, software, and wetware (human) systems, even if the last term is of more recent origin. Engelbart’s team was interested both in the gestural integration of computers and perception systems (new forms of computer displays) and in the cognitive handling and use of such systems, for example file systems. See Engelbart and English 1968. Also easily found on the web is the famous 1968 tele-presentation by Douglas Engelbart at the Fall Joint Computer Conference in San Francisco, where he introduces key elements of future computing interaction, including the mouse and shared collaborative online work platforms. See for example http://www.dougengelbart.org. What is significant, and what is underlined for example by Licklider (1969), is that the fundamentals of computer graphics lie not only in technical values of representation such as colour and detail, but in how the image is now approachable as an image: the potential for interaction. Licklider (1969: 619) writes: “In my assessment, however, communication is essentially a two-way process, and in my scale of values, interaction predominates over detail, gray scale, color, and even motion. 
In my judgment, the most important problem in computer graphics is that of establishing excellent interaction—excellent two-way man-computer communication—in a language that recognizes, not only points, lines, triangles, squares, circles, rings, plexes, and three-way associations, but also such ideas as force, flow, field, cause, effect, hierarchy, probability, transformation, and randomness.” The image is, by definition, a call for action and a relation to the perceiving, gesturing body. Any archaeology of contemporary augmented reality devices, for example on smartphones, should start with the considerations already expressed by these earlier researchers. Bardini (2000) offers a good elaboration of Engelbart’s work and of the early development of a variety of sensory-motor interface systems for computer interaction beyond the hand: the knee, the back and the head were all considered in various experiments (102, 112-114).
Tuesday, 24 May 2011
Siegfried Giedion’s Mechanization Takes Command (1948):
To illustrate the media archaeological relevance of Giedion’s seminal book, it is worthwhile to mention Paul DeMarinis’s 1990 performance Mechanization Takes Command. DeMarinis, himself at the forefront of media archaeological art, writes of the book in a way that highlights its key role as a transdisciplinary take on the history of modernity and technics, one that is at the same time much more than “just history”; his description summarizes much of the book, but also much of the inspiration that media archaeology has been drawing on: “The title’s active present tense conveys the once-fresh immediacy of the bygone mechanical age that spanned the 19th century, during which human invention overwhelmed and re-defined the human being. Contrasting the natural resources, availability of skilled labor, and historical proclivities of Europe and America, he examines, chapter by chapter, the effects of mechanization on the various realms of human endeavour. The lock and key, bread baking, slaughterhouses, furniture and the very notion of comfort, kitchen appliances, and bathing are among the subjects of Giedion’s scrutiny. Ever attentive to the impact of mechanization on the organic world, our lives and our bodies, Giedion’s critical perspective surpasses mere historical documentation, teleological theory, or scientistic adulation: he bares the roots of the many contradictions underlying our current global crises of life and humanity versus the corporate mechanism and the ruling taste. Mechanization Takes Command is a sourcebook of problems, solutions, and the solutions that became problems.” (DeMarinis 2010: 211).
Monday, 23 May 2011
An illustrative example is Baron von Schrenck-Notzing’s Materialisations-Phänomene (1914), which outlines, especially through a case study of the medium Eva C, key themes of the medium of the medium, in its direct relation to media technologies such as photography, as well as in its indirect relations to cinema through phenomena such as somnambulism and the psycho-physiological disorders analysed by Väliaho (2010) and Crary (2000). “Mediumship” becomes itself a practice of communication, and as such is presented by Schrenck-Notzing as a speculative future practice closely related to science and to apparatuses of recording and measurement: “So long as spiritism develops outside scientific laboratories, the traditional usages of the sittings must be put up with. It is only when science has seriously tackled the subject that one can attempt to reduce the phenomena to a system. Modern spiritism has the same relation to the future science of mediumistic processes as astrology had to astronomy, and alchemy to chemistry. We must, therefore, endeavour to get beyond the state of raw empiricism in which we stand at present, to increase the confidence of the mediums in science and its representatives, and use of physical instruments and apparatus. Better even than dynamometers, balances and metronomes, in Morselli’s opinion, is the photographic camera, since it gives positive proofs in the real sense of the word.” (Schrenck-Notzing 1923: 12). For a media archaeological reworking of the Schrenck-Notzing case, and of the medium in question, Eva C, see Zoe Beloff’s installation The Ideoplastic Materializations of Eva C (2004).
Another fascinating character in that chapter is Baron Carl du Prel, another 19th-century German mystic, whose ideas resonate with the emergence of the scientific worldview, offering both a curious way to understand human evolution (in relation to posthuman theories too) and its mediatic contexts:
Carl du Prel’s writings were part of a larger worldview that outlined a mystical account of evolution as continuously developing new transcendental spheres of apperception. What is important to note is that he tried to tie these mystical views together with sciences such as Darwinian evolutionary theory, as well as with physiological research, even while denying that he was a materialist. Instead, du Prel emphasized being interested in what seems to escape scientific methods and modes of observation. One can see how the psychophysiological theories of his age, such as Helmholtz’s, had influenced him in how he underlined that such “circuits of knowledge” were entirely tied both to “the number of its senses” and to the “strength of the stimuli on which its senses react” (Du Prel 1889: xxiv). He went on to argue that biological development and such phenomena as somnambulism are interlinked, and that the latter also had to do with the “displacement of the threshold of sensibility”, acting as a signal of what he called the “future biological form” (xxv). Hence, we can see such mystics as part of the larger redefinition of nature and the invisible world that suddenly had scientific backing through Maxwell and other key scientists in relation to “media” phenomena. New media and technologies, echoing in advance what Benjamin wrote of the photographic and cinematic as the scientific, surgeon-like cutting into non-human perceptions and depths, are for du Prel (1889: 8) something we would now call posthuman: “as there are parts of nature which remain invisible to us, being out of relation to our sense of sight—for instance, the microscopic world—so are there parts of nature not existing for us, owing to entire absence of relation to our organism.”
Saturday, 21 May 2011
In addition to arts contexts, the question of archiving and excavating digital material is crucial for post-World War II scientific cultures, and hence for histories of science and technology. For such cultures of innovation, in which for the first time scientific research was inherently articulated through computational media, the materials left for “future archaeologists” present practical problems. As flagged by Tim Lenoir, such “information archaeologies” point towards how a mapping of science is a mapping of the software and hardware platforms instrumental to the research. Of course, the development of so many aesthetic innovations in HCI and screen technologies rose from similar science-tech labs too. In Lenoir’s (2007: 365-366) words: “Historians will need to add new tools of information archaeology to their tool-kit in order to write the history of recent science and technology born digitally. Among the types of tools we need are, for instance, emulators of older systems, such as the IBM 360, and other machines, such as Burrough’s machines, Osborn’s, and others that do not have legacy systems maintained by large companies or successor firms. Even early-generation Silicon Graphics machines that appeared in the late 1980s and early 1990s are becoming scarce today. In order to research the programs that ran on these machines, we need to construct emulators that run on current generation machines. Beyond this we need to render the original programs in forms readable by current disk drives and other data-reading technologies. While genealogies of software and software languages are being constructed, more attention will have to be devoted to the history of software languages, and their implementations.”
A footnote from what will be chapter III on Imaginary Media:
Actually, it is not the people that are alive, but the fragments made possible by technical media. Voice is in itself an interesting special case due to its historical relation to death and the uncanny through technical recording of meticulous accuracy (“vocal vibrations of air waves”, as the above-mentioned Scientific American story explains), much awed at in the early reports from the 1870s onwards as well as in a theoretical sense. Mladen Dolar’s (2006) work on the uncanniness of the voice is masterful in outlining how the voice always has a possessive, excessive and haunting quality that questions the solidity of body boundaries. The voice seems to have a relation to the body, but we do not own our voices. With speech synthesis technologies, the voice becomes further detached from human organic bodies, taking on the additional uncanny quality of the dislocated voice addressed by the sound artist Paul DeMarinis (2010: 212): “The voice, once it is taken away from the body and reconstituted as a being without corporeal substance, without status or place, without viewpoint, without the fleshy vulnerability a bared throat offers, is re-incarnated as a new clarified being. Perhaps a voice of authority, or an oracle that can speak from beyond the grave. It gives us deliriously false confidence, this chest resonance without chest, these nasals without nose, plosives without lips or tongue, this singer of songs-without-throats.”
Friday, 20 May 2011
In this classic of software history and structured programming, “Go To Statement Considered Harmful”, Edsger W. Dijkstra (1968: 147; cf. Chun 2004) writes:
“[…] although the programmer’s activity ends when he has constructed a correct program, the process taking place under control of his program is the true subject matter of his activity, for it is this process that has to accomplish the desired effect; it is this process that in its dynamic behavior has to satisfy the desired specifications. Yet, once the program has been made, the ‘making’ of the corresponding process is delegated to the machine.”
Read through the media theoretical lenses of Kittler and materialist media theory, such an idea is not only part of the emergence of structured programming in the 1960s, but also a good crystallization of the bootstrapped autonomy of the process of language, software included.
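Dijkstra’s distinction between the static program text and the dynamic process delegated to the machine can be made concrete with a trivial sketch (my illustration, not Dijkstra’s own example): the few fixed lines of text below generate, when run, a sequence of machine states that the text itself only implicitly describes.

```python
def countdown(n):
    """Record the successive states of a simple loop process."""
    states = []
    # The program text is static; the process it sets in motion unfolds in time.
    while n > 0:
        states.append(n)  # this single textual line is executed n times
        n -= 1
    return states

# The same few lines of program text yield a process with five distinct states:
print(countdown(5))  # [5, 4, 3, 2, 1]
```

The point of the sketch is simply that the “making” of those five states is not performed by the programmer but, in Dijkstra’s phrase, delegated to the machine.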
Tuesday, 17 May 2011
Reading Nollet’s (1749) Recherches sur les causes particulières des phénomènes électriques gives an insight into the enthusiasm surrounding the Leyden jar at the time, and into Nollet’s continuous experiments concerning the capacities of bodies of different material characteristics, organic and non-organic, to convey, to communicate with each other through the medium of electricity. Later, a similar enthusiasm surrounding Tesla’s performances in his laboratory at Colorado Springs was reported in the wider public press as well as in specialist publications. The Tesla coil was a spectacular demonstration of the powers of electricity, and of the new worlds of a different materialism emerging with that worldview, not without implications for how we approach and think about media technology:
“As early as 1890 this savant had produced electrical disturbances in his laboratory at Colorado Springs equal to the lightning produced by Nature. Although a number of years have elapsed since these experiments were conducted, not a single scientist or engineer has been able to produce such awe-inspiring, electrical performance as did Dr. Tesla. It is true that he is far ahead of his time in many of his inventions, yet he has ably demonstrated that it is possible to imitate some of Nature’s secret forces, but he was performing certain experiments on the problem of radio transmission of electrical energy through space.”
(Samuel Cohen, “Lightning Made to Order”, The Electrical Experimenter, New York, November 1916, 474. Quoted from Tesla 1961: 93-94).
More garbage coming out of Media Archaeology and Digital Culture: yet another footnote that hit the wrong note, had a date with the guillotine, and finished its days. This one on Brewster’s kaleidoscope.
Sir David Brewster’s ideas concerning the ontological and practical implications of his device are intriguing, hinting both towards a history of genetic algorithms and towards the birth of the creative industries. He outlines the kaleidoscope as a machine for an infinity of forms, where, to paraphrase Brewster (1858: 132), even a single line placed as the object of the device is able to vary into “an infinite number of figures from this single line.” The patterns emerging out of objects, lines, and mathematical simplicity remind one of the artificial-life patterning that took hold of the aesthetics of 1990s digital culture, but they also point towards how Brewster imagined the device revolutionizing design and the creative process. Indeed, in the midst of the industrial revolution in England, it was not only the manufacturing of “simple” objects such as pins that could be automated, but visual culture too: “When we consider, that in this busy island thousands of individuals are wholly occupied with the composition of symmetrical designs, and that there is scarcely any profession into which these designs do not enter as a necessary part, so as to employ a portion of the time of every artist, we shall not hesitate in admitting, that an instrument must have no small degree of utility which abridges the labour of so many individuals. If we reflect further on the nature of the designs which are thus composed, and on the methods which must be employed in their composition, the Kaleidoscope will assume the character of the highest class of machinery, which improves at the same time that it abridges the exertions of individuals. There are few machines, indeed, which rise higher above the operations of human skill. It will create, in a single hour, what a thousand artists could not invent in the course of a year; and while it works with such unexampled rapidity, it works also with a corresponding beauty and precision.” (Ibid.: 136).
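The generative logic Brewster describes can be sketched in a few lines of code (a hypothetical modern illustration, not anything from Brewster himself): two mirrors inclined at an angle produce alternately rotated and reflected copies of whatever object lies between them, so a single line segment multiplies into a symmetric figure, and varying the segment or the mirror angle yields ever new designs.

```python
import math

def kaleidoscope(segment, n):
    """Replicate one line segment under n-fold kaleidoscopic symmetry.

    Two mirrors at an angle of pi/n generate the dihedral symmetry group:
    n rotated copies plus n mirror-reflected copies of the original segment.
    """
    copies = []
    for k in range(n):
        a = 2 * math.pi * k / n
        cos_a, sin_a = math.cos(a), math.sin(a)
        # rotated copy of the segment
        copies.append([(x * cos_a - y * sin_a, x * sin_a + y * cos_a)
                       for x, y in segment])
        # reflected copy: flip across the x-axis, then rotate
        copies.append([(x * cos_a + y * sin_a, x * sin_a - y * cos_a)
                       for x, y in segment])
    return copies

# a single arbitrary line segment placed between the mirrors
figure = kaleidoscope([(0.2, 0.1), (0.9, 0.4)], n=6)
print(len(figure))  # 12: one segment becomes a twelve-fold symmetric figure
```

The point is merely structural: the “infinite number of figures” arises not from stored images but from a simple symmetry operation applied to an arbitrary input, which is what invites the comparison to generative and artificial-life aesthetics.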
As an example, see Sutherland (1965) for early speculation on haptic and embodied display design in computer graphics environments, which also mentions displays based on smell and taste. Of course, it has to be noted that even the notion of “touch” is itself complex and does not automatically translate as haptic, but is divided into more than one system of sensation. To quote from a haptic interface design perspective: “As described by Klatzky and Lederman [Klatzky and Lederman 2003], touch is one of the main avenues of sensation, and it can be divided into cutaneous, kinesthetic, and haptic systems, based on the underlying neural inputs. The cutaneous system employs receptors embedded in the skin, while the kinesthetic system employs receptors located in muscles, tendons, and joints. The haptic sensory system employs both cutaneous and kinesthetic receptors, but it differs in the sense that it is associated with an active procedure. Touch becomes active when the sensory inputs are combined with controlled body motion. For example, cutaneous touch becomes active when we explore a surface or grasp an object, while kinesthetic touch becomes active when we manipulate an object and touch other objects with it.” (Otaduy and Lin 2005: 1)
Tuesday, 10 May 2011
And indeed, we can find great ideas in Flusser. His essay on the typewriter (“Why do typewriters go ‘click’?”) is one of my favourites, and similarly, in this text on “Text and Image”, Flusser’s thoughts amount to a medium-specific understanding of why technical media demand a different attitude from one focused on narrative and story-telling. In short, Flusser is saying that it is almost an ethical demand that we do not see technical media such as TV as story-telling...
Hence, his way of pointing towards a post-historical attitude is curious in relation to media archaeology as a possibly post-historical, mediatic way of understanding for instance perception, consciousness and time.
"The new post-historical existential climate which characterises the technoimage culture articulates itself in many ways, for instance in structuralism, cybernetics, scenario-based politology, or trans-ideologisation. It may be concretely observed in the programs impressed into the memories of computers, intelligent tools, and miniprocessors. However, it is as yet very far from having become entirely conscious. We live, all of us, as yet on the magico-mythical and on the historical level. We decipher, all of us, TV programmes as if they were traditional images or as if they were linear texts telling some story. Which means that we find ourselves in the same situation that illiterate Israelites found themselves in faced by the Sinai stone tablets. Instead of deciphering these programmes critically, we adore them. It is difficult for us to live and think on the level on which techno-images are made. This is why they tend to programme us, just as texts programmed the masses during their illiterate situation. Unless we learn how to decipher techno-images, unless we may achieve what may be called "conscious techno-imagination" , we are bound to become dominated by the apparatus-operator complex. Which seems to function objectively, but which in reality manipulates us from the subjective, although inhuman, point of view of the apparatus." (Flusser, in Variantology 4. On Deep Time Relations of Arts, Sciences and technologies in the Arabic-Islamic World and Beyond, edited by Siegfried Zielinski and Echard Fürlus. Walther König, Köln 2010: 115).¨
You can find echoes of such a relation to media competency in Zielinski too (Deep Time of the Media, for instance the final chapter), and its relation to understanding digital image cultures is intriguing. The links to media archaeology are multiple, and we will probably soon see more good work that explicates these links in more detail.