[Provisional text awaiting final editing]
Harry Halpin [University of Edinburgh]
Abstract:
Built upon the foundations of the Internet, the World
Wide Web has been the most significant technological development within recent history,
sparking a reformulation of both capitalism and resistance. The Web is defined
as a “universal information space” by its inventor Tim Berners-Lee of the W3C,
reflecting the universal scope of politics and struggle today. Yet while its
effects have been scrutinized, the Web itself has received little inquiry. The
composition of the governing networks that control the infrastructure of the
Web has only recently been engaged with by activists, as in the ICANN affair and
WSIS protests. The Web is governed by a network that is composed of an
“immaterial aristocracy” of radical democratic “hackers,” corporations such as
Google and Microsoft, and non-governmental organizations such as the Internet
Engineering Task Force (IETF) and World Wide Web Consortium (W3C). These
networks continually negotiate between the needs of global capitalism and the
desires of immaterial labour on the Web. We explore the evolution of these
governing networks, including the recent hegemony of Google and Tim
Berners-Lee’s vision of the Semantic Web (a term first used by Foucault).
An archeology of the Web
Across the world at
this very moment, a countless multitude of people are sitting, staring into the
ether. In the whole history of our rather active species, this silence and stillness
stands unique. Their bodies are checked in at the front door, immobilized
except for the tiniest of movements – the lightning-speed tapping of fingers.
The steady hum of the clicking of buttons fills the air, reminiscent of the
strange movements of gears in some industrial factory. A factory of angels
perhaps (Yurick, 1985). A translucent light is cast upon their eyes by their
screens, and their sight is fixed upon a world unknown to outside observers.
What would an alien anthropologist, some cosmic exile in the style of McDowell,
make of this state of affairs (McDowell, 1994)? Would it decipher their
language in the frequency of their keystrokes, the words their keystrokes
spelled in this silent tongue? Would it observe a grammar composed in the ghostly
flickers of the screen, and deduce the importance of various buttons that exist
only inside the screen itself? More importantly, could this alien understand
their intentions, and offer a theory that structured the behavior of immobile
beings, who seem crippled in comparison to more mobile species?
To offer a theory of
the Web is beyond the scope of this paper. Instead of trying to decipher the
circle of cause and effect between the Web and its users, we instead will offer
something more humble: a hidden history of the Web. Even this is impossible,
for unlike the annals of revolutionary France or the advent of the idea of
evolution, it is neither well-documented nor easily pinned upon one individual
or chronicled in one or a series of books. A true history of the Web, an
application built on top of the Internet, would be possible but would require a
lifetime of diligent work. Even the archives of the old list-servs, the
photocopied papers, and the other historical documents needed are probably at
this point difficult if not impossible to find due to unforeseen factors such
as the decay of hardware and the corruption of software. I will instead present
an archeology of the Web, the digging up and exposing to the light various
connections in an attempt to explain its origins, and in particular the origins
of those who govern it. “The development of the Internet arose, as we have
seen, from a certain bizarre conjunction between publicly funded institutions
– the original military research ARPANet – and the autonomous activity of
hackers, techno-hobbyists, and computer dissidents,” and we explore this story
in detail (Dyer-Witheford, 2000).
Those who govern the
Web and its underpinnings I will dub the “immaterial aristocracy.” The word
“aristocracy” is chosen carefully. The concept of aristocracy usually reflects
the control of capital: machines, money, power (Hardt and Negri, 2000). One
immediately thinks of CEOs, stock brokers, attorneys, bureaucrats, doctors.
This is the material aristocracy, whose power comes from material machines, the
ability to purchase material goods, and to command material armies. Theirs is
the ability to move bits of matter about the surface of the earth. This
conception of aristocracy dates back to the feudal aristocracy, where within
the borders of nascent nation-states and decaying empires, those with power and
money could field their own private armies, extort riches, and effectively be
semi-sovereign rulers of their own fiefdoms. If you revolted, you would suffer
the most material of fates, death. In contrast, the immaterial aristocracy
field no armies, and are generally well-off but not wealthy. They possess no
specialized machinery, and for the most part command no employees. They are
nonetheless aristocrats because they have powerful ideas, ideas that can change
the world. They are aristocrats in the ancient Greek sense, aristocrats who are
devout before their task, and are known for their sense of justice and
humility. They forsake immoderate wealth since it would undermine their
fanatical devotion to the perfection of their idea. Previously, one would have
usually considered these people to be the artists, the literati, the
professors, and bohemians. Yet there is a singular difference between the
immaterial intellectuals and the immaterial aristocracy: the immaterial
aristocracy have a track-record of turning their immaterial ideas into
bioproductive power. Particularly given the advent of mass intellectuality, the
ability of anyone to join the realm of ideas makes ideas themselves no longer a
mark of distinction. As the production of linguistic and symbolic meaning is
now the hegemonic mode of production, the crucial difference between our
immaterial aristocrat and the ordinary programmer on the street is that the
immaterial aristocracy create the ideas behind the technologies that others
implement and use.
It would also be
ridiculous to conflate immaterial aristocrats such as fashion designers with the
ascetic hackers who helped create the Web. While both activities require work,
their work is in distinctly different spheres; the fashion designer is the
sovereign of the aesthetic affect, while the hacker is the sovereign over the
ideas that are given flesh in computer technology. It would also be a mistake
to confuse these aristocrats with the aristocratic public intellectual with
best-selling books. Few people, if any, know the names of or ideas of the
immaterial aristocrats of the Web. It would be inaccurate to abstractly
characterize these aristocrats as only hackers, software designers, or even
technical people. Some, such as Lawrence Lessig of the Creative Commons, are
lawyers, while others like Ted Nelson are professional “visionaries.” The
difference is their sphere of sovereignty. We are interested in immaterial
aristocrats who hold domain not over the analogue world, but the digital world.
The digital world is the realm defined by the use of computers. The immaterial
aristocrats are the architects and technicians of this world. They define the
ideas – from artificial intelligence to hypertext – that become so ubiquitous
as to no longer possess an author. These ideas are defined as open standards
(in alliance with the material aristocrats and often in consultation with
masses of users) to be held in a perpetual common by the digital world. Without
them the digital world would fragment and so lose its universalizing power.
These digital ideas let capitalism flourish by creating decentralized spaces
for the flow of knowledge and currency. Yet some posit that these very technologies
have sown the seeds for the downfall of capital by making real an
ever-expanding digital commons. Is the eschatology of the Internet warranted?
While many have produced inquiries into these new forms of capitalism, we
present here an inquiry into the technological infrastructure that sustains
this new form of “cognitive” capitalism itself.
The first immaterial aristocrats of Artificial
Intelligence
The original immaterial aristocracy was an aristocracy of
professors, directly employed by the military in order to create machines that
could accurately and quickly make the calculations needed to compute ballistic
firing tables. Some, such as Turing, were used and abused by their
masters. Others like von Neumann (the creator of the computer architecture used
to this day), openly embraced their generals, and von Neumann himself went so
far as to calculate the exact co-ordinates of the explosions at Nagasaki and
Hiroshima to maximize their damage. Let us note that history has a sense of
irony, for von Neumann died of cancer due to his witnessing the nuclear
explosions at Bikini Atoll. Most of these professors (with the exception of
Turing) and the military saw computers as primarily giant calculators, whose subject
matter was restricted to Platonic numbers.
As von
Neumann’s gambit of mutually assured destruction became policy, the crisis
faced by Cold War capital became not one of death but one of life. Until
recently, both capitalists and anti-capitalists have been plagued by the same
dream: the replacement of human labor by machines. The anti-capitalists thought
this would bring the final crisis to capitalism: “Marx advances the thesis that
the systematic application of technico-scientific knowledge to production would
achieve the outcome of freeing the worker from the factory, thus making the
measurement of wealth in terms of human labor time completely impossible”
(Piperno, 1995). The capitalist needed compliant machines to replace
antagonistic human labour, for “as Marx says, machines rush to where there are
strikes” (Negri, 1982), and so the possibility of human labor completely replaced
by machines fascinated capitalists. Capitalists viewed mechanization as a
way to increase productivity and profits, while anti-capitalists believed this
would liberate human labour. Regardless of their political differences, both
their goals were ultimately the same.
Within the
United States in particular, productivity had to be increased while the
antagonism of labor had to be reduced. From the perspective of capital, as labor
itself began to become intellectual and cybernetic, dumb machines were not
enough. Indeed, the economic coercion of forcing people to do repetitive tasks
was getting expensive, for if wages were kept down, the mass worker would
threaten revolt – yet without the mass worker, the machines could not continue
production. One needed machines that could function without workers, that could
absorb and replicate their labour. In one famous passage from the Grundrisse,
Marx predicts, “What was the living worker's activity becomes the activity of
the machine. Thus the appropriation of labour by capital confronts the worker
in a coarsely sensuous form; capital absorbs labour into itself – as though its
body were by love possessed” (Marx, 1973). The military had more than
productivity on their minds, for if the president and his men were destroyed in
a nuclear war, how would retaliation happen from beyond the grave? How could
the military fight back, delivering death-for-death the instant after the bomb
went off? The answer was to put an inhuman thinking machine in control of the
nuclear arsenal.
One group of scientists realized that
these computers could accomplish these goals if they were not confined to
ordinary numbers. Newell and Simon demonstrated convincingly that computers
could manipulate arbitrary symbols, a revolution at the time (Newell and Simon,
1976). In these heady days when the insights of Chomsky about the fundamentally
symbolic nature of human thought were still freshly triumphing over
behaviorism, such ideas were revolutionary. When Newell and Simon’s
program “The Logic Theorist” managed to prove 38 of the theorems of the
monumental Principia Mathematica of Russell and Whitehead, Russell responded,
“I am delighted to know that Principia Mathematica can now be done by
machinery. I wish Whitehead and I had known about this possibility before we
both wasted 10 years doing it by hand” (Lewis, 2001). Heady with
accomplishment, Newell and Simon soon predicted that computers would beat
grandmasters at chess, and more importantly, run mechanized factories. Herbert
Simon was, besides the inventor of symbolic computing, the inventor of
organizational decision-making. In his “Architecture of Complexity,” Simon
envisions that both the human mind and human organizations can be explained by the
advent of ever more complicated hierarchies (Simon, 1969). Again, history knows
no bounds of irony, for he died in 2001 just in time to see the rise of
decentralization in computers.
The failure of
Artificial Intelligence
Excited by the
prospects of using symbolic computing to mechanize all of knowledge, with the
subtext of mechanizing all of production, a group of professors held the
Dartmouth conference to announce that their “study is to proceed on the basis
of the conjecture that every aspect of learning or any other feature of
intelligence can in principle be so precisely described that a machine can be
made to simulate it” (McCarthy et al., 1955). They dubbed this vision
“artificial intelligence.” McCarthy also had a plan: to formulate all of
intelligence within the framework of deductive logic. The research money began
flowing in from the US military. From a philosophical standpoint, artificial
intelligence as originally conceived was the height of the Enlightenment
project: the definition of all of intelligence as pure rationality, and
mechanized as pure logic (Dreyfus, 1972). Understanding human thought required
investigation into symbolic knowledge representation and its constraints in
space-time. Planning algorithms were developed, AI programs made for everything
from vision to reading, and progress seemed unlimited. The search for
artificial intelligence internationalized, and soon Americans were even terrified
that they would be economically overcome by Japanese industry controlled by
“Fifth Generation” AI computers. Companies were founded to commercialize such
expert systems (systems that formalize human knowledge and transfer it to
machines), and the entire artificial intelligence industry appeared to take
off.
All was not
well for the artificial intelligentsia, as predicted by the Heideggerian
philosopher Hubert Dreyfus, employed by RAND to determine if AI was truly a
good investment (Dreyfus, 1973). While AI made great strides forward in limited
and highly formal domains such as mathematical theorem-proving and fairly
immobile factory robotics, it failed miserably on simple problems, such as
getting a machine to walk across a cluttered floor. AI failed because it could
not unify the concrete intuitive and kinesthetic skills with its abstract
formal rationality, or in other words, because it was a mind without a body
(Dreyfus, 1973). Results at machine translation came back as failures, and
knowledge representation languages became so powerful they could not reliably
draw inferences. The military withdrew funding, and the AI industry collapsed
in a strange forerunner of the “dot com” bust known as “AI Winter.” AI
reorganized itself slowly, and a new breed of professors and graduate students
began focusing the next few decades on creating robots with bodies and
simulating emotion. The era of digital sovereignty of the first generation of
immaterial aristocrats, the artificial intelligence researchers who were
primarily based in the University, ended not with the bang they had hoped for,
but with a whimper. As the death knell of AI, Brian Cantwell Smith proclaimed
that researchers should “forget intelligence completely, in other words; take
the project as one of constructing the world's largest hypertext system, with
CYC functioning as a radically improved (and active) counterpart for the Dewey
decimal system. Such a system might facilitate what numerous projects are
struggling to implement: reliable, content-based searching and indexing schemes
for massive textual databases” (Smith, 1991). Although AI was distracted by
creating robots and simulating emotion, this is exactly what happened.
The foundations of the Internet
A new social subject
was created in the artificial intelligence laboratories of MIT. While their
professors, who were by nature philosophers and mathematicians, tried to
determine if mechanization of human intelligence was possible, their students
fell in love with the computers themselves. AI professor and social critic
Weizenbaum describes this new social subject, the “hacker,” in pure disdain,
“Their rumpled clothes, their unwashed and unshaven faces, and their
uncombed hair all testify that they are oblivious to their bodies and to the world
in which they move,” for they exist “only through and for the computers”
(Weizenbaum, 1976). Arch-hacker Richard Stallman, a drop-out graduate student
of the artificial intelligentsia who would go on to invent free software and
copyleft, explains the root of this new social subject not as a rejection of
humanity, but as a community and practice of joy: “It was not at all uncommon to
find people falling asleep at the lab, again because of their enthusiasm; you
stay up as long as you possibly can hacking, because you just don’t want to
stop” (Williams, 2002). In hindsight, it was the artificial intelligentsia who
really were trying to define in their computers an intelligence with a body or
emotions, while hackers accepted the unique potential of computers themselves.
As the project of AI ground to a halt, it is a world-historical surprise that
it would be the hackers who would make up for the failure of AI by
maintaining the Internet and creating the Web. Note that we distinguish between
the Net and the Web. The Net is the Internet, a set of protocols like TCP/IP,
while the Web is the World Wide Web, technically based on a “universal”
identification scheme for finding resources, although more
popularly identified with hypertext and browsers. Note that the Web is
generally considered to be an application of the Internet, as there exist many
applications such as Telnet and File-Transfer Protocol (FTP) that do not
necessarily use the Web.
While many academics
involved in AI were attempting to create an individual in silicon, in
Russia there was a veritable rebirth of interest in cybernetics, the universal
theory of feedback and control. Unlike AI researchers, who insisted on replacing
humanity, cyberneticists were primarily interested in using cycles of input and
feedback in order to create an ever tighter coupling between humanity and
machines, but they were discredited by many in logicist AI, such as Minsky. Led by Axel Berg,
a new generation of Soviet cyberneticists came to believe that only through computer-mediated
feedback cycles between workers and producers could the Soviet economy go
beyond the inherent problems of the command economy of Stalin (Lange, 1967).
Also worrisome to the military, the founder of cybernetics, Norbert Wiener,
increasingly opposed the Cold War and the military funding of science, and
became a hero in Russia. In a panic over the possibility of losing the
“cybernetics” war, as revealed by John Ford's unclassified “Soviet Cybernetics
and International Development” report, the U.S. military poured funding into
the creation of an American alternative to Soviet cybernetics. In 1962
Licklider of MIT proposed the creation of a “Galactic Network” of machines, and
with his obtaining leadership of DARPA (Defense Advanced Research Projects
Agency), he proceeded to fund this project while the Soviets reverted back to a
more centralized command economy (Barbrook, in preparation).
Before Licklider's
idea of the “Galactic Network,” networks were assumed to be static and closed
systems. One either communicated with a network or not. However, under the
aegis of DARPA far-seeing groups of scientists determined that there could be
an “open architecture networking” where a meta-level “Internetworking
Architecture” would allow diverse networks to connect to each other, so that
“they required that one be used as a component of the other, rather than acting
as a peer of the other in offering end-to-end service” (Leiner et al., 2003).
This concept became, foreshadowing later social movements, the “Network of
Networks” or the “Internet.” While the Internet architecture provided the
motivating concepts, it did not at the outset define a scalable transfer
protocol. Robert Kahn and Vint Cerf devised a protocol that took into account,
among others, four key factors (Leiner et al., 2003):
1. Each distinct network would
have to stand on its own and no internal changes could be required to any such
network to connect it to the Internet.
2. Communications would be on a
best effort basis. If a packet didn’t make it to the final destination, it
would shortly be retransmitted from the source.
3. Black boxes would be used to
connect the networks; these would later be called gateways and routers. There would
be no information retained by the gateways about the individual flows of
packets passing through them, thereby keeping them simple and avoiding
complicated adaptation and recovery from various failure modes.
4. There would be no global
control at the operations level.
The solution to this problem was TCP/IP.
Data is subdivided into “packets” that are all treated independently by the
network. Any data sent over the Internet is divided into relatively equal size
packets by TCP (Transmission Control Protocol), which then sends the packets
over the network using IP (Internet Protocol). Each computer has an Internet
Number, a four-byte destination address, such as 152.2.210.122, and IP routes
the packets through various black boxes like gateways and routers that do not try
to reconstruct the original data from the packets. At the recipient end, TCP
collects the incoming packets and then reconstructs the data.
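The split-route-reassemble cycle described above can be sketched in a few lines of Python. This is a toy model, not the real protocol: actual TCP segments carry headers, checksums, and retransmission logic, all omitted here, and the function names are illustrative.

```python
import random

PACKET_SIZE = 8  # bytes per toy packet; real IP packets are far larger

def tcp_split(data: bytes) -> list[tuple[int, bytes]]:
    """Split the data into sequence-numbered packets, as TCP does before handing them to IP."""
    return [(seq, data[start:start + PACKET_SIZE])
            for seq, start in enumerate(range(0, len(data), PACKET_SIZE))]

def ip_route(packets: list[tuple[int, bytes]]) -> list[tuple[int, bytes]]:
    """IP treats each packet independently, so packets may arrive in any order."""
    delivered = packets[:]
    random.shuffle(delivered)  # model independent routes by shuffling arrival order
    return delivered

def tcp_reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """The recipient's TCP reorders packets by sequence number and rebuilds the data."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"Any data sent over the Internet is divided into packets."
assert tcp_reassemble(ip_route(tcp_split(message))) == message
```

The final assertion shows the key property: however the network scrambles the packets in transit, the sequence numbers let the receiver recover the original data intact.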
While the system is decentralized in
principle, it is in reality a hybrid network with centralized elements, for the
key assignment of IP addresses to individual machines comes from an authority.
However, in general the central authority (formerly the volunteer Jon Postel
and currently ICANN) is restricted to a fairly minimal amount of control, such
as the top-level bytes, while other organizations can assign the lower-level
bytes in a decentralized manner. By defining the minimal architecture needed
for people to share information in a decentralized fashion and a way for
information to be sent through many possible routes, the immaterial aristocrats
of the Internet exemplified the perennial architecture of the network:
decentralization, redundancy, and possibility.
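The hybrid allocation of an address can be made concrete with a short Python sketch. The helper name and the two-byte network/host split are illustrative assumptions; under the historical classful scheme the boundary between centrally assigned and locally assigned bytes varied with the address class.

```python
def address_bytes(addr: str) -> list[int]:
    """Parse a dotted-quad IPv4 address into its four byte values."""
    octets = [int(part) for part in addr.split(".")]
    if len(octets) != 4 or not all(0 <= o <= 255 for o in octets):
        raise ValueError(f"not a valid IPv4 address: {addr}")
    return octets

# The central authority delegates the top-level portion of the address space;
# the receiving organization then assigns the remaining bytes internally.
octets = address_bytes("152.2.210.122")
network, host = octets[:2], octets[2:]  # e.g. a two-byte network prefix
```

Only the network prefix needs to be coordinated centrally; everything below it can be assigned in a decentralized manner, which is what keeps the central authority's role minimal.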
Immaterial aristocracy: the Internet Engineering Task
Force
Although the
Internet was started by DARPA as a military research project, it soon spread
beyond the rarefied confines of the university. Once news of this “Universal
Network” arrived, universities, corporations, and even governments began to
“plug-in” voluntarily. The Internet became defined by voluntary adherence to
open protocols and procedures defined by Internet standards. However, the
co-ordination of such world-spanning Internet standards soon became a social
task that DARPA itself was less and less capable and willing to administer. As
more and more nodes joined the Internet, the military-industrial research
complex seemed less willing to fund and research it, perhaps realizing that it
was slowly spinning out of their control and with the threat of Soviet
cybernetics declining. The Internet began with four nodes in 1969, yet by the end
of the 1970s it had spread across continents. The Internet Control and Configuration
Board (ICCB) was formed in 1979 to administer and control the standards of the
Internet, and its first board consisted mainly of scientists and engineers
involved in the original Internet development, and the chairman of the ICCB was
called “the Internet Architect.” In a development typical of the immaterial
aristocracy, the loyalty of these engineers seemed to be more to the Internet
itself than anything else, and in 1984 the U.S. Military split its unclassified
military network, MILNET, from the Internet. No longer under the aegis purely
of DARPA and given a large amount of funding by the NSF (U.S. National Science
Foundation), the Internet began a political process of self-organization to
establish a degree of autonomous digital sovereignty. An Internet Architecture
Board was devised to begin oversight in an era of decreasing military and
research funding. Many academics and researchers then joined the Internet
Research Steering Group (IRSG) to develop a long-term vision of the Internet.
Perhaps with the academics distracted, creating standards and maintaining the
infrastructure fell into the hands of the Internet Engineering Task Force, a
group squarely in the hands of the people who had committed the most time
and labor to making the Internet: the hackers. Unlike their predecessors, the
hackers often did not have postgraduate degrees in computer science, but
instead had an intense and lifelong joyous commitment to the idea of a
universal computer network.
The organization of
the IETF embodied the anarchic spirit of the hackers. It was an ad-hoc and
informal body with no board of directors, although it soon began electing the
members of the IAB, who came to perform more and more of a symbolic role, and
the sovereignty of the Internet moved to the IETF. The IETF credits as its
main organizational guiding principle the IETF Credo, attributed to MIT
Professor and the first Chair of the IAB David Clark: “We reject kings,
presidents, and voting. We believe in rough consensus and running code.” True
to its Credo, the IETF operates by a radical, absolute democratic process. There
is no official or even unofficial membership list, and individuals are not
paid to participate. Even if they belong to an organization, they must
participate as individuals, and only participate voluntarily. Anyone may
join, and “joining” is defined only in terms of activity and contribution.
Decisions do not have to be ratified by formal consensus or majority
voting; instead, voting takes only a rough measure of agreement on an
idea. IETF members prefer to judge an idea by actual implementation
(running code), and arguments are decided by the effectiveness of practice
rather than ideological debate.
The structure of the IETF is defined by
areas such as “Multimedia” and “Security” and then subdivided into Working
Groups on obscure (yet important) subjects such as “atompub” and “smime.” In
these Working Groups most of the work of hashing out RFCs takes place. Groups
have elected Chairs whose task is to keep the group on topic. People are
respected for working for the common good of the Internet, and any work that is
suspected of personal or corporate interests is strongly frowned upon. The
etiquette is self-effacing, and egoism or cults of personality are also openly
discouraged. Even within the always technical yet sometimes partisan debates,
there are no formalities, and everyone from professors to teenagers is
addressed by their first name. Arguments often feature high levels of stubborn
and strenuous intellectual debate, yet a co-operative and practical focus
usually leads to arguments being resolved.
This kind
of informal organization is likely to develop informal hierarchies, and these
informal hierarchies are regarded as beneficial, since they are composed
usually of the brightest and most dedicated who volunteer the most of their
time for the Net: “A weekend is when you get up, put on comfortable clothes,
and go into work to do your Steering Group work.” If the majority of
participants in IETF feel that these informal hierarchies are getting in the
way of practical work, then the chairs of Working Groups and other informal
bureaucrats are removed by a voting process, as happened once en masse in 1992.
The IETF is also mainly a digital organization, as almost all communication is
handled by e-mail, although the IETF holds week-long plenary sessions three
times a year that attract over a thousand participants, with anyone welcome.
Even at these face-to-face gatherings, most of the truly ground-breaking
discussions seem to happen in even more informal “Birds of a Feather”
discussions that take place over refreshments. The most important products of
these list-serv discussions and meetings are IETF RFCs (“Requests for Comments”),
which define Internet standards such as URIs (RFC 3986) and HTTP (RFC 1945).
Indeed, the IETF exemplifies cognitive capitalism and immaterial labour, as
their labour tends to be part and parcel of their intellectual and recreational
lives, and their product is the result of a social process. The social aspect
of the IETF is intertwined with the technological aspects, as both the rules of
“netiquette” for ordinary users (the Responsible Use of the Network memos) and
their own IETF process guidelines, such as “RFC 1603: IETF Working Group
Guidelines and Procedures,” are presented as any other technical standard.
The anarchist on the Internet
An anarchic body
such as the IETF by nature attracted technically adept political activists, of
whom the most celebrated was probably Edinburgh-based anarchist Ian Heavens. Ian
Heavens was a singular revolutionary force within the Scottish anarchist
movement, and a key organizer of the Scottish Anarchist Federation. He also was
well-travelled, and he combined his love of Latin samba and Scots punk to
create the hybrid punk samba band “Bloco Vomit.” The band did several tours,
even to Brazil, and continually raised money through benefits to keep the
Autonomous Centre of Edinburgh afloat, a formerly squatted social centre in
Edinburgh. The Autonomous Centre of Edinburgh served as the operating base of
the Edinburgh Claimants, one of the key groups organizing the
unemployed for better living conditions, and part of the group “Community
Resistance” that sparked the anti-poll tax movement in Britain through its
direct action-based tactics.
Just as he
selflessly served the autonomous political movement in Edinburgh, he selflessly
devoted his time to the Internet. He participated regularly in the IETF, and
became famous for finding “bugs” (errors) in TCP/IP that had not been noticed
for years, co-authoring IETF RFC 2525: “Known TCP Implementation Problems.”
He was quickly introduced to e-mail, and began the first digital archive of
anarchist texts at Spunk.org, still maintained at http://www.spunk.org.
He traveled around Britain, explaining to activist groups how to use the
then-new Internet, and how the Zapatistas had effectively used the Internet to
communicate solidarity with their struggle. Yet in the fall-out of the infamous
“Green Anarchist and ALF” case, the tabloid press soon discovered Ian Heavens.
They printed an article about him and Spunk.org called “Anarchists use Information
Highway for Subversion” (Adrian, 1995). Besides the outlandish claims linking
Spunk.org to groups like Direct Action in France, the article then proceeded to
proclaim Ian Heavens the mastermind behind the use of computers by the
anarchist movement. While the article noted that Ian “disapproved of violence,”
in a case curiously similar to the indictment of Sherman Austin, the article
noted his website included files on “how to overthrow the government” by
“robbing banks, disabling police vehicles, stealing documents and inciting
readers to arm themselves.” Suddenly at the center of a media controversy, Ian
was arrested for “encouraging violence by computer” and he nearly lost his job
(Adrian, 1995). He was prevented from using the Internet for further anarchist
activity, and had to cut off all ties with the IETF. Having the joy of his
work, both political and technical, taken from him, and facing mounting media
pressure on his family – just before the draft RFC documenting his famous TCP/IP
bug findings was about to be published – Ian took his own life. The Sunday Times
article quotes Simon Hill, editor of Computing magazine: “We have been amazed at
the level of organization of these extremist groups who have appeared on the
Internet in a short amount of time” (Adrian, 1995). Indeed, little did they
suspect that the organizational structure of the autonomous movement, due to
its strategic and historic focus on decentralization, would within a few short
years use the Internet to organize the anti-globalization movements and wreck
the neoliberal “free trade” program. Furthermore, little did they suspect that
the nature and social organization of the Internet was itself anarchy, and
anarchists were some of its main pioneers.
The Universal Information Space: the World Wide Web
Around the time of
Ian Heavens’s death, another IETF participant, Tim Berners-Lee, was seeing the
fruition of his dream of a “universal information space,” which he dubbed the
“World Wide Web” (Berners-Lee, 1999). His original proposal brings his belief
in universality to the forefront: “We should work toward a universal linked
information system, in which generality and portability are more important than
fancy graphics and complex extra facilities” (Berners-Lee, 1989). As a software
engineering consultant at the particle physics laboratory CERN, Berners-Lee was
bedeviled by the problem of keeping the myriad machines and information linked together,
and fell back upon his previous idea for an information-organization system for
his notes called “Enquire,” named after the book Enquire Within Upon Everything,
that would be “a universal space across which all hypertext links can travel”
(Berners-Lee, 1989). The
IETF, perhaps due to its own anarchic nature, had produced a multitude of
incompatible protocols. While each protocol could communicate between computers
over the Internet, there was no universal format shared across them. Tim
Berners-Lee had a number of key concepts (Berners-Lee, 1989):
1. Calling anything that
someone might want to communicate over the Internet a “resource.”
2. Each resource could be given
a universal resource identifier (URI) that allowed it to be identified and perhaps
accessed. The word “universal” was used “to emphasize the importance of
universality, and of the persistence of information.”
3. The idea of simplified
hypertext as the emerging human-readable format for data on the Web, so that any
document could link to any other document.
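The three principles above can be made concrete with a small sketch in Python; the URI and the document below are hypothetical illustrations, not examples from the original proposal.

```python
# A minimal sketch (in Python) of the three Web principles: a URI names a
# resource, its parts tell a client how to reach it, and hypertext links
# resources together simply by embedding their URIs.
from urllib.parse import urlsplit

# 1. & 2. Any resource can be named by a URI (hypothetical example):
uri = "http://example.org/papers/web-history.html"
parts = urlsplit(uri)
print(parts.scheme)  # the protocol used to access the resource
print(parts.netloc)  # the machine holding the resource
print(parts.path)    # which resource on that machine is meant

# 3. Hypertext: a human-readable document links to any other resource
# by embedding its URI, so any document can point at any other.
html = '<a href="http://example.org/papers/web-history.html">a paper</a>'
```

The point of the decomposition is universality: one identifier scheme covers every protocol and every machine, so a single browser can follow any link.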
These three
principles formed the foundation of the World Wide Web. In the IETF
Berners-Lee, along with many compatriots such as Larry Masinter and Roy
Fielding, spearheaded development of the HTML (HyperText Markup Language)
standard and the HTTP (HyperText Transfer Protocol) standard. However,
Berners-Lee was frustrated by the IETF, who rejected his idea that any standard
could be universal. While Berners-Lee enjoyed the “T-shirts and jeans, and at
times no footwear” atmosphere, the process in the IETF slowed to a stop in the
URI working group, “partly due to the number of endless philosophical
rat holes down which technical conversations would disappear,” such that John
Klensin, the IETF Applications Area director, was to angrily disband it
(Berners-Lee, 1999). At
the time a more hierarchical file-retrieval system known as “Gopher” was the
hegemonic way of navigating the Internet. In one of the first cases of digital
enclosure on the Internet, the University of Michigan decided to charge
corporate (but not academic and non-profit) users for the use of Gopher, and
immediately the system became a digital pariah on the Net. Berners-Lee, seeing
an opening for the World Wide Web, surrendered to the IETF and renamed URIs
“Uniform Resource Locators.” Crucially, he got CERN to release any intellectual
property rights they had to the Web, and he also managed to create running code
for his standard in the form of the first Web browser.
Despite the previous narrative, it would
be a mistake to hold Berners-Lee up on a pedestal as the creator of the Web. While
some of the original insights such as “universality” are his, the Web was
created by a genuine social movement of hackers. Berners-Lee, Fielding, and the
others served primarily as untiring activists of the Web, convincing talented
hackers to spend their time creating Web servers and Web browsers, as well as
navigating the political and social process of creating Web standards. The Web
still primarily remained in the hands of hackers until a grad student at
Illinois named Marc Andreessen created the Mosaic browser. In the words of
Terry Winograd, an AI researcher whose students went on to found Google, “I was
surprised at how totally different the feeling was. It was immediately obvious
that the introduction of graphics with text would make a big difference and
that it was a new phenomenon” (Winograd, 2003). Within a year the Web had
spread over the world, making Andreessen’s company Netscape a fortune. In what
would perhaps be another historical irony, years before the idea of a universal
political space was analyzed by Hardt and Negri as “Empire,” a universal
technology space was both articulated and created by the hackers (Hardt and Negri,
2000). The idea of “universality” runs deep in the Web: it is precisely this
technological universal space that set into motion the feedback cycles that led
to the emergence of Empire (Hardt and Negri, 2000).
A crisis in digital sovereignty: the browser wars
Within the blink of
an eye, the immaterial aristocracy of the IETF lost control of the Web in the
first crisis of digital sovereignty of the Internet and Web. Previously the
Internet was considered useful for global communications, but the Internet qua
Internet was not viewed as a place where capital could extract value. With the
development of the Web as the hegemonic form of the Internet, the Web stopped
being viewed as a playground of hackers and was viewed instead as the latest
frontier of capital, an even infinite one. It would be the backbone upon which
the “knowledge economy” of Bell (1973), which promised never-ending returns of
productivity, would be built. The Web’s rate of adoption skyrocketed, and soon
all the major corporations had a website. The
corporations sent their representatives to the IETF in an attempt to discover
who the power-brokers of the Internet were (and to stack their meetings in
order to control voting process), but instead found themselves immersed in
obscure technical conversations and mystified by the lack of a formal body of
which to seize control. So, instead of taking over the IETF, the corporations
began ignoring the IETF.
The corporations did this by violating
standards in order to gain market adoption through “new” features. The battle
for market dominance between the two largest opponents, Microsoft and the
upstart Netscape, was based on an arms-race of features supposedly for the
users of the Web. These “new features” in reality soon led to a “lock-in” of
the Web where certain sites could only be viewed by one particular corporate
browser. This began to fracture the rapidly growing Web into incompatible
corporate fiefdoms, building upon the work but destroying the vision and
sovereignty of the IETF. Furthermore, the entire idea of the Web as an open
area of communication began to be challenged, albeit unsuccessfully, by
Microsoft’s concept of “push content” and channels, which in effect attempted
to replicate the earlier hierarchical and one-way model of communication of the
television on the Internet. Seemingly fulfilling Bell’s dreams of an endless
upward economy and other various neoliberal end-of-history fantasies,
capitalist speculation on Web-based companies reached unheard-of and supposedly
never-ending heights (Bell, 1973).
Indeed, it
seemed as if the corporate industries had discovered a solution to their labour
problem that they had unsuccessfully attempted to solve earlier via artificial
intelligence. By using the Internet as the backbone of their global
communications system, the corporations could monitor and control with
ever-increasing precision their labour anywhere in the world, as opposed to
earlier where control of productive processes was directly tied to spatial boundaries.
This led to an increase in labour outsourcing and attendant neo-liberalization
as they hounded the world in search of ever cheaper labour. Yet it also led to
an ever-increasing antagonism, an antagonism that realized itself through its
use of the Internet to globally network the dispossessed, and led to the
misnamed “anti-globalization” movement that challenged and eventually helped
bring neoliberalism to its knees at Seattle and beyond. Yet as
Microsoft triumphantly battered Netscape to a shallow shell of its former self,
the frenzied over-speculation led to the “dot com” bubble, which exploded into a
crisis as many highly speculative companies failed to make a profit. The new
corporate immaterial aristocrats fell considerably in power, and soon (as in
MSNBC and AOL-TimeWarner) merged with older, more established television-age
aristocrats.
Re-establishing digital sovereignty: the World Wide Web
Consortium
Behind the scenes,
the creators of the Web were horrified by the fractures the corporate browser
wars had caused in their universal information space. In particular, Tim
Berners-Lee felt like his original dream of a universal information space had
been betrayed by corporations trying to create their own mutually incompatible
fiefdoms on the Web for profit. He correctly realized it was in the long-term
interests of both the corporations and the users of the Web to have a new form
of digital sovereignty. With the unique informal position of merit Berners-Lee
had as the “inventor of the Web” (although he admits freely and humbly the
invention of the Web was a collective endeavor), he decided to reconstitute
digital sovereignty in the form of the World Wide Web Consortium (W3C), a
non-profit dedicated to “leading the Web to its full potential by developing
protocols and guidelines that ensure long-term growth for the Web” (W3C, 2006).
Because the corporations had ignored the IETF since they found its informal
process far too slow and impenetrable, Berners-Lee moved from the absolute
democracy of the IETF to a model based on representative democracy that the
major corporations would understand and join, allowing the Web to harness the
power of the corporations while preserving its universality. With the rapid
growth of the Web, Berners-Lee believed that an absolute democracy based on
informal principles could not react quickly enough to the desires of users and
prevent corporations from fracturing universality for short-term gain. Unlike
the IETF, which would only standardize protocols that were already widely
used, the W3C would take a proactive stance to deploy standardized universal
formats before various corporations or other forces could deploy them.
Berners-Lee was made Director for life of the W3C, which was based originally
at MIT (although it now maintains headquarters of equal standing at INRIA in
France and Keio University in Japan).
In the W3C,
“membership was open to any organization: commercial, educational, or
governmental, whether for-profit or not for profit.” Unlike the IETF,
membership came at a price. It would cost fifty thousand dollars for
corporations with revenue in excess of fifty million, and five thousand dollars
for smaller corporations and non-profits. It was organized as a strict
representative democracy, with each member organization sending one member to
the Advisory Committee that oversaw the whole Web. By opening up a “vendor
neutral” space, companies who previously were “interested primarily in
advancing the technology for their own benefit” could be brought to the table. When
the formation of the W3C was announced, rather shockingly both Microsoft and
Netscape agreed to join, and as a point of pride Netscape even paid the full
fifty thousand dollar fee even though they weren’t required to. Having the two
parties most responsible for fracturing the Web at the table provided the
crucial breakthrough for the W3C. This allowed them to begin standardization of
HTML in a vendor-neutral format that would allow web pages to be viewed in any
standards-compliant browser. While the IETF still released its RFCs as ordinary
ASCII text files, the more cutting-edge W3C released their specifications as
well-formatted web-pages with color graphics. It soon became a matter of pride
for companies to join the W3C and make sure their people had positions on W3C
Working Groups. Berners-Lee’s cunning strategy to envelop the corporations
within the digital sovereignty of the W3C worked: “The competitive nature of
the group would drive the developments, and always bring everyone to the table
for the next issue. Yet members also knew that collaboration was the most
efficient way for everyone to grab a share of a rapidly growing pie”
(Berners-Lee, 1999). Rooted in a humble hacker ethos, the original universal
vision of the Web was inscribed into the W3C mission statement: to expand the
reach of the Web to “everyone, everything, everywhere” (W3C, 2006).
Despite corporate backing, a W3C without
the hackers would be doomed. It is a truism in Web circles that truly
innovative products, technologies, or standards have always been the product of
committed hackers who “do for love what others would not do for money” in the
pursuit of better technology. The open and informal process of the IETF
encouraged the digital sovereignty of hackers at the expense of the corporations,
but the W3C had to negotiate between both corporate and hacker interests for
the greater benefit of the Web. If the W3C only managed to produce second-rate
specifications that thinly disguised corporate interests at the expense of the
users of the Web and alienated hackers, the digital sovereignty that was
mythologically embodied in the persona of Berners-Lee would quickly be lost.
However, based on the funds gathered from its corporate clients, unlike the
IETF “the W3C would have a small full-time staff to help design and develop the
code when necessary” (Berners-Lee, 1999). This in turn allowed Tim Berners-Lee
to hire as staff many of the brightest and best hackers. The low fees allowed
many hackers based in corporations or who ran their own small businesses to
join. Furthermore, every working group that was deciding on a standard had a
“Invited Expert,” a position usually given to a prominent hacker. For those
hackers otherwise unable to participate, most of the list-servs were open
(including the most important procedural ones, in order to ensure that the
organization remained transparent), and each standard went through an extensive
process of open review and comments. In this manner, if anyone from a
well-known web architect to a completely unknown person objected to any
decision, this objection would be made public and the W3C would be accountable.
In this manner the W3C gained the trust of the hackers. The W3C could then both
take advantage of the measureless immaterial productivity of the hackers while
also harnessing the material prosperity and obedience of the corporations, who
needed the hackers to be the motor of innovation behind the wheels of the
digital industry.
After the standardization of HTML, the
next great triumph of the W3C was without a doubt the invention of XML
(Extensible Markup Language), a simplified version of SGML, the grandfather of
HTML, that could serve as a universal data syntax for transferring
almost any kind of data around the Web. It was originally designed to help
separate content from style, but soon became used as a general purpose
data-format, a sort of ASCII for the Web. Combining both a hacker’s instincts
for simplicity and usability with widespread corporate adoption, it soon
overtook the Web, powering everything from RSS (letting web-sites keep each
other aware of their updated content) to the merging of the myriad corporate
and government databases. This led to a whole cottage industry of XML-based
technologies, some of which never took off, while others became de facto
industry standards. Indeed, no small part of the W3C's success could be due to
marketing genius, for the Extensible Markup Language's acronym is, after all,
XML instead of EML.
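The sense in which XML is an “ASCII for the Web” can be sketched in Python: one uniform grammar means a single generic parser can read any vocabulary built on top of it. The RSS-like fragment below is an illustrative example, not a real feed format.

```python
# A minimal sketch of XML as a universal data syntax. The element names
# ("channel", "item", "title") are hypothetical, RSS-like illustrations.
import xml.etree.ElementTree as ET

document = """
<channel>
  <title>Site updates</title>
  <item><title>First post</title></item>
  <item><title>Second post</title></item>
</channel>
"""

# Because every XML vocabulary shares one grammar, the same generic
# parser works for feeds, configuration files, and database dumps alike.
root = ET.fromstring(document)
titles = [item.findtext("title") for item in root.findall("item")]
print(titles)  # ['First post', 'Second post']
```

The design trade-off is exactly the one the text describes: the syntax says nothing about meaning, but its universality lets any two parties exchange structured data without agreeing on anything beyond the grammar.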
From Heidegger to search engines
In Silicon Valley, a
distinctly different story was taking place. Terry Winograd had been a student
of Marvin Minsky, one of the founding fathers of artificial intelligence at
Dartmouth. Perhaps under the influence of subversive currents prevalent on the
West Coast, he did two unthinkable things. First, in an act of absolute refusal
that would normally destroy one’s career, he refused to take any military
funding whatsoever and helped found the Computer Professionals for Social
Responsibility to organize against the militarism rampant in academic computer
science. Second, after a series of conversations with Hubert Dreyfus, he began
reading Heidegger and was one of the first AI insiders to realize that the
transcendental programme to create a disembodied classical AI was a failure. Instead,
Winograd championed the sense of immanent “being in the world” and decided to
try to build AI based on principles of human-centred hermeneutics, as opposed
to transcendental analytic philosophy.
Meanwhile, in Chile under the socialist
government of Allende, the work of the radical biologist Maturana helped continue a
revival of cybernetics theory, discredited by artificial intelligence in the
United States. Instead of trying to formalize knowledge using logic and
representations, Maturana instead tried to describe self-sustaining networks
through his concept of “autopoiesis,” defined as follows: “an autopoietic
machine is a machine organized (defined as a unity) as a network of processes
of production (transformation and destruction) of components which through
their interactions and transformations continuously regenerate and realize the
network of processes (relations) that produced them” (Maturana and Varela,
1973). This concept of “autopoiesis” is also a remarkably apt description of
the Internet. The profusion of these ideas in Chile led Allende and a young
systems theorist Fernando Flores to bring over the social cyberneticist
Stafford Beer. The job of Beer's company, with the aid of Flores and his team,
was to deploy a real-time computer network called Cybersyn to control the
entire Chilean economy. Twenty years ahead of its time, Cybersyn allowed every
factory to be in constant contact with the others via telex in order to
co-ordinate production. Cybersyn straddled the tension between a centralized
command economy and, as Allende himself put it, a “decentralizing,
worker-participative, and anti-bureaucratic network that allowed each worker to
communicate to others” (Medina, 2005). Foreshadowing the eventual cybernetic
nature of immaterial labour, Cybersyn's ideal was to allow a worker “to
contribute both physically and mentally to the production process” (Medina,
2005). The network proved its mettle early: its protocol Cybernet was
invaluable in co-ordinating supply trucks, giving a crucial advantage to the
Allende government in its battle against the 1972 counter-revolutionary
strikes, and afterwards Flores was promoted to head of the economy. Cybersyn
was never completed due to Pinochet’s coup, and the military “found the open,
egalitarian aspects of the system unattractive” (Beckett, 2003). Cybersyn was
utterly destroyed just as the Internet in the United States was taking off.
Flores was thrown in jail by Pinochet for
three years, yet managed with the help of Amnesty International to escape to
Palo Alto. There Flores met the disillusioned Terry Winograd, whose project of
re-founding AI upon hermeneutics never truly took off. The two began to
re-envision the purpose of artificial intelligence. Instead of trying to create
an independent silicon intelligence, they wanted computer scientists to focus
on the type of design that would complement human abilities. They explored
these questions of allowing humans and computers to interact in a mutually
beneficial and hybrid (some would say “cyborg”) manner and wrote a book
“Understanding Computers and Cognition” explaining their new ideas (Winograd
and Flores, 1986). While Flores went off to become a business consultant,
Winograd continued teaching classes on Heidegger and human-computer interaction.
One of his students, Larry Page, had the idea that the Web was not only a space
of information but a social space, and that this could be the key to an
effective search engine. Indeed, one giant blind-spot of the original Web was that there
was no index and so no way to search its content. Under the influence of
Winograd, Larry Page and his partner Sergey Brin had an epiphany about the
nature of the Web: “PageRank relies on the uniquely democratic nature of the
web by using its vast link structure as an indicator of an individual page’s
value” (Page et al., 1999). They realized that the social nature of information
is inscribed in the technical apparatus of the Web itself, and so one could rank
pages by relevance by looking not only at the terms contained in a page, but at
how many other pages linked to it. Brin and Page founded a company, Google,
based on this idea and followed rules of human-centered design, such as keeping
their search engine homepage uncluttered with advertisements in order to keep
users happy and to have the page load quickly (which led to it not being valid W3C
HTML, much to the consternation of the W3C). The company took off, and within a
few short years it had virtually conquered the search engine market, making
itself into a digital sovereign force on a scope never before seen. Rumor has
it that internally Google's motto is “Google is the Web.” No other search
engine has ever achieved such hegemony over the Web.
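The intuition behind PageRank, that a page is valuable when valuable pages link to it, can be sketched as a toy power iteration over a hypothetical three-page link graph. This illustrates the published idea (Page et al., 1999), not Google's actual implementation; the damping factor and pages are illustrative assumptions.

```python
# A toy sketch of the idea behind PageRank: each page passes a share of its
# own rank to the pages it links to, and we iterate until the ranks settle.
damping = 0.85          # conventional damping factor from the PageRank paper
links = {               # hypothetical link graph: who links to whom
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start with equal ranks

for _ in range(50):     # power iteration
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

# "C" is linked to by both other pages, so it ends up ranked highest.
print(max(rank, key=rank.get))
```

The social reading is right there in the arithmetic: the ranks are computed purely from the link structure, i.e. from other people's judgments about which pages are worth pointing at, not from the text of the page itself.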
The Semantic Web and the hegemony of Google
The W3C’s latest project
is the Semantic Web, which aims to transform the current Web into a “web of meaning” whose
meaning would be accessible both to humans and machines (Berners-Lee, 2001).
Berners-Lee has always had as part of his vision that URIs should not only be
given to people, but to anything, including “dogs and cars.” (Jacobs and Walsh,
2004). The Web should extend its universal information space from web-pages
into reality itself. Berners-Lee calls this “The Semantic Web,” a term
first used by Foucault when referring to the “semantic web of resemblances” in
his “Les Mots et Les Choses” (Foucault, 1966). The Semantic Web lets one create
new and richer types of links between URIs, and use this format to give data on
the Web a “well-defined meaning” in order for it to be “understandable by
machines” (Berners-Lee, 2001). In other words, these new “links” should have a
mapping to formal logic that would allow machine agents to combine and reason
about them automatically. This goal is curiously similar to the spirit of
classical artificial intelligence, and it is no surprise that many of the key
movers on the Semantic Web are members of the artificial intelligentsia, such
as McCarthy’s cohort Pat Hayes or the ex-chief lieutenant of Cyc, R.V. Guha.
Unlike previous attempts at knowledge representation, the Semantic Web would be
a universal yet decentralized system: “The problem with knowledge
representation is that these systems are designed around a central database
... the Web, in contrast, does not try to define a whole system, just one Web
page at any one time” (Berners-Lee, 1999). Stepping down from his usual role as
neutral director, Berners-Lee and the W3C are aggressively pursuing this vision
of the Semantic Web as an ultimate universal space for all data, bound together
by formal logical languages and navigated by intelligent agents. Due to this
endeavour by the W3C, the field of artificial intelligence is experiencing a
full-scale revival.
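The kind of machine-readable “link” the Semantic Web proposes can be sketched as subject-predicate-object triples over URIs, which a simple agent can reason over. The identifiers and the transitivity rule below are hypothetical illustrations for this sketch, not actual W3C vocabularies.

```python
# A minimal sketch of Semantic Web-style data: richer "links" between URIs
# expressed as (subject, predicate, object) triples. The "ex:" identifiers
# are hypothetical stand-ins for full URIs.
triples = {
    ("ex:Edinburgh", "ex:locatedIn", "ex:Scotland"),
    ("ex:Scotland",  "ex:locatedIn", "ex:UK"),
}

# Because "locatedIn" is given a well-defined meaning (here: transitivity),
# a machine agent can combine existing links to infer new ones automatically.
def infer_transitive(facts, predicate):
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for (s, p, o) in list(inferred):
            for (s2, p2, o2) in list(inferred):
                if p == p2 == predicate and o == s2:
                    new = (s, predicate, o2)
                    if new not in inferred:
                        inferred.add(new)
                        changed = True
    return inferred

facts = infer_transitive(triples, "ex:locatedIn")
print(("ex:Edinburgh", "ex:locatedIn", "ex:UK") in facts)
```

This is the decentralized twist on classical knowledge representation that the passage describes: no central database is needed, since any party can publish triples about any URI and agents combine them after the fact.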
In stark contrast,
Google scoffs at the Semantic Web vision. Yet the ride has not always been
smooth for Google. As news leaked about its PageRank algorithm and “link farms”
began abusing it, rumor has it that Google has stopped using its original PageRank
algorithm. Instead of letting users describe their own data as the
Semantic Web does, Google needs the content of the data itself, and uses massive
machine-learning algorithms to discover the relevancy of pages to searches.
This increasingly requires using complex techniques from artificial
intelligence and computational linguistics to inspect the content of the data
itself. However, their gambit of focusing on “the content” seems to be working,
as Google manages confidently to hold on to an increasingly precarious
position as the hegemonic search engine on the Web – although their position may
have just as much to do with advertising as the quality of their search engine.
This precarious corporate position is causing Google to restrict open flows of
information from within its company to the outside world. Fighting on decidedly
corporate turf, Google closely guards as a secret the details of the successor
to the PageRank algorithm, and even mere visitors to one of the many Google
compounds now have to sign a non-disclosure form to never mention to anyone
what they have seen there.
Even more mysterious plans are afoot as
Google buys endless miles of “dark fiber” in the United States, perhaps to
shortcut the telecommunications giants that control current broadband services.
Google has proposed to blanket San Francisco in free wireless, and many
speculate that the world may be their goal, so that Google would de facto
become the entire Internet (Hanson, 2005). While Google once prided itself on
being just a search engine, it now offers its users more and more services, from
mail to calendars, and is even outright buying small companies run by hackers
like Writely in an attempt to outmaneuver Microsoft in the office productivity
sector. Despite its much-lauded hacker (and not-so-covertly anti-Microsoft)
“Don’t Be Evil” slogan and its defense of its search logs from the encroachments of the
U.S. Government, Google has co-operated with the Chinese government in
restricting access to the Web on forbidden terms like “democracy” and there is
no doubt now that its bottom line, despite all the marketing, is profit
(Ghitis, 2006).
A crisis in sovereignty for the Web is on
the horizon: Google is the exemplar of the search engine without which the Web
would be unusable, and the W3C is the alliance of industry and hackers that
keep the Web from being monopolized by a single party. Yet Google is
increasingly keeping data on search queries, e-mails, and all other aspects of
its users, and so has the ability to monitor and discover information about
people on a scale hitherto unimagined even by governments. This “data-mining”
will likely be for profit and varied forms of soft control. On the other pole
of digital sovereignty, the W3C is increasingly fanatically devoted to the
Semantic Web vision, and is banking its credibility on the success of a vision
with a remarkable resemblance to the failure of classical artificial
intelligence. Microsoft, Amazon, Yahoo and other companies are now preparing
for the second battle for the Web against the hegemony of Google, and it is
unclear if the W3C can maintain its digital sovereignty so that the Web will
survive as a truly universal information space.
Following Google’s
business model, a host of smaller “Web 2.0” companies unaffiliated with either
Google or the W3C have arisen under the slogan of “social software.” These
small companies offer some sort of free service to their users (such as free
photo or bookmark sharing). In return, the users are subjected to
advertising. These companies allow users to organize their data not through a
Semantic Web hierarchy but through a flat ontology of natural language key
words, or “tags,” and then offer these tags and data to other users of their
service as a valuable benefit. The Web 2.0 ideology is “to let your users build
your content for you” – for free. These companies can then, through contextual
advertising, make money from the immaterial labour of their users. What most of
their users don’t understand is that their data is “locked-in” to their
particular Web 2.0 service company much as web-pages were previously
“locked-in” to browsers. With access to their users’ photos, e-mail, bookmarks,
and so on, one could only imagine what these Web 2.0 companies could do in the
pursuit of further profit with personal data. The precedents are already
happening. The Web 2.0 company del.icio.us was bought by Yahoo, and Yahoo in
turn recently gave the Chinese government digital information vital to
imprisoning the journalist Shi Tao (Kerstetter, 2005). Who knows what the Chinese
government would do with Shi Tao’s bookmarks? Yet as many of the hackers join
these Web 2.0 companies in an attempt to make a living, one can see a new
non-Google and non-W3C pole of digital sovereignty in formation.
Digital civil war:
the ICANN affair and beyond
Furthermore, what is
surprising about digital sovereignty is that these immaterial aristocrats,
while often funded by traditional governmental bodies or large corporations,
tend to operate almost completely autonomously from any governmental or
corporate oversight in the style of traditional hackers. In fact, when faced with
complying with bureaucracy they consider useless or rules that they consider to
“close down” access to information, many of the immaterial aristocracy would
rather go on to found their own corporations that respected the “hacker” ethos.
This “Don’t Be Evil” mindset was the narrative spun by Google at its onset. Still,
these halcyon days of the hackers are increasingly over, with governmental and
corporate forces preparing to fully take over or destroy the Internet. This is
not alarmist, as recent reports from the White House have confirmed their
desire for “maximum control of the electromagnetic spectrum, denying,
degrading, disrupting or destroying the full spectrum of communications
equipment” – in other words, their ability to “fight the Net” (MacKay, 2006).
It is always easier
to control rather than to destroy. On a practical level, the United States
Department of Commerce owns the assignment of IP numbers to machines. This is
the crucial naming mechanism that underlies the functioning of TCP/IP. While
for years it was controlled due to government neglect by hackers, it is now
leased to the private corporation ICANN (Internet Corporation for Assigned
Names and Numbers). The control of IP addresses and domain names is the most
centralized aspect of the otherwise decentralized Internet and Web. Formerly,
this most sensitive of tasks was at first performed in a voluntary role for the
good of the Internet by the well-respected IETF participant Jon Postel. After his
death, the service was privatized to the non-profit corporation ICANN by the
United States Department of Commerce (Courtney and Kapur, 2002).
Many members of the
Web community decry ICANN as undemocratic and state that it bears no sense of
responsibility for the Internet. Worryingly to conspiracy theorists
interested in Google, ICANN also has as the chair of its board of trustees Vint
Cerf, a founding father of the Internet who now works at Google. It holds supposedly
“public” meetings in locations such as Accra, Ghana. At the meeting in Ghana,
ICANN historically decided to reduce public participation in its process.
Behind closed doors, most of the Internet community smells corruption, and this
came to light when ICANN sold the .com registry (the files containing all the
.com domain resolutions) to the for-profit Verisign corporation, which then
hiked the price of buying .com domains. Verisign even overstepped its bounds by
letting mistyped domain names go to a page full of Verisign advertisements,
breaking a crucial component of the architecture of the Web. Only under
considerable public pressure did ICANN force Verisign to stop. Berners-Lee has
always been forthcoming that he would prefer domain names (and likely IP
addresses) to be given to a legal body (Berners-Lee, 1999). His wish could come
true: with the advent of the Verisign crisis, the United Nations stepped in
with an ominous rumbling that it could take over the capabilities of ICANN
(McCullagh, 2004). Indeed, much of the world now sees the United States as
unfairly dominating ICANN and Internet infrastructure in general.
The digital
sovereignty of the W3C rests on its alliance between hackers and corporations,
but it will soon be challenged by nation-states, in particular the United
States. The United States Congress is at the time of this writing passing the
“Communications Opportunity, Promotion and Enhancement Act of 2006” backed by
the telecommunications industry giants to create a multi-tiered Internet in
order to save the obsolete industry model of cable and telephony from being
superseded by video and voice over the Internet (Grebb, 2006). Although it is
unclear what the final result will be, the act would violate one of the founding
principles of the Internet, that of “net neutrality”: that no computer on
the Internet discriminates against data from any other computer. If
net neutrality is discarded, large phone corporations like
AT&T could charge website owners for bandwidth. In a
worst-case scenario, corporate websites would download faster, and the
Internet itself could devolve into corporate fiefdoms.
With a multi-tiered Internet, independent media sites or non-corporate bloggers
would be intolerably slow, while the news of MSNBC or AOL-TimeWarner would
download almost instantly. The independent decentralized nature of the Web
would be choked to death, and this prospect has outraged everyone from Google
to Berners-Lee. With such privatization in the works, combined with ominous war
games to shut down the Internet being played by the neoconservatives, the world
has a right to be afraid of United States control over any part of key Internet
infrastructure. Yet it is in the long-term interest of capital to keep some
form of “net neutrality,” for privatizing the very infrastructure of the
Internet would destroy the framework they need to maintain productivity, as
“the privatization of the electronic `commons' has become an obstacle to
further innovation. When communication is the basis of production, then
privatization immediately hinders creativity and productivity” (Hardt and
Negri, 2004).
The United Nations
is becoming increasingly strident in its demands to hand over key Internet
infrastructure to itself, as planned through its forums such as the
Global Forum for Internet Governance and the World Summit for the Information
Society (WSIS) (McCullagh, 2005). While on the surface viewed by some as an
improvement over the current situation, the United Nations is hopelessly vague
as regards their actual plans and possibly equally as dangerous to the Net as
the United States. In one of the first mass mobilizations related to digital
sovereignty, in 2003 an alliance of Internet activists and “anti-globalization”
democracy activists gathered against the WSIS in Geneva. Their base of
operations, the Polymedia lab that exemplified horizontal and open
communications, was shut down by riot police (Brennan, 2003). The next phase in
WSIS happened in Tunisia, a country with an autocratic ban on freedom of
expression. The W3C seems sympathetic to the United Nations, with Tim
Berners-Lee and Kofi Annan sending out a joint e-mail to students encouraging
them to use the Web to “share enough understanding planet-wide to bring peace.”
Caught between corporate influences seeking to divide the Internet and plans by
inter-governmental structures to take over its infrastructure, the immaterial
aristocracy of the Web is in crisis, and it is unclear how digital sovereignty will
re-constitute itself. As the Web has proved to be a common good for much of humanity,
this could be a little-known yet truly historic crisis.
The end(s) of the Net
Let us now return to
our cosmic exile who is contemplating the silent – and not so silent – language
of the Internet. Taking stock of our inquiry, we have discovered the human
history of the Internet and the Web, and so can cast aside the mystification
surrounding the appearance of the knowledge economy. What should be clear is no
one person, government, or body can claim ownership over the Internet. In
popular mythology, Al Gore “invented” the “information super-highway,” but one
suspects he may just be copying a line from his highway-building father.
Likewise, Berners-Lee perpetually notes that while the particular combination
of some ideas was his, the actual creation of the Web was a “giant grassroots
and collective effort” (Berners-Lee, 1999). The Internet and Web were built on
the creative and fertile hybrid of government-subsidized research, the
corporate drive for profit, and the humble joy of thousands of nameless
hackers. We have traced the formation of these interests as the immaterial
aristocracy who constitute the governing bodies of the Internet and Web. We
have traced the path of this aristocracy through the absolute democracy of the
IETF, the crisis of capitalist subsumption of the Internet architecture known
as the browser wars, and the reconstitution of digital sovereignty in the form
of the representative democracy of the W3C, a body that hybridized both
corporate and hacker interests. Yet our cosmic exile would not be satisfied
with a mere historical response: the real question is not from whence and
where, but why?
What is the destiny
of the Web? Does this latest phase of the “wandering of humanity” even have a
teleological purpose (Camatte, 1975)? Or is it just, as the quintessential
hacker would believe, technology for technology’s sake? This is certain: the
Web, by creating a universal and immanent smooth plane for flows of knowledge,
creates ever more capitalist productivity. The dream of artificial intelligence
is just this notion taken to its infinite limit: the replacement of all human
labour by machines. Yet this goal of AI, the robot-worker, has been a
failure. However, this vision could mean the complete symbiosis of the worker
and capital, or in other words – of the human and the machine: the cyborg
worker as opposed to the robot worker. The cyborg worker – and the cyborg
consumer, the other side of the coin – are the dominant paradigm of the Web. The
real trick of capital in the age of globalization is simple: robots are
expensive, human labour is globally cheap. So, ironically, the Web is both
the world's largest encyclopedia and the enabler of hitherto unimagined global
sweatshops, a historic phenomenon never before possible except with the
cybernetic co-ordination of the Internet. So the Web is the great enabler of
immaterial labour, yet it also is the great co-ordinator of material labour.
The real costs of immaterial labour are still grounded in material labour, and
material labour is a transformative process: the transformation of living
labour and natural resources into capital.
So the greatest
blind spot of autonomist currents recently has been their neglect of this
concept of “natural resources,” a stark contrast to the nearly infinite
universal resources of information given by the Web. For the hidden costs of
capital’s externalities have begun their untimely return. The natural resources
of the planet have been proved by science to be finite, and faced with the
infinite expansion of capital, the infrastructure of life itself is under
threat, as witnessed by the threats of peak oil and climate change. Indeed,
climate change may ultimately destroy the human species itself, cutting down
our wandering and rendering capitalism a millennial experiment gone wrong. The
growth of the Web is itself grounded in material production: there are few
processes more resource-intensive than the creation of computer chips. Is the
purpose of the Web to further the development of unrestrained capital, which despite
the turn to immaterial labour, will lead the human species and life itself to
extinction? Are the digital sovereigns presiding over a suicide?
In one of his later
essays, Lyotard develops an ingenious argument that even our cosmic exile would
understand: “It isn’t any human desire to know or transform reality that
propels this techno-science, but cosmic circumstance” (Lyotard, 1991). The
cosmic circumstance that Lyotard is referring to is the coming “solar explosion”
of our sun, and its attendant destruction of the planet. Therefore, for true
survival of the species and the continuation of thought itself, the entire
project of humanity – including postmodern capitalism – must be to find a way
to survive this coming “pure disaster” (Lyotard, 1991). Indeed, if one buys
this hypothesis that perhaps even on some unconscious level the goals of
capital are the survival of thought beyond the solar collapse, then
“theoretically the solution is very simple: manufacture hardware capable of
‘nurturing’ software at least as complex (or replex) as the present human
brain, but in non-terrestrial conditions” (Lyotard, 1991). The gambit of
artificial intelligence seems obscene to the contemporary capitalist: Why
bother with artificially intelligent robots if humans are cheap, especially
given the destruction of Keynesianism and the advent of neoliberal
globalization? If one suspends one's disbelief in Lyotard's thesis, the gambit
of artificial intelligence makes perfect sense if we are to wish for true
immortality beyond the solar collapse, and all thought of profit is indeed
instrumental to this goal. In fact, the gambit of artificial intelligence is
the true end of capitalism.
Is this
technological eschatology necessary, or just the parting cosmic joke of
Lyotard? It seems increasingly serious, as both the prominent digital
sovereigns of the Web, Google and the W3C, seem increasingly to be turning
their energies to artificial intelligence. More and more artificial
intelligence researchers are disappearing behind the gates of Google, and as the
W3C grows increasingly evangelical in its quest for the Semantic Web, it seems the
dichotomy between the failure of artificial intelligence and the Web was a false one.
Instead, the Web may be the midwife of the artificial intelligence capable of surviving
the solar death, permanently displacing the human subject with its inhuman
power.
Lenin in Silicon Valley
Despite the
certainty of solar death in the distant future, there are more pressing matters.
For to create artificial intelligence, the world will need at least decades
more time, if not a millennium – and that assumes the prospect of genuine
artificial intelligence is philosophically sound (which Dreyfus and others
have convincingly argued it is not). For artificial intelligence, there
is no time: the ecological crisis being provoked by capitalism may end life on
this planet far sooner than any solar death. The question of the hour is “What
can be done?” The answer is straightforward. Capitalism based on infinite
growth must be ended, and a new form of economic organization more amenable to
the finite bounds of human life must take hold. Bolter argued the computer age
could bring us some respite from the infinite accumulation of capital, “Perhaps
the most revolutionary change of all is that the computer man thinks of his
world, intellectual and physical, as finite” (Bolter, 1984). Bolter was thinking
of the first resource-scarce computers. In contrast, the Web seems to provide
infinite computing power and knowledge, and so it is doubtful if Bolter's
hypothesis still holds. More important than the merely finite resources of the
technology is the realization of the infinite creative biopower that has only
started to be realized by the Web. As Hardt has noted in one of his lectures,
technological and social forms accompany and reinforce each other, so just as
the factory brought about and sustained the mass worker, the Web has brought
about and sustained the multitude.
The realization of the
multitude could be an alternative telos to the Web. This is coming
none-too-soon, for the creative powers of the multitude will have to be up to the
task of solving the increasingly profound ecological crisis. This perspective
leads us to see the Web not as the handmaiden of AI and the precursor to the
death of humanity, but as humanity’s last best hope in face of the
ever-mounting crisis. As has been pointed out by Gorz and numerous others, in
Marx’s original formulation the ecological crisis has been left out of the
analysis (Gorz, 1979). Except in a few scattered and heavily interpreted
references, so has the advent of information technologies such as the Web. Both
the advent of information technology and the looming ecological crisis are the
two crucial new variables that will have to be taken into account in any
analysis of the present, and both variables are conspicuously absent in
traditional readings of both capitalism and Marx.
Instead of staring
into the face of the inhuman, with our brief inquiry complete we can look into
the human face of the Web. The patron saint of Silicon Valley, Teilhard de
Chardin, would have proclaimed that the Web is the latest material realization
of the noosphere. “The idea,” de Chardin writes, “is that of the Earth not only
covered by myriads of grains of thought, but enclosed in a single thinking
envelope so as to form a single vast grain of thought on the sidereal scale ...
no one can deny that a network (a world network) of economic and psychic
affiliations is being woven at ever increasing speed which envelops and
constantly penetrates more deeply within each of us. With every day that passes
it becomes a little more impossible for us to act or think otherwise than
collectively” (de Chardin, 1959). In fact, in one apocryphal story, it is de
Chardin who inspired Robert Noyce to invent the integrated circuit, which
currently powers all modern computers. In another idealistic quote, Berners-Lee
reflects on how human society serves as the metaphor for the Web, since “the reason
we need universality is that’s how people operate in the real world. If the
World Wide Web is to represent and support the web of life, it has to enable us
to operate in different ways with different groups of different sizes and
scopes at different places every day” (Berners-Lee, 1999). The Web should not
be viewed as inhuman, but as part and parcel of our evolution and our society.
One critique of the
multitude is that its proponents have no concrete plan for bringing it from the
plane of the virtual to the actual. While the analysis is ground-breaking,
strategically we are left with little except utopian (or not-so-utopian)
demands for global citizenship and guaranteed basic income. We hope that our
inquiry into the history of the Web can help us recognize another way forward,
to fight for and preserve the technological infrastructure that enables the
social form of the multitude. For the creation of this infrastructure was not
written in the stars, but the collective work of committed individuals. It could
have easily turned out otherwise. One can hear the echo of Mario Tronti: “Then
perhaps we would discover that 'organizational miracles' are always happening,
and have always been happening” (Tronti, 1964). So it is not true that we have
no model for the multitude. The technical infrastructure of the Web itself is a
model for the multitude: “The Internet is ... the prime example of this
democratic network structure. An indeterminate and potentially unlimited number
of interconnected nodes communicate with no central point of control, all nodes
regardless of territorial location connect to all others through a myriad of
potential paths and relays” (Hardt and Negri, 2000).
The problem is not
that “the hardest point is the transition to organization” for the multitude
(Tronti, 1964). The problem of the hour is the struggle to keep the rhizomatic
“non-hierarchical and non-centered network structure” of the Web open,
universal, and free (Hardt and Negri, 2000). There is no Lenin in Silicon
Valley, plotting the political programme of the network revolution. The beauty
of the Web is that it makes the very idea of Lenin obsolete. Instead of
modelling themselves on oligarchs and vanguards, revolutionaries of today
should be genuine situationists, realizing that no political programme can
bring about the revolution. The situationist realizes that revolution comes only
through the creation of revolutionary situations in which the people can transform
into the multitude by realizing their own strength and creating connections of
communication and action. These situations are not just created by street carnivals and
struggles over precarious labour, these situations can be created through
technical infrastructure. In fact, the defining situation of the multitude, a
universal antagonist for Empire, is reflected in the universal information
space of the Web.
Lastly, the truth of
the matter is that the immaterial aristocrats are not the motor of the Web,
although their ideas have helped create its infrastructure. The true motor of
the Web is none other than immaterial labour itself, the collective desires of
humanity. What gives the digital sovereigns power is precisely their ability to
fulfill the desires and needs of “users” – humanity at large – and the desire
of the potential multitude for ever more universal communication and
commonality. For example, the immense desire of the most technically inept
child to hear new music, to communicate with others, to stay “in touch” with
their loved ones – and to do all of these things for free – is in and of itself
revolutionary. These desires for better technology are not the private domain
of the immaterial aristocrats, but the common desire of humanity. Only by
creating the technical means to fulfill these desires, while negotiating with
the various imperial entities, do the immaterial aristocrats maintain their
sovereignty. The sharing of music is a common good, and yet the former ruling
musical aristocracy, the recording industry, wishes to defeat this at all costs.
Those aristocrats like Jobs who can negotiate the power of the recording
industry to provide the multitudes with the ability to copy and play music
cheaply and digitally are those that will receive power. To resist the desires
of the multitudes on the Web is like trying to swim against the tide. Let us not
put the cart before the horse: it is the potential of the multitude that drives
the immaterial aristocrats, and those that acknowledge and understand this are
those who maintain their sovereignty.
The close
relationship between the Web and the multitude has tactical bearing upon the
future: at all costs the Web must preserve its universal nature in order to
further bring about the creation of the multitude. If there is truly inherent
revolutionary potential – not always realized – in the network structure of the
Web, then it must be defended against all odds from potential destruction. In
this case, while the immaterial aristocracy of the IETF and the W3C are
doubtless part of the imperial regime, both organizations are primarily
motivated by the joy of creating and maintaining the technical network, and are
heavily committed to keeping the network universal. By realizing this,
proponents of universal absolute democracy should join common cause with these
organizations to help defend them against the onslaught of corruption (ICANN),
corporate hegemony (Google), and the imperial designs of governments (WSIS, the
“Communications Opportunity, Promotion and Enhancement Act of 2006” in the
United States). Since these immaterial aristocracies operate in a democratic fashion,
even an absolutely democratic fashion in the case of the IETF, all it takes –
in the spirit of Ian Heavens – is time and commitment from revolutionaries to
help resolve the latest round of crisis in digital sovereignty. The most
straight-forward path is to begin to interact with the immaterial aristocracy,
such as by attending an IETF gathering or joining a W3C list, and also to build
alternative sources of digital sovereignty such as through exploration of
peer-to-peer software, or the creation of alternative forums such as the World
Summit for a Free Information Infrastructure. This brief inquiry into digital
sovereignty may help this most practical of affairs. We are approaching a
cross-roads, and nothing less than action is required. Nothing more than the
future itself may be at stake.
Dedication:
This paper is dedicated to Push Singh, former AI
researcher at MIT and one of the kindest and gentlest souls I have ever known.
Push wished to pursue the task of classical artificial intelligence: to create
the intelligence of an individual human being fully in silicon. Unlike previous
researchers, he began modeling “common-sense,” and hoped to build emotions and
social behavior in these simulated agents. Yet, did they ever provide him
anything except a cold and mechanical embrace? What future did he discern in
these simulations? Did he come to the understanding that computers are not
humans themselves, and their silicon intelligence is in and of itself inhuman? I
remember arguing with him in Marvin Minsky’s living room, defending my thesis
that computers are not replacements for humans, but are complementary
to humans. Did he discover humanity in his machines, or did he surrender to the
inscrutable difference that separated him from his silicon creation? We shall
never know, for he took his own life for reasons that can only haunt our
imaginations.
Bibliography:
Barbrook, Richard (in preparation). Imaginary Futures.
Beckett, Andy (2003). “Santiago Dreaming”. The Guardian, 8 May 2003. http://www.guardian.co.uk/g2/story/0,3604,1037327,00.html
Bell, Daniel (1973). The Coming of Post-Industrial Society. Basic Books: New York, USA.
Berners-Lee, Tim (1989). Information Management: A Proposal. CERN.
Berners-Lee, Tim (1999). Weaving the Web. Harper Press.
Berners-Lee, T., Hendler, J. and Lassila, O. (2001). “The Semantic Web”. Scientific American.
Brennan, Denise (2003). WSIS? We Size! PolymediaLab: access denied. http://www.indymedia.org.uk/en/2003/12/282658.html
Bolter, J. David (1984). Turing's Man: Western Culture in the Computer Age. University of North Carolina Press: Chapel Hill, USA.
Camatte, Jacques (1975). The Wandering of Humanity. Black and Red Press: Detroit, USA.
de Chardin, Pierre Teilhard (1959). The Future of Mankind. Harper & Row: New York, USA.
Courtney, Rob and Kapur, Akash (2002). A Primer on ICANN – The Internet Corporation for Assigned Names and Numbers. Center for Democracy and Technology. http://does-not-exist.net/icann-primer/ICANNprimer.htm
Dreyfus, Hubert (1972). What Computers Still Can’t Do: A Critique of Artificial Reason. MIT Press: Cambridge, USA.
Dyer-Witheford, Nick (2000). Cyber-Marx: Cycles and Circuits of Struggle in High-Technology Capitalism. University of Illinois Press.
Foucault, Michel (1966). The Order of Things: An Archaeology of the Human Sciences. Routledge: New York, USA.
Ghitis, Frida (2006). “Google's China Web”. Boston Globe, 26 January 2006.
Gorz, Andre (1979). Ecology as Politics. South End Press: Boston, USA.
Grebb, Michael (2006). “Neutral Net? Who Are You Kidding?” Wired News, 31 May 2006. http://www.freepress.net/news/15758
Hansen, Evan (2005). “Google wants 'dark fiber'”. CNET News, 17 January 2005. http://news.com.com/Google+wants+dark+fiber/2100-1034_3-5537392.html
Hardt, Michael and Negri, Antonio (2000). Empire. Harvard University Press: Cambridge, USA.
Hardt, Michael and Negri, Antonio (2004). Multitude. Penguin Press: New York, USA.
Jacobs, I. and Walsh, N. (2004). The Architecture of the Web. http://www.w3.org/TR/webarch/
Kerstetter, Jim (2005). “Group says Yahoo helped jail Chinese journalist”. CNET News, 6 September 2005. http://news.com.com/Group+says+Yahoo+helped+jail+Chinese+journalist/2100-1028_3-5851705.html
Lange, Oskar (1967). “The Computer and the Market”. In Feinstein (ed.), Socialism, Capitalism and Economic Growth. Cambridge University Press: Cambridge, UK.
Leiner, B., Cerf, V., Clark, D., Kahn, R., Kleinrock, L., Lynch, D., Postel, J., Roberts, L. and Wolff, S. (2003). A Brief History of the Internet. http://www.isoc.org/internet/history/brief.shtml
Levy, Adrian (1995). “Anarchists Use Computer Highway for Subversion”. Sunday Times.
Lewis, Paul (2001). “Herbert A. Simon, Nobel Winner for Economics, Dies at 84”. New York Times, 10 February 2001.
Lyotard, Jean-François (1991). The Inhuman: Reflections on Time. Translated by G. Bennington and R. Bowlby. Polity Press: Cambridge, UK.
Marx, Karl (1973). “Notebook VII, Fragment on Machines”. Grundrisse. Translated by Martin Nicolaus. Penguin Books.
Maturana, H. and Varela, F. (1973). Autopoiesis and Cognition: The Realization of the Living. D. Reidel Publishing: Dordrecht.
McCarthy, J., Minsky, M., Rochester, N. and Shannon, C. (1955). “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence”.
McCullagh, Declan (2004). “United Nations ponders Net's future”. CNET News, 26 March 2004. http://news.com.com/2100-1028-5179694.html
McDowell, John (1994). Mind and World. Harvard University Press: Cambridge, USA.
MacKay, Neil (2006). “America's War on the Web”. Sunday Herald, 2 April 2006. http://www.sundayherald.com/54975
Medina, Eden (2006). “Designing Freedom, Regulating a Nation: Socialist Cybernetics in Allende’s Chile”. Journal of Latin American Studies.
Negri, Antonio (1982). Archeology and Project: The Mass Worker and the Social Worker. Translated by Ed Emery.
Newell, A. and Simon, H. (1976). “Computer Science as Empirical Inquiry: Symbols and Search”. Communications of the ACM, 19(3), pp. 113-126.
Page, L., Brin, S., Motwani, R. and Winograd, T. (1999). “The PageRank Citation Ranking: Bringing Order to the Web”. Stanford University Report SIDL-WP-1999-0120.
Paxson, V., Allman, M., Dawson, S., Fenner, W., Griner, J., Heavens, I., Lahey, K., Semke, J. and Volz, B. (1999). IETF RFC 2525: “Known TCP Implementation Problems”. http://www.ietf.org/rfc/rfc2525.txt
Piperno, Franco (1996). “Technological Innovation and Sentimental Education”. In Radical Thought in Italy: A Potential Politics. Edited by Michael Hardt and Paolo Virno. University of Minnesota Press: Minneapolis, USA.
Simon, Herbert (1969). “The Architecture of Complexity”. In The Sciences of the Artificial (pp. 192-229). MIT Press: Cambridge, USA.
Smith, Brian Cantwell (1991). “The Owl and the Electric Encyclopedia”. Artificial Intelligence, 47, pp. 251-288.
Tronti, Mario (1964). “Lenin in England”. Classe Operaia (Working Class).
Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman: San Francisco, USA.
Williams, Sam (2002). Free as in Freedom: Richard Stallman’s Crusade for Free Software. O'Reilly Media. http://www.oreilly.com/openbook/freedom/
Winograd, Terry (2003). Interview. http://www.acm.org/ubiquity/interviews/t_winograd_1.html
Winograd, Terry and Flores, Fernando (1986). Understanding Computers and Cognition. Ablex Publishing Corp.: Norwood, USA.
Yurick, Sol (1985). Behold Metatron, the Recording Angel. Semiotext(e): New York, USA.
E-mail: [email protected]