Gabriel Pickard on Mon, 22 Mar 2004 10:49:48 +0100 (CET)



<nettime> posthumous society


Hi nettimers!
This is an article with a strong futurological tint to it. I
describe some techno-social developments that i could imagine
happening, in the direction of a society with intelligent and
self-reproducing machines. This ends kind of manifesto-style (i
don't seem able to avoid that form) with a call to get involved
in creating "intelligence tools" (or call it what you like).
And of course i had to fiddle in my own little theories and
paradigms explaining how the transition to posthuman society
might work and how we might form a better transhuman society..
Please send feedback and have a good read,
cheers, gabriel


Posthumous society

On the implications of a transition via transhuman- to
posthuman society.
By Gabriel Pickard


First of all, i would like to make a few distinctions between
my idea of posthumanity and other possible readings of that
term; This article is not about "non-humanity", in any of its
many facets. It is not about the total annihilation of the
human race, because: a) there are way too many of us around for
that to happen all too spontaneously, b) none of us will be
around afterwards, which makes it un-interesting, and c) it's a
sure thing anyway, so why waste breath. I try to avoid getting
caught up in male creator-fantasies (presumably something like
trying to compensate for our inability to bear children).
And i would also like to ignore the attempts of certain
nutcases who are getting hyped up about augmenting human
capabilities via the hardware/wetware (implantation & hookup)
interface. Such attempts will probably become reality, be
nothing to write home about, and way too expensive to affect
many. Our lives may very well be aided, assisted, caged &
directed by machines, even more than today. But the (magical)
tendency to go cyborg seems to me like overworked & reactionary
ideology of humanity, the crumbling self-esteem "of man"
seeking to be upgraded by the wonderful powers of the machine.
I deem these powers essentially irrelevant. Let us not get lost
in this fascination and fear in the face of the loss of human
primacy. Why go through such a hassle to improve a bag of
bones? (Now don't even start talking about those simpletons who
dream of eternal "life" in digitality...)[1]
The only other serious transhuman theory as i see it would be
one that assumes contact with extraterrestrial intelligence.
However, as far as i can tell - and i do know my physics -
interstellar exchange is not meant for itsy-bitsy life-spans
such as we humans are confined to. It may be that other longer-
lasting life forms will evolve out of this society to colonize
outer space, however space is not a place for human wetware.

The question is: will we witness a (non-eradicating) transition
away from a society based on - and dominated by - humanity,
sometime before the extinction of humankind altogether actually
comes to pass? In other words, if humans still will be around
and there will be real implications for the lives of
multitudes, how would pervasive, self-replicating and
intelligent technologies affect society?
Posthumanity evolves out of the transhuman stage. A transhuman
society not only consists of relations between human
individuals, and/or not only human individuals partake in it.
There are other "actors" embedded in the social fabric - and
the nature of the mesh of relations itself may change.
As such, a farm can be seen as a transhuman society; human and
animal individuals partake in it. However, in most farms, the
question of dominance is clearly regulated: Masters in the
house, serfs in the barn. Accordingly, Orwell's "Animal Farm"
[2] could be considered a posthuman society. Here mastery has
passed on to the animals (with all unsavory consequences). To
sum it up, the question: "Who (in some way) dominates society?"
is a fundamental criterion for my concept of posthuman society.
In a posthuman setting humans may go on participating (in the
example "Animal Farm" they of course do not - but i mean in
general), but they do not have the power to define the
structure of societies' relations; they do not (and possibly
cannot) organize alone any more.
The metaphor of the farm however too easily leads us to the
idea of a "society with machines" in which the machines either
replace the masters or the slaves. Just as the pigs turn into
humans in "Animal Farm", we seemingly can't help imagining
intelligent machines going homomorphic - taking on human
characteristics.
This mindset goes into the direction of the talk of having
personal relationships with robots (replacing human relations)
- and that computers will tell us what to do when we wake up in
the morning. I don't want to say that such scenarios may not
come to pass, however they lead us to simplify-out what i would
consider one of the most interesting aspects of post-
/transhumanity: machines (and other developments, possibly in
genetic engineering [3]) need not be subject to our concept of
individuality...

The "in-dividual" subject has already been deconstructed by
others. But essentiality, individuality of life's perception,
seemingly leads human beings to the attractor of subjectivity.
In contrast, machines, products of engineering, as fruits of
our rational (dividual - as i use it as opposite to in-
dividual) thought will not naturally tend to individuality. [4]
So to adapt to human nature, dealing with essentiality and
individuality might very well turn out to be the greatest
challenges in building so-called artificial intelligence out of
digital computers. To communicate intelligently, machines need
more indefinite binding to individuality, to meaning, to the
world of emotion. But a subject, as mediator between an
individual consciousness and a world of information, need not
be re-constructed artificially. What good would the full-
fledged reconstruction of the cognitive powers (which i believe
can only be done including the full sensual, motoric and
emotional capacities of a body) of a (human) social subject do?
Humans we already have (superhumans we do not desperately need)
- and in fact, i don't think we can be reconstructed with the
given means anyway, at least not via engineering methods. [5]
Machines are better at playing chess, they can weld, calculate
and manipulate - do many things that we once thought required
intelligence. Nevertheless we would not call these machines
intelligent, we rather see them as an extension of our human
intellect. I think the transition towards intelligent machines
will continue in such a gradual manner, via an incremental
development of further sophisticated aids. The network would
play the central role in this game of becoming intelligent. As
such, no monolithic subject need evolve.
There are basically two paradigms for technology of
intelligence; The tool-form and the life-form. Both are always
intertwined and develop out of one another. So accordingly, the
life-form, post-artificial intelligence, might evolve out of
the tool-form as an "intelligence tool". To return to our
transhuman farm, we might not want to first develop full-
fledged animals, but instead start out just with sowing (non-
subjective) plant-seeds. Vegetarianism is better anyway...

Let us go back to my definition of post-/transhuman society; I
didn't only speak of individuals as constituting society, i
also described society as having the consistency of relations.
We humans probably will - and should - keep on interfacing and
relating in the ways known and natural to us. We should (i
propose) stop adapting our ways to the demands of the clumsy
electromechanical, & formally invocational [6] interface. It's
simply a matter of overall mental, social & physical health
that i'm concerned with, as well as love of flesh.
However, non-human participants (i do not call them social
subjects) might find very different forms of relating. Among
themselves. I believe that their relations still would be about
collaboration and communication. [7] Yet the "form of
interface" - and with it - of the topics and materials, as well
as the participants themselves, might be quite different.

So this would be my scenario for the time ahead: Machines in a
transhuman society would feature highly intermeshed
communicative powers, as well as certain productive and
manipulative capabilities. - But lacking full emotionality of
desire, (according to the principle of the agent) they would
remain something of a highly sophisticated extended arm of
articulate human will. (In this respect, as far as i support
this approach, i can also be called augmentist).
This scenario can at the least be called problematic.
Introducing new agents between human relations does not
eradicate the power-structures that we have before us at this
time. On the contrary, it may very well enforce them. It might
be that mainly the already long and powerful arms would be
extended by technology. Mechanic/electronic agents of
surveillance and control are becoming pervasive everywhere they
seem profitable. Combined with greater capabilities for unified
interrelation of information, complex inferences (and so on..
insert your techno-flowery expression here) that are in
development, technological control of other humans by the rich
& powerful can become a whole lot less tedious - and (always
considering the limitations of resources) very widespread.
All of this is well known. I would not like to see an inhuman
intelligent search-and-destroy robot armada, unleashed from the
problems of morality and morale that contemporary organized
bodies of violence face, shooting well-aimed holes into our
social fabric of freedom. Torture and terror are already
rationalized more than far enough. And even if it all remains
speculative (especially concerning the scale and consequence)
at the moment, the problem is that it might happen.

But how would such a highly-sophisticated
information/robot/genemonster-somethingsomething work? Well, i
have an idea, but i don't think it will work. ;-} In fact, i
don't believe that such technology can be "linearly
engineered", masterminded to work at all, without implementing
some crucial flaws in for a bargain. (This is of course the
lesson to be learnt from the failure of "conventional AI" to
model the complexity of cognition.) So as long as digital
machines remain tools (as such designed by humans, designated
to be useful), they can at best become an integral part of
symbiotic collective intelligence with humans. And even that
sounds hard to believe and truly like a lot of work.
The more the engineered part of a machine is replaced by
individual use and interaction, the less tool-like and more
life-like it becomes. Therefore highly advanced, intelligent
and interactive (in the sense of actually acting) technology
will have to evolve - as simplistic as it may sound - in some
way autonomously. Personally, i call this form of development
autogenic processing (AP). [8]
Technical conception, designing and engineering follow the
ideology of dividuality, that everything can be broken down
into a complexity of rationalizable elements - in contrast,
natural evolution, coming from natural structuralities (and not
just adapted to them), is highly individual. The structures
that we produce artificially are only as adapted (and as such
efficient and sustainable) as our concepts for them.
So what we are trying to do, from the vantage point of AP-
designing, is to return to individuality, through volume &
complexity in materiality and time, units and iteration.
However, the result might be an altered individuality, an
alternate way of structuring. Machines may evolve a non-
subjective (whatever that means) form of adapting perception to
our world. When i speak of natural structuralities as a rough
concept for the complex organization of our world, then i also
suspect that post-artificial structures could constitute a new
nature.

"Process" here is defined as activity according to given
information - and acting on (manipulating, dealing with)
information. [9] If this activity leads to the construction of
new processes which in turn may construct others, and so forth,
i call it autogenic.
A processing is the whole bundle of active processes,
algorithms (the "given" information governing execution) and
the "raw material" of the environment, the data to be
processed, the food to be eaten, the warmth to be enjoyed. As
such, processing is but a rationalization (in terms of
discretifying, identifying) of activity - and ultimately
consciousness. "Information" in this context describes all
things assumedly discrete, as i define information as the
discretion of the object. It is not limited to abstract
information represented in a medium, but certainly includes
that form.

The construction of new processes out of the forerunning ones means
reproduction. Processes must be produced from the states given
in the environment, data in a medium, material things in
actuality. To be maintainable, such production must form a
reproductive cycle via different states in the running
processes; If we have a processing with terminating processes,
the running processes must produce states to reconstruct
processes, for the processing to continue. If among the running
processes there is an underproduction of states necessary for
the construction of new processes, the processing dwindles, in
case of an overproduction it can grow exponentially. All these
developments may be defined by the algorithm, however in a
well-designed, sustainable ("healthy") processing, they are
heavily dependent on the environment.
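To make this talk of under- and overproduction a little more
concrete, here is a minimal toy sketch in Python - entirely my
own illustration, not part of any existing system - of a
processing whose processes consume raw material from a finite
environment and leave behind states from which new processes are
reconstructed; the names (step, offspring_rate) are assumptions
of the example only.

    import random

    def step(population, environment, offspring_rate):
        """One generation of a toy autogenic processing.

        Each running process needs one unit of raw material from the
        environment; if it finds some, it terminates and leaves behind
        states from which on average `offspring_rate` new processes are
        reconstructed. Processes that find nothing simply vanish."""
        new_population = 0
        for _ in range(population):
            if environment > 0:
                environment -= 1
                new_population += int(offspring_rate)
                # probabilistic remainder, so fractional rates work too
                if random.random() < offspring_rate - int(offspring_rate):
                    new_population += 1
        return new_population, environment

    # offspring_rate < 1: underproduction, the processing dwindles;
    # offspring_rate > 1: overproduction, exponential growth until the
    # finite environment is exhausted; = 1: a bare reproductive cycle.
    population, environment = 4, 1000
    for generation in range(10):
        population, environment = step(population, environment, offspring_rate=1.5)
        print(generation, population, environment)

The point of the toy is only that one and the same algorithm can
dwindle or explode depending on the environment it finds itself in.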
All of this is basically a reformulation of evolutionary
theory, however using mainly information-theoretical vocabulary
to draw parallels between different levels at which
processing/evolution materializes. I do not take on a pre-
defined concept of sexuality for reproduction. Accordingly, i
use the term "evolution" in a more classical sense, as the
development of new forms of processing from the already given.
I would propose that mutation in evolution need not be error
(just as the clear-cut lines between the species need not be),
but rather that there is always a high level of individuality
in the structuring essence of biological existence (the natural
structuralities of cells, organs, nerves etc.), which allows
for an effortless mutation, selection and - evolution.

If we view the human animal as a processing, as activity in an
environment according to given information represented in the
nervous system, we can demonstrate an important capability of
some autogenic processes: Animals can learn. Processing itself
can alter the algorithms that govern it - an auto-evolving
autogenic processing, or, to a Computer Scientist: self-
modifying code. Computer Scientists might go on to say that
this can wreak havoc - and i readily agree... But might it be a
plausible solution to very carefully design auto-evolving
processes with well-governed instances to govern self-
modification of the algorithms? Careful or not, explicitly
intended self-modification is a problem. Mainly the problem of
the environment.
It is however important to note that the natural examples of
autogenic processing are not explicitly self-modifying (as well
as they do not have explicitly represented algorithms). Only
very seldom will you see an animal intellectually masturbating,
teaching itself something out of virtually nothing (there are
however some freaky homo sapiens specimens..); Auto-evolution is
normally bound to some kind of individual nexus between
environment and algorithm. Biology does it via chromosome-
errors and mating. Neuronal networks via electric attraction
and "Pr�gung" (formation, learning by exposure & repetition).
However we humans are impatient. Self-modifying algorithms
theoretically unleash great power to adapt and expand
exponentially fast (theoretically!). Well-designed, they might
be an option, if the self-modification is kept close to the
environment. They might be an option that will happen. As a
matter of strategy, we will have to follow different threads,
viewpoints, paradigms and ideas.
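As a thought experiment only - a minimal sketch of my own, not a
real design - "well-governed self-modification" kept close to the
environment could look like this: the algorithm is never free to
rewrite itself, it may only be adjusted by a governing instance
that derives every change from feedback out of the environment;
all names (make_algorithm, governed_self_modification) are
hypothetical.

    import random

    def make_algorithm(threshold):
        """The "algorithm": a trivial rule governing the process' activity
        (here: deciding whether to act on a piece of environmental data)."""
        def algorithm(signal):
            return signal > threshold
        algorithm.threshold = threshold
        return algorithm

    def governed_self_modification(algorithm, environment):
        """The governing instance: self-modification is only permitted as a
        small adjustment derived from the environment (the nexus between
        environment and algorithm), never as an arbitrary rewrite."""
        feedback = sum(environment) / len(environment)
        new_threshold = 0.9 * algorithm.threshold + 0.1 * feedback
        return make_algorithm(new_threshold)

    algorithm = make_algorithm(threshold=0.9)
    for _ in range(50):
        environment = [random.random() for _ in range(10)]   # fresh raw material
        acted_on = [s for s in environment if algorithm(s)]  # the process' activity
        algorithm = governed_self_modification(algorithm, environment)
    print(round(algorithm.threshold, 3))  # drifts towards what the environment offers

Kept this close to the environment, the "self-modification" is
really just learning by exposure - much like the Prägung mentioned
above - rather than an algorithm bootstrapping itself out of
nothing.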

In the case of trans- & posthuman society we are talking of
artificially initiated processings. They are bootstrapped via
some other technology that is already in place and available to
the designer; Bootstrapping and initiating are the big problems
in this endeavor, of course. The media containing the
processing's reproduction must be furnished appropriately,
algorithms need to be well-adapted, a material or immaterial
environment is needed, for the process to act upon, plus
(possibly) interfaces to further feed the environment (if it is
a processing is mediated in a system). Here the algorithm, the
information informing, or "ruling", the process' activity
(execution), would typically be designed so as to help
fulfill some form of human desire.
Desire in itself however is just an emotion among many, one
that is kindled by and clings to objects of reality, tends to
objectify and finally fetishize reality. I reject the approach
according to which desire's rationalization - the individual's
will - is treated as exchange currency for all other emotions.
People do not do things because they want to do them, they act
because they feel like it. All emotion propels action and
constitutes meaning. The question of desire, meaning and their
mediation remains very important.
If we leave that question behind for now, we come to the
scenario of a transhuman society; A society of humans in
growing symbiotic intellectual/emotional and
material/productive relations to machinistic extensions of
their subjectivity. Ideally, these autogenously developing
(which in this case means hermeneutic circle, reproductive
cycle) agents would have the role of extending into
collectivity, into decentralization of power, into
transcendence of cognitive processing... Problems will be dealt
with as they arise. ;-}

In a transhuman society, humans are in some way still an
integral part of reproduction. They feed meaning, manual labor,
data etc. Consequently one criterion for a posthuman society
might be fulfilled when the reproductive cycle were closed on
the level of materiality; Machines reproducing machines without
any human intervention, be it in the material or immaterial
domain. Evolution of (post-)artificial life would no longer
depend on humans - and most probably start deconsidering the
human condition. Leaving behind human meaning, finding its own
forms. We may not be able to hang onto power, collectively as a
race, indefinitely. But we should not prematurely hand over
power to individuals to use technological progress as a tool
against other humans.
As soon as people realize how they can wreak havoc with
autogenic processing, some will probably start designing
processes to wreak havoc. Even if we do not know for sure if
"it can be done", it nevertheless would seem wise to me to
consider the possibility, follow and accompany developments of
technological innovation that go into the direction of
autonomous self-reproduction and redevelopment closely . In
case such nefarious processes arise, the creation of an "immune
system" should seem imperative. Such an immune system or
rather, an immune processing, would be a project and processing
that constructs process units that (basically and abstractly)
maintain a certain form of structuring in the world that
surrounds them. The initiation of such processing of course
will be developed in conjunction with study of natural
immune/ecology-maintaining processes.
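To give the idea of an "immune processing" at least one concrete
- and entirely hypothetical - shape: a toy sketch in which guard
processes observe every other processing's population over two
generations and push back whatever replicates faster than the
surrounding structure tolerates; the function name and growth
limit are my own assumptions.

    def immune_step(observed, growth_limit=2.0):
        """Toy immune processing: maintain a certain form of structuring by
        culling any processing whose growth exceeds the tolerated limit.

        `observed` maps a processing's name to its population over two
        successive generations: (before, after)."""
        maintained = {}
        for name, (before, after) in observed.items():
            if after > before * growth_limit:
                maintained[name] = before   # cull back to the prior, tolerated size
            else:
                maintained[name] = after    # leave well-behaved processings alone
        return maintained

    observed = {"benign processing": (100, 130), "rogue replicator": (100, 900)}
    print(immune_step(observed))
    # {'benign processing': 130, 'rogue replicator': 100}

Of course a real immune processing would itself have to be
autogenic and, as said, be developed in conjunction with the study
of natural immune and ecology-maintaining processes; the sketch
only illustrates the "maintaining a form of structuring" part.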

So let us suppose "computers could program themselves" and
let's go on to suppose they would program themselves to be able
to perform many intellectual tasks that only humans could do
before. In such a case, there would be real implications for
the class of so-called "knowledge workers". This does not mean
that humans will not find work (manual labor also didn't
disappear with the advent of industrial robotics), there will
always be hinges in social structures for humans to move. We
will always have something to do - it depends on the
conditions. However another bastion of human primacy will be
taken. Human labor will largely be made redundant. Perhaps
there will be a day when no task performed by a human could not
also be done by a machine. Possibly not even our emotional
competence will remain unsimulated.
Then what will humans do? The answer to this question depends
on the manner in which humans will organize their society's
cooperation. If the distribution of power (in its material,
monetary and immaterial form) remains as violently unjust as it
is, we might get into big problems. Provided a situation in
which many forms of material and immaterial labor are beginning
to be made redundant (but the simulation of subjectivity not
yet fully accomplished), humans might turn the world into a big
power-play, exercising the remaining capability to dominate and
fight one another... Possibly global wars over the last
non-virtual things - natural resources. (The factor of
ecology and resources should never be disregarded when speaking
of the future.)

Even if these were events that lie far in the future, those
would be wars humanity can only lose. We would have the choice
of mutual annihilation or serfdom. What difference would it make
if we were ground up in inhuman machinery or lorded over by
robot-kings, talent-classes, other humans? Not much. It may
happen sequentially, or all at once.
The end of humanity is a fact. The further existence of the
human race can not be taken as a goal in itself. Everything
else would be a racist ideology. Only the individual human
condition is a concern among us humans that is worth fighting
for.
As such, the mode of organization of cooperation in
collaboration and communication, between us humans and beyond,
is of imperative importance. Self-organization in network-
societies that reject hierarchies as their principle and truly
embrace sustainability might be a better option for dealing
with power. Power already takes forms that are not yet overcome,
and it might already be searching for new media. Therefore,
conscious formatting and consistent organization of the
activists' lives would give depth to the project of integrating
oneself into development.
Other tool-architectures may help us dream up new social
structures and vice versa. And if my hypothesis that post-
artificial structures can potentially constitute new nature(s)
turns out to be right, the context in- and out of which these
structures are formed is of defining importance and
responsibility for the future.

"Errichten oder Vernichten" - A German advertisement for Lego
presented German boys with the question "Construction or
Destruction". It is age-old wisdom that it's easier to unmake
something than to build it. We nevertheless see progress in all
forms of structuring the world we live in - why? Because they
have their own dynamics. Structure can be found in endlessly
different dimensions - the universe is not caught up in a
dichotomy between entropy and order - it is absolutely both -
at once. That does not however mean that one modality of
structuring is meant to prevail indefinitely. If two different
modalities meet, what would seem like construction to one might
be destruction to another.
Our biological evolution, based on DNA-genes, is the standard
example of autogenic processing. The information represented in
DNA governs its own replication. This is also the standard
example of a processing that is not contained in a system (at
least not by definition - earth can not be called a discrete
system) but is nevertheless mediated information - with
structures generally based on the modality of organic
chemistry.
Virus and bacterium are two prime examples of different
modalities clashing as well as coexisting... At least they
share the basic structure of DNA. The situation could get a
whole lot more tricky when utterly differently based modalities
meet. One of the areas most threatened by artificially
initiated autogenic processes certainly would be biology;
Nanorobotics (which i deem still a lot further off than some
prophets might hope) and other "more material processes" could
turn out to be a great environmental hazard to wetware [10], as
our whole biological ecology is pretty much accustomed to DNA
being the only carrier of reproductive information.
The first autogenic processings are bound to be a whole lot
more crude than their natural examples - they might at first do
nothing, and then destroy more than they can (usefully!.. ?)
construct. What would the initiation of an "immune system" to
meet those kinds of dangers look like?.. And don't the problems
start between us humans, isn't the social structure deeply
involved?

Since this is not supposed to end as a lament that folks
actually should be nicer to one another, let's have a look at
some of the tasks at hand...
When working on intelligent machines, we can develop the tool-
form or the life-form. I propose using the methods most
appropriate to reaching the life-form (autogenic processing) in
projects that are rather aimed at the tool-form (non-subjective
network agents). Essentially, i think there is no real
alternative. It is our task at hand to ensure a sound
transhuman condition � posthumanity remains a dream. It's up to
you to decide if it's a nightmare.
One important task that i see and do not fulfill for the time
being would be to colonize robotics. Techno-scientists in the
realm of mechatronics and robotics already have a strongly
raised awareness for the potential of autogenic processing.
This tendency is born in the ethos of space-enthusiasm,
technological perfection in machinistic recreation. These are
exciting fields that may become a great motor of intellect for
growing generations. Like nano- & gene technology, this is a
field that techno-scientistic society is setting its hopes (and
funds) on. Unlike the other two however, robotics might have
the chance of becoming widely applicable, without
overwhelming costs, in the not-too-far future.

I pose the old question anew: Does meddling with technological
progress make sense? I cannot quite shake off the fear of the
spirit i once called, coming to haunt me. If we develop
technology, where will it lead us to? Won't well-meaning
innovation in the end be instrumentalized - and what is well-
meaning about innovation anyway?
But the technology question might also be turned into: Is it
worth the effort? Let us not forget that, as we know of the
social dimension of technology, we should also take into
account its dynamics of power. Progress needs resources,
innovation is a race - who has more fuel to define the
direction the branches of development will take? The question
also could be: will integration necessarily just mean running
along, or can it mean effective (directional) spearheading and
subversion?
I'm pretty sure that fundamentally most of my audience will
agree that "There Is No Alternative" (TINA). Most of them would
of course have felt qualms if they had found themselves in
research that turned out to aid development like that of an
atomic bomb... But in everyday life the nineties have seen a
very widespread acceptance of - and involvement with new
technology throughout the social movements - as soon as they
could afford to. We run along with technological progress. And
running along means running along. Even the free-software
movement has been concerned more with re-writing proprietary
architectures en libre, than with developing new ones.
"Don't hate the media � be the media!", or: "Don't hate the
machine, be the machine" [11] � TINA turned empowerment. Is it
really that way? Neither technology nor humanity should become
ends in themselves. The modern human has become accustomed to a
certain level of high-tech, of doing just because it can be
done. This is not supposed to be moralized, but perceived. The
future may also hold room for imaginative and creative anti-
technology, in some way or another.

I believe these are open questions that can only be interpreted
critically depending on the situation. The situation we are in
now is certainly strongly technological. As far as we continue
our contributions to the "project of technological progress"
(whatever that may mean) we need more feasibility studies -
more radical experiments...
I call on you to get involved in developing projects that aid
intelligence in new, connective and interactive forms. I do not
primarily call for the simulation of human intellect, however i
do call for "critical coding", for technological development
that breaks with forerunning paradigms if necessary.
My personal take is that a close look at phenomenology might
help us in finding alternatives to brute-force attacks on
intelligence (e.g. neural networks). Thinking about new,
flexible forms of representation, as well as enacting meaning
might be the outcome. Developments such as the Semantic Web
working group [12] - and generally the growing popularity of
mapping analysis, seem to be interesting approaches, steps
forward on the level of representation. But to transcend that
level, we need a more profound and critical theory to apply.
[13]

We must integrate ourselves into progress without becoming
progressives; Progress as a paradigm eternally discounts the
present to the advantage of the future. Let steps in a path
supplant progression, becoming as it is. As such, it doesn't
matter if posthumanity is actually reached. Whatever we
predict, other things will happen, and if anything we predict
actually works out, it'll probably be much slower than we
thought.

NOTES

[1] For more meat such as these (silicon meat that is), just do
a search for "transhuman", "Kurzweil", "posthuman".

[2] George Orwell, Animal Farm, Secker & Warburg, Great Britain
1945.

[3] I do not however understand how (maybe some) genetic
engineers and enthusiasts can believe that tinkering with a
highly typical and individual system, based on rather crude,
schematic and dividual models, will lead to anything very
rewarding. Instead i would keep to the carpet of
electromechanics for the time being - they probably will be
used to bootstrap advanced biotech further down the line.
(Biotechnology cannot of course be rolled up so easily in a few
sentences - there are also other aspects to consider..)

[4] For more on this dividuality - individuality thing, and a
definition of information that reaches into the material world,
look out for some information-theory coverage coming up. For
preliminary fragments, see also: Gabriel Pickard, Flexible
Darstellung komplexer Sachverhalte in nichthierarchischen
Informationsstrukturen,
http://werg.demokratica.de/archives/00000051.html (German
only); See [9] also.

[5] In this manner i avoid the question of artificial
consciousness for this topic, which will easily push us into a
conservative ideology of subjectivity. In a phenomenological
world-view we need not worry about the others' consciousness
and get metaphysically agitated.

[6] See Chris Chesher, Why the Digital Computer is Dead,
http://www.ctheory.net/text_file.asp?pick=334

[7] I am pushing a certain terminology to describe cooperation;
As consisting of collaboration (the actual work agreed on and
done together) and communication (the activity binding the
working group).

[8] I'm not sure if the term autogenic is correct in this
context; would "autogenous" fit better?

[9] This theory will be expounded later; It constitutes an
advance over the simplistic model for the
will/desire/execution/meaning nexus that i propose in: Gabriel
Pickard, Beyond the Computer, in sarai-Reader 03 "Shaping
Technologies", Delhi/Amsterdam 2003

[10] See http://www.etcgroup.org for a contemporary critique of
the hazards of nanotech.

[11] This is a bit misquoted from: Matteo Pasquinelli, Radical
machines against the techno-empire. From utopia to network,
http://www.rekombinant.org/article.php?sid=2264 ; This valuable
essay does not call for "running along" at all.

[12] See http://www.w3.org/2001/sw/

[13] At the moment i'm planning some software projects that will
develop along the lines of information representation, smart
interfacing and manipulating, inferencing etc.. and merge into
social tools of cooperation. I'll try to document this process
of planning with a series of various texts aimed at explaining
the actual workings of the planned projects, but also at putting
the concerted effort into a context. A slight problem remains:
i do not pretend to know yet, clearly, how to concert that
effort. There are still enough problems out there to take on.
Feel like joining? :-}


#  distributed via <nettime>: no commercial use without permission
#  <nettime> is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [email protected] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [email protected]