panix on Sun, 30 Jan 2000 09:25:43 +0100 (CET)



<nettime> Feed: Lawrence Lessig


Steven Johnson interviews Lawrence Lessig


FEED: Let's talk a little about your background and about the connection
-- both biographical and conceptual -- between constitutional law and
computer code.

LESSIG: I started as an academic in constitutional theory. Although most
constitutionalists in America think only about the American Constitution,
I was at the University of Chicago where we had the Center for the Study
of Constitutionalism in Eastern Europe. So most of my work in
constitutional theory was spent learning about the Georgian Constitution,
the Russian Constitution, the Ukrainian Constitution -- comparative
constitutional thought.

Very quickly you realize that you can't think about constitutions just as
legal texts; they're not things that lawyers write. Constitutions are
really the set of practices, understandings, and architectures of
different cultures that constitute what we ordinarily think of as a
constitution. This attitude, this way of thinking about constitutionalism,
then translated quite directly, when I read anything about cyberspace,
into identifying a set of practices associated with the code of cyberspace
as an equivalent constitution in cyberspace. Code was not something that
lawyers wrote, but it was certainly something that constrained and made
possible behaviors which are fundamental, in a sense, to the culture of
cyberspace.

FEED: And how does that idea resonate among fellow constitutional law
scholars? 

LESSIG: With modern scholars it doesn't resonate very well. But if I could
talk to scholars from the 19th century, I think they would get it. In the
19th century, the idea of, say, the English Constitution was much more
salient in this respect. The constitution was much less associated with
documents and much more associated with deep traditions that are
relatively hard to change and that lie beneath the ordinary battles of
political society. I think scholars back then would have better grasped
the idea.  Constitutionalists today like to think strictly about the First
Amendment and the right to bear arms and things like that. That seems to
me a very thin way to think about what a constitution is.

FEED: All this gets us to the fundamental argument in your new book, which
is the parallel between computer code and the force of law. How did that
idea develop?

LESSIG: Actually, I don't remember exactly how it developed. Independently
of cyberspace, I had been thinking about the different ways in which
behavior is regulated. I'd done some work on how norms regulate behavior,
as distinct from how law regulates behavior and how the market regulates
behavior. And then there was a fourth part: outside of cyberspace, I was
thinking about architecture and how it regulates. Then when I started
thinking about cyberspace, I had to find a translation for these parts --
and architecture is, in cyberspace, the code. Code is the most significant
regulator in cyberspace because it's the most plastic and pervasive tool;
you can easily change it in lots of important ways.

People often treat the features of cyberspace as if they were just given
to us by God. What becomes obvious, when you think about it, is that these
features have been crafted, even more so than, say, New York City has been
crafted. They have been crafted by people with a certain purpose behind
how they structured one place versus another. That fact about cyberspace
-- that there's a space that constrains and regulates behavior because of
the choices of someone who built it -- parallels quite directly how we
think about law.

FEED: Some people would say, "No. It's precisely because it's plastic that
we don't need to worry about it as much, and it makes it less of a
controlling force because it can be shifted and reinvented so easily."

LESSIG: Yes, but there's a difference between individually plastic and
collectively plastic. At any one moment an individual is not capable of
redefining this space, but collectively it can be redefined, right? It's
the same thing with norms. You could have a norm against smoking, and you
could try to resist that norm and create the opposite norm, but no
individual acting on his or her own could really do that. But
collectively, you could redefine the norms.

Yet individuals in the context of cyberspace can be less empowered --
depending on the way our code is structured -- to resist the code.
Resistance could be, for all practical purposes, impossible, and
impossible in a way that's more severe than it is in the case of norms.
If you violate norms, you might get punished by friends, neighbors, etc.,
but civil disobedience is still possible; that possibility is, in a
sense, architected into norms. It's not a possibility necessarily
architected into code.

FEED: I was thinking about this word "architecture." Another contrarian
argument against the "code equals law" argument, or the "architecture of
cyberspace equals law" argument, is that the architecture of cyberspace
actually resembles real architecture out there -- the buildings, the
physical layout of a city. There might be an "invisible hand" argument
that what really structures your experience is not like the law -- not
the equivalent of a regulatory force from above -- but rather an
open-ended, sprawling, self-organizing process that structures experience
the way the streets of a city structure experience: forces that aren't
necessarily so severe and oppressive that they need to be held in check.

LESSIG: Well, I agree that the direct parallel to code is the way a city
is laid out. But you shouldn't take from that fact that the way the city
is laid out is benign. My favorite example is that the way Robert Moses
laid out New York City further segregated it on the basis of race. A city
simply is laid out the way it is, but there's going to be a policy behind
it.

Now, if governments could manipulate the architecture of the city as
completely and as easily as I think it's possible to regulate the
architecture of cyberspace, you would see much more regulation through
architecture in the city and much less regulation through laws in the
city.

What we'll see in cyberspace is that it's going to be easier to move
buildings around than it will be to change norms or to impose laws
directly. That's why code will be ripe for the state to try to regulate.
Part of the objective in saying "code is law" is not to say that code is
the equivalent of law, but rather to say that code can be a transmitter of
legal principles or legal values, and thus it will be the target of legal
regulation. We should think about what the regulation of code is in order
to understand what the values of cyberspace are.

FEED: "Control" -- another key word. We talked a few months ago in FEED to
Andrew Shapiro, who has a book out this year called The Control
Revolution.  You and Andrew have worked together on some things, right?

LESSIG: Yes, he was my student, and we have worked closely together since. 
And he lapped me with his own book. 

FEED: Is there a distinction between your two definitions of control? Is
yours, do you think, a darker one? Or different in some other way?

LESSIG: I think the prediction I'm making is darker than Andrew's. Andrew
is emphasizing the sense in which new technologies are going to empower
control -- bottom-up control -- and to displace top-down control. I
suppose I agree with that description as far as it goes. I'm a little bit
more pessimistic, though, about the uprising. I'm more of a believer in
the principle that it takes small fences to control large animals. These
tiny little regulations that get built into the code are tiny little
controls that can be quite significant in affecting behavior. Worse,
people don't notice and don't resist these things that don't seem crudely
coercive, like a police officer saying, "Do this, do that." They just seem
innocuous, like speed bumps.

FEED: Can you give a good example of one of those? 

LESSIG: Sure: privacy. We have an architecture of the web right now that
makes it hard to control what data are gathered about you. Cookies are one
part of that technology. The architecture right now is that you can turn
cookies on or turn cookies off. But if you turn cookies off, you then have
to decide whether to turn them back on every time you get to a certain
site, and 90 percent of your online life ends up being spent making that
decision. So most people just turn cookies on and live with it and that's
that.

Now, there's another possible response, which is to imagine an
architecture that's much more subtle in its ability to negotiate privacy
preferences. But rather than rising up to demand this other architecture,
most people just sit back, turn cookies on, and surf without thinking
about it much. There's the example: the cookie regime is a slight
technological burden, and it's a burden that we're willing to give away
personal information to avoid. It's a very easy way to channel us into
turning over information for free.
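
To make the mechanism concrete, here is a minimal sketch using Python's
standard library; the "visitor_id" name is illustrative, not from the
interview. The site tags the browser once, and the browser hands the tag
back on every later request unless cookies are turned off.

    # A minimal sketch of the cookie mechanism, using Python's standard
    # library. The "visitor_id" name is hypothetical, for illustration.
    from http.cookies import SimpleCookie

    # First visit: the server's response tags the browser.
    response_cookie = SimpleCookie()
    response_cookie["visitor_id"] = "abc123"   # lets the site link your visits
    print(response_cookie.output())            # Set-Cookie: visitor_id=abc123

    # Every later request: the browser sends the tag back automatically.
    request_cookie = SimpleCookie("visitor_id=abc123")
    print(request_cookie["visitor_id"].value)  # abc123 -- the site knows you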

FEED: Your book makes a very complicated argument about open source, or
"open code," as you call it in the book, and its relationship to
susceptibility to regulation. Would you explain that?

LESSIG: One argument of the book is that we should expect governments to
turn to the regulation of code as a way to bring about their regulatory
objectives. The government sees that it's hard to regulate people directly
in cyberspace, so if you regulate the code, then it becomes easier for the
government to regulate people in cyberspace. But one check on that is open
source software. To the extent the government attempts to regulate code,
if the code is closed code, then it's hard for people to notice the
regulation, it's hard for people to resist it, and it's hard for people to
substitute something else in. You basically have to accept the package as
it comes. But if it's open code, then in a sense it reveals its regulation
to anybody who can read the code, and it's relatively easy for people to
pull out the part they don't like and put something else in.
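
As a concrete, hypothetical sketch of what "regulation built into code"
can look like, consider a mandated blocklist; none of these names come
from the interview. In open code the check is visible and trivially
removable, while in a shipped binary it is neither.

    # A hypothetical "regulation" embedded in code: a mandated blocklist.
    # In open code, anyone can read this, see the policy, and patch it out;
    # in closed code, the same check ships invisibly inside the binary.
    from urllib.parse import urlparse

    BLOCKED_HOSTS = {"example.org"}    # the policy, in plain sight

    def may_fetch(url: str) -> bool:
        """Return False when the URL's host is on the mandated blocklist."""
        return urlparse(url).hostname not in BLOCKED_HOSTS

    print(may_fetch("http://example.org/page"))  # False: regulated away
    print(may_fetch("http://example.com/page"))  # True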

There's a nice parallel here: Remember automatically fastening seatbelts?
Here's a technology that was attempting to force people to use seatbelts.
It was open code in the sense that it was easy for people to disable it,
right? The state could have come in and said, "We're going to lock you up
for 50 years if you disable your automatic seatbelt." But that was
politically too costly, right? Instead, you set up code which attempted to
get people to behave in a certain way, but it permitted people to opt out
because it was open code; it couldn't help but be open code. That checked
the government's ability to enforce this regulation, because the
government wasn't about to force people into it directly.

The closed code equivalent would have been if there were some way to make
it absolutely impossible to start the car without a seatbelt connected,
with no way to get out of it. The argument in the book is that to the
extent that the world of cyberspace is closed code, it's easier for
government to regulate. To the extent that it's open code, it's going to
be harder for government to regulate code. You might think of open code as
a check on government regulation. But it's a check in the sense that it
forces the government to be transparent in its regulation.

FEED: So you're trying to persuade the libertarian factions that their
beloved open source can actually function as a check.

LESSIG: Right. It can function as a check in the way constitutionalists
want to think of checks.

FEED: That's great. One last thing: Y2K. You have a take on that which I
think is different from any of the ones we've been hearing. 

LESSIG: I used Y2K opportunistically. Some critics continue to obsess
about Y2K as this great disaster while at the same time taking the
position that the government should just stay out of the internet and let
things be. Most of the thrust of my book is an argument that if we just
let things be, the internet is going to go down a path that will actually
reduce freedom and enable government regulation much more than if we think
critically about how things are developing. And with Y2K, the ease with
which companies were able to get out of liability for bad products --
disclaiming liability for their software and thereby keeping themselves
free of government regulation -- is in part responsible for the lack of
concern about this bad product.

Ford Motor Company can't produce cars that are going to explode after five
years, or it would be sued. And yet it's easy for a software company to
disclaim liability for defects of exactly this kind, just because of legal
rules that allow it to opt out and thereby avoid responsibility.
Consequently, software companies don't spend as much time worrying about
their products and the quality of their products. The basic point is that
government could have played a role here in creating the incentives for
companies to be much more responsible with the code they produced.
Instead, what we have is this potential environmental concern.
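
For readers who missed the details, the "bad product" here is the old
practice of storing years as two digits, which breaks simple date
arithmetic at the 1999-to-2000 rollover. A minimal illustration, not
drawn from the interview:

    # The classic Y2K defect: years stored as two digits, so the
    # rollover from '99' to '00' breaks simple date arithmetic.
    def card_age_in_years(issued_yy: int, current_yy: int) -> int:
        """Naive age computation using two-digit years."""
        return current_yy - issued_yy

    print(card_age_in_years(99, 0))   # in 2000: -99 years instead of 1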

FEED: What do you think is going to happen with Y2K? This interview will
be published right after. (Presumably, that is, it will be published.) Are
you stockpiling cans?

LESSIG: No, but I'm going to be in Egypt. 

FEED: Really? 

LESSIG: Yes, where there are not many computers. So I'm going to be safe. 
[Laughs.]




© FEED Inc.


http://www.feedmag.com/re/re280_master.html




#  distributed via <nettime>: no commercial use without permission
#  <nettime> is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [email protected] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [email protected]