matthew fuller on 12 Mar 2001 16:50:11 -0000
<nettime> WebTracer interview
Interview with Tom Betts / NullPointer

NullPointer has recently released a beta version of a new web visualisation application, WebTracer. Downloadable from: http://www.nullpointer.co.uk/-/tracer.htm

MF: What are the questions you are asking about the structure of the web, and about the software that is being developed to use it, that suggest the approaches deployed in WebTracer?

NP: Well aware of the legacy of webmapping as a supposed demystifying device and fetishised formalistic perversion of form, I do not intend to decorate this project with too much hypothesis of cultural and social intent (there are others who could grace it much better than myself). However, I cannot deny that the intentions of the application are not primarily to aid webmasters in the analysis and development of their own sites but, as I hope is obvious, to repurpose the information that comprises hypertext and the web into another plane of perspective and interaction. The application deals with sites and pages as molecules and atoms; the resulting cellular structures reflect the information structures of the web. I find that the representation of the many shells and layers that guide our exploration and exploitation of cyberspace can help to reinforce the awareness that all information systems are guided by a great number of defining elements. The hardware used, the operating system, the software, the network protocols and finally the file structures themselves all mould the way that users interact with dataspaces and the way that they can create them.

MF: When you use the software it is clear that the arrangement of the relations between the nodes carries information in terms of the length of the linking line. What determines the magnitude of displacement from one node to the next, i.e., how can a user 'read' the information that the software displays spatially?

NP: The molecular structures created by the application are arranged spatially in terms of several different modifiers.
The program uses both the order of links as they appear on a page and the relative depth of links within the host webserver's html docs directory. The closer a node is to the base of a WebTracer structure, the closer that page lies to the index page of the whole site; additional subdirectories create distinct planes that are positioned along the vertical axis. Hence sites with strict and deep hierarchical file structures will create tall objects, whereas sites with flat or database-driven structures will result in a flatter series of planes or plateaus of information. The order in which these levels are built depends on the order of their appearance to the user, and each distinct directory path occupies its own horizontal plane. The color and length of any linking strand represent the direction and distance of that link within the structure that is being established.

MF: On your website, in the text accompanying some screenshots of the software in action, you use particular terms to describe these spatial arrangements, such as 'plateau', 'crown', 'tree' and so on. How much are these ways of describing the links a result of the way the WebTracer software spatially organises the display of links, and how much are they structures inherent to the particular websites that it hits?

NP: The particular structural forms that result from a WebTracer run on a site ('plateau', 'crown', 'tree') are a combination of both the order in which the program 'sees' the links and their paths and locations on the remote webserver. Although the display routines can be configured differently, the molecular model resulting from a 'trace' reflects very closely the information structure of the target site, both on a file-structure level and on an information-design level.
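The depth rule described here - each additional subdirectory lifting a node one plane above the index page - can be sketched in a few lines. This is not WebTracer's actual code, only a minimal Python illustration, assuming that a final path component containing a dot is a filename rather than a directory:

```python
from urllib.parse import urlparse

def plane_for_url(url):
    """Assign a page to a horizontal plane by the depth of its
    directory path on the server: the index page sits at level 0,
    each additional subdirectory raises the node one plane."""
    path = urlparse(url).path
    # Keep only non-empty path components.
    parts = [p for p in path.split("/") if p]
    # Heuristic (an assumption, not WebTracer's rule): a trailing
    # component with a dot in it is a filename, not a directory.
    if parts and "." in parts[-1]:
        parts = parts[:-1]
    return len(parts)
```

Under this sketch, a deeply nested page such as /docs/sub/page.htm sits two planes above the site index, producing the "tall object" effect described for strictly hierarchical sites.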
MF: We already have as commonplace the phenomenon of art and other websites being made to be viewable only through certain configurations of software and access speed, sites that make themselves visible only through very narrowly configured sets of software devices. The arguments for and against this echo, of course, some of those considered at the inception of the web and are ongoing, with the distinction between physical and logical mark-up of text etc. (oldskool!) For these sites, the import and export filters of software already constitute a hidden micropolitics of which file formats are accepted or interpretable and which are not, based around alliances between the different forms of organisation that generate these protocols and standards. And obviously these systems of gating and reading, of coding and decoding, operate at many different scales - including cultural ones - during any particular period of use of a piece of software. One other related thing that occurs frequently on the web is people blocking spiders, from search engines etc., from their sites - that is to say, blocking people/machines from reading their data in certain ways. I wonder, given a perhaps increased emphasis on 'using' or perceiving the data on a site in the 'correct' way, how you see WebTracer operating in this context?

NP: Well, there's quite a range of issues you have highlighted here, but as you point out they all stem from the same old internet (or hypertext) argument of freedom of form/media versus control of form/media. As I touched upon in answer to a previous question, the nature of the internet and associated technological media has meant that different parties see different means to different ends. The ongoing process of encoding the theoretically open system of the web is an inevitable development of its popularisation and commodification.
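The spider-blocking MF mentions is conventionally done through the robots exclusion protocol: a site publishes a robots.txt file, and well-behaved crawlers consult it before fetching pages (whether any given tracer honours it is up to its author). A minimal sketch using Python's standard robotparser, with a hypothetical rule set:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks all crawlers from /private/.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = RobotFileParser()
rp.parse(rules)

# A crawler asks permission before fetching each URL.
print(rp.can_fetch("*", "http://example.com/private/data.htm"))  # False
print(rp.can_fetch("*", "http://example.com/index.htm"))         # True
```

A site owner who wants to keep autonomous agents to one "official" reading of the site need only widen the Disallow rules; a crawler that ignores the file can still read everything, which is exactly the freedom-versus-control tension at issue.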
Reducing information to a series of eight-dot-three file formats and locking those formats into the development and distribution of software applications serves to create a language that is both arcane and specific. Such frames placed around the dataspace of the net have a dual purpose: on the one hand they contextualise and compartmentalise the medium into bite-sized chunks, which users can familiarise themselves with and which reflect already existing metaphors or schema; on the other hand they tie up data and medium with statements about ownership and intellectual property. With the definition of a system comes the ability to quantify it and commodify it. A natural extension of this practice is the concern over infringement of these definitions, or over alternative readings and systems (hence the blocking of autonomous agents etc.). The web has gone from a very open medium, which grew because of its inherent qualities of 'openness', into a system overloaded with the imposed frameworks and metaphors of commercialising agencies. There becomes an "official" way to browse, syndicated by whoever has the largest presence in the definition of the term. I'm not saying that applications such as WebTracer are in any way countering that trend (in a sense they are providing further reworkings), but perhaps they will make people aware that there are still different ways of viewing any system.

MF: You mention the difference between flatter, or database-driven, sites and those that have a more hierarchically ordered structure. Would you say that one of the things that WebTracer, and other pieces of software that map links between sites, does is effectively to flatten all sites into a 'plateau'?

NP: In a sense, yes, but the action is of course not a physical/dimensional flattening but rather a psychological reduction of the intricacies of data into one specific analysis. Webmapping software is concerned with certain features or issues in hypertext; the rest it can ignore in its resulting output.
Obviously there are many factors which affect and dictate the production of a website, but most webmapping software is reductive and formalistic.

MF: Following on from this, how do you see people using the software? How do you use it?

NP: I would like to see people using it in an almost sculptural way; there is a certain aesthetic kick out of revealing the inherent structure of a site which I think appeals to a lot of people. I would also like to think that it could be used practically as well, as an information-design analysis tool, but I suspect that it would need more commercial development for this. I have used it for both these purposes, but I think that what I enjoy most about it is the pseudo-filmic way you can move from node to node across a mapped site as if it were a medical examination. I have already had many suggestions from users of some very varied and creative ways of using the application, from both the designer and the user point of view.

MF: There's a bundle of other material on the nullpointer site, from the relocated material of dividebyzero.org to sound-generation software, in which you seem to be exploring other potential spaces for software to go. What are the key ways in which software can be developed that mainstream software is missing out on at the moment?

NP: I think that developing software is a real double-edged sword. As you write new software, you become acutely aware that you will be continually restricting aspects of its functionality to suit your needs. You can't help then but reflect on the way this process occurs in all the other software you use, and even in the tools you write your own software with. One of the few ways to counter this trend is the open source movement. Open source isn't just about code either; it relates to a whole set of attitudes that can benefit the resulting software. The video games industry thrives on the developer community and is one of the most cutting-edge sectors of the industry.
There is also a less visual but equally important area within the academic developer community (IRCAM, MIT etc.). Each area of the developer community has skills that can benefit the others. In my own work I try not to restrict myself to working in only one community or with one programming environment, and I will use code or approaches that are already available and then warp them to my own personal ambitions. I would like to see simpler products coming from the mainstream software market, but with a much greater facility for mods and patches to be developed by the user community. If it wasn't such a janky program, I'd love to see the Quake modmakers get to work on Microsoft Word ;)

# distributed via <nettime>: no commercial use without permission
# <nettime> is a moderated mailing list for net criticism,
# collaborative text filtering and cultural politics of the nets
# more info: [email protected] and "info nettime-l" in the msg body
# archive: http://www.nettime.org contact: [email protected]