The Intelligent Web

Luís Ángel Fernández Hermana - @luisangelfh
21 August, 2018
Editorial: 231
Original publication date: 5 September, 2000

The devil lurks behind the cross

We shouldn’t be surprised if, very shortly, someone – an expert on the environment, for instance – gets a message that goes something like this, “We have detected that there is insufficient information online about decision-making related to crises caused by catastrophes in densely populated areas. You are an authority in the field. Please publish an article of 5,000 words on the subject including the relevant hypertext links. You have seven days to do this. If you ignore this message all your authorized pay facilities on the Net will be suspended, as well as your access to the Internet. These will only be restored on receipt and acceptance of the aforementioned article.” So who will the message be signed by? The Committee for Sufficient and Reliable Information on the Net (CoSuReInNe)? An NGO, a government department? No, it will almost certainly have been spontaneously generated by the “Web’s Global Brain”.

Fiction? Digital utopia? Not at all. Laboratories in the US and Europe have been working in this direction for several years now: converting the web into a central nervous system composed of thousands of millions of neurons which feed on intelligence distributed on the Net. One way towards this global mind is what Tim Berners-Lee, inventor of the Web, calls the "Semantic Web", where information contained on the Net is comprehensible to machines, so that they can reorganize, structure, complete and offer it at any time according to the needs, explicit or otherwise, of users.

A web of these characteristics would have to be able to operate "under its own volition" as a product of its ability to understand its own connections and user activity on the Net. A "Global Brain" directing each step (the evolution, growth, maturation and reproduction) of a kind of super-organism based on the collective intelligence of all its members. Put like this, the task seems gigantic, if not completely daunting, part of some kind of science fiction story. Nonetheless, there is something in this global brain that has more to do with the past than the future, for the fundamental requisites for this evolutionary step forward on the Web are already there, both from the conceptual point of view as well as the computer architecture needed for machines to be able to understand the information that we are feeding them.

The idea of a super-organism based on society’s collective intelligence is not, in fact, a new one. Herbert Spencer (1876-96) wrote about it in his “The Principles of Sociology”, where he described the evolution of society as an organism. H. G. Wells, author of “The Time Machine” amongst other works of fiction, also reflected on the emergence of a Global Mind. The French philosopher Pierre Teilhard de Chardin, Jesuit and paleontologist, whose writings were rigorously scrutinised by the Catholic church, proposed a “noosphere” (the sphere of the mind), a network of thoughts, information and communication that would cover the planet. Recently, Francis Heylighen, a young Belgian researcher, co-director of the Centre for Interdisciplinary Research “Leo Apostel” (CLEA) and editor of the Principia Cybernetica Project, has begun to develop a systemic evolutionary philosophy of the global mind.

His advantage over his historical predecessors in this line of thinking is evident: Heylighen has the Web at his disposal, a system of distributed intelligence of a global nature. In addition, he is not alone, nor does he have to work as a pioneer in virgin territory. Many centres are working in the same direction, from MIT, where Tim Berners-Lee heads a project aimed at getting machines to understand the information circulating among them and to negotiate it online, to the Los Alamos National Laboratory, whose Distributed Knowledge System (DKS) is starting to produce tangible results. And then, there in the middle, there we are, constantly creating, recreating and feeding the neurons of this brain which we think we dominate but that increasingly determines the way we behave, what we do and how.

So, what are the basic premises of the Web’s Global Brain? To start off with, like our brains, it has to be capable of structuring and organizing the information and knowledge it possesses, and in order to do this it has to generate, multiply and maintain its neuronal connections. In other words, it has to establish links according to user needs. This is what Principia Cybernetica’s Web server, developed by Heylighen and Johan Bollen, does. It updates and rebuilds its links constantly according to internaut demands and even lets some of them “die” if they are considered out of date because they are hardly ever or never consulted.

The machine assigns an algorithm to each visitor which allows it to track the path they take through the Web and memorise the history of their behaviour. The next time the visitor comes, this programme will show the pages that interested the user and adjust the structure of links to guide them through their sphere of interests. Principia Cybernetica’s server can even find shortcuts between places that it does not know but whose importance it deduces from the number of visits they attract. In this way, it builds up neuronal connections not just based on traffic flow or the type of consultation made, but also on the nature of the content itself.
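The mechanism described in the two paragraphs above can be sketched in a few lines of code. This is a minimal toy model, not the actual Principia Cybernetica server software: the class name, parameters and thresholds are all assumptions chosen for illustration. It captures the three behaviours the text mentions: links are reinforced each time a visitor follows them, rarely used links decay and "die", and shortcuts are created between pages that keep appearing together in visitors' paths.

```python
from collections import defaultdict

class AdaptiveWeb:
    """Toy model (hypothetical, for illustration) of a self-organizing
    link structure: traversed links are reinforced, unused links decay
    and die, and frequently co-visited pages gain direct shortcuts."""

    def __init__(self, reinforce=1.0, decay=0.9,
                 death_threshold=0.1, shortcut_threshold=3):
        self.weights = defaultdict(float)   # (src, dst) -> link strength
        self.cooccur = defaultdict(int)     # (a, b) -> co-visits in one path
        self.reinforce = reinforce
        self.decay = decay
        self.death_threshold = death_threshold
        self.shortcut_threshold = shortcut_threshold

    def record_path(self, path):
        """Strengthen each link the visitor actually followed."""
        for src, dst in zip(path, path[1:]):
            self.weights[(src, dst)] += self.reinforce
        # Non-adjacent pages visited in the same session are
        # shortcut candidates: promote them once seen often enough.
        for i, a in enumerate(path):
            for b in path[i + 2:]:
                self.cooccur[(a, b)] += 1
                if self.cooccur[(a, b)] >= self.shortcut_threshold:
                    self.weights[(a, b)] += self.reinforce

    def age(self):
        """Decay all links; delete ('let die') the rarely used ones."""
        for link in list(self.weights):
            self.weights[link] *= self.decay
            if self.weights[link] < self.death_threshold:
                del self.weights[link]

    def suggestions(self, page, n=3):
        """Strongest outgoing links: what to show a returning visitor."""
        out = [(dst, w) for (src, dst), w in self.weights.items()
               if src == page]
        return [dst for dst, _ in sorted(out, key=lambda t: -t[1])[:n]]
```

For example, two visitors who both browse `home -> energy -> crisis` would, with a shortcut threshold of 2, cause a direct `home -> crisis` link to appear, even though no author ever created it; a link that then goes unvisited would shrink with each call to `age()` until it disappears.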

From there it is just a short step to detecting the redundancy or shortage of information and knowledge in areas of interest to users, and an equally short step to trying to correct things at both extremes. At one end, suppressing what is, basically, a waste of neuronal space because it is not used, just as happens in our own brains. At the other, guided by user needs, finding the way to procure the necessary information to fill the knowledge gaps it detects as it goes along. In the end, if we are all going to benefit from the Global Brain, we are all going to be responsible in some way for its operation, knowledge or behaviour. The intelligence of the Web’s Global Brain will not only be the sum total of all the individual intelligences on the Net, but a fully evolving collective intelligence which it will be very difficult for us to understand from an individual standpoint. It’s a bit like what happens when we try to understand airline tariff systems or many other things that take place in the area of supposed “social intelligence” that we form part of.

So, how far away are we from the Web’s Global Brain? According to researchers, we have been working with it for quite some time now in a rudimentary, or embryonic, phase of its evolution. Heylighen maintains that if the technical bodies in charge of the adoption of protocols and standards on the Internet give the green light, intelligent agents (autonomous programmes) could be incorporated into servers now, as well as the algorithms capable of injecting a considerable degree of knowledge about the information the machines store. In five years’ time, the Web will behave with an intelligence that will convert it into the nerve center of a super-organism that each of us will form a tiny part of.

Five years is all that has lapsed since the Web burst into the public arena on the Net. In other words, what we are saying is that the “Global Brain” will form part of our daily lives in the blink of an eye. And we haven’t even begun to reflect on the social, cultural, political, educational, recreational, etc., implications of a live prosthesis of collective memory commanded by a brain made up of hundreds of millions of neurons distributed via interconnected computers being attached to our brains. Perhaps one of the characteristics of this prosthesis is that, as it develops, our critical faculties regarding its consequences diminish.

Translation: Bridget King