The Semantic Web

Luís Ángel Fernández Hermana - @luisangelfh
1 January, 2019
Editorial: 269
Original publication date: 29 May, 2001

A Deck of Aces

Human languages, naturally enough, are intended for human consumption. The digital language of zeros and ones, the “unique language” as we have previously dubbed it in this space, is meant to be understood by machines. Between the two lies a vast region brimming with possibility: a language that will allow machines and human beings to communicate with each other, in which machines can understand the meaning and content of human language. This is the aim of the so-called “semantic web” project led by Tim Berners-Lee, inventor of the Web and director of the World Wide Web Consortium (W3C), and James Hendler, lecturer at the University of Maryland and researcher at the Defense Advanced Research Projects Agency (DARPA), both of whom will be here in Barcelona in October to tell us about this project at en.red.ando’s II One Day Conference.

The semantic web, or “intelligent web” as many have already dubbed it, in effect means endowing the web with a high degree of intelligence and modifying many of the premises on which it presently rests. Nonetheless, this does not imply building a new web. Instead, it means increasing the capabilities of the present web through languages and tools that make it possible to store information on the Internet in a structured fashion, so that machines can grasp it at the level of semantic analysis. This is seen as the first step towards proactive cooperation between computers and human beings in the communication process. As Tim Berners-Lee says in an article written for the debate on electronic scientific publication conducted by the science journal Nature, “Instead of asking machines to understand people’s language, the new technology, like the old, involves asking people to make some extra effort, in repayment for which they will get substantial new functionality –just as the extra effort of producing HTML markup (HyperText Markup Language) is outweighed by the benefit of having searchable content on the web”.
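
To make the idea of “structured, machine-comprehensible information” concrete, the short Python sketch below expresses a handful of statements as RDF triples using the rdflib library. Both the library and the URIs are illustrative choices made here, not part of the project described above; they postdate this article.

# A minimal sketch of machine-readable meaning, using the Python rdflib
# library (a later tool, not part of the original project). The URIs and
# the ex:topic property are hypothetical, for illustration only.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, FOAF

EX = Namespace("http://example.org/vocab/")      # hypothetical vocabulary
g = Graph()
doc = URIRef("http://example.org/articles/269")  # hypothetical document URI

# HTML markup says how text should look; these statements say what it means:
# the resource is a document, it has a maker, and it is about a topic.
g.add((doc, RDF.type, FOAF.Document))
g.add((doc, FOAF.maker, Literal("Luís Ángel Fernández Hermana")))
g.add((doc, EX.topic, Literal("semantic web")))

# Serialised as triples, the statements are unambiguous to a machine.
print(g.serialize(format="turtle"))

The extra effort is exactly the one Berners-Lee describes: a person (or a tool) must assert the statements, and in return the content becomes searchable by meaning rather than by mere words.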

Research into the semantic web therefore implies developing a whole new range of tools, from advanced search engines to intelligent agents, which will enable machines to negotiate the content they store and establish relationships between concepts, even when those concepts come from very different disciplines or knowledge areas. By understanding the content of the documents stored on the web, the Net will exponentially increase its capacity to look for information, to negotiate it between machines, and to present it far more “accurately”. In other words, it will be able to display not just the information the user requested, but also material that, although absent from the original request, may be much more specific and reachable by other routes.
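
As a sketch of what that negotiation might look like, the query below asks a graph for “documents and their makers” rather than matching keywords in free text. It uses SPARQL through rdflib, both of which arrived after this article was written; the names and URIs remain hypothetical.

# A hedged sketch: answering a structured request over stored triples.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
doc = URIRef("http://example.org/articles/269")  # hypothetical URI
g.add((doc, RDF.type, FOAF.Document))
g.add((doc, FOAF.maker, Literal("Luís Ángel Fernández Hermana")))

# The machine answers a question about meaning, not about character strings.
results = g.query(
    """
    SELECT ?doc ?author
    WHERE { ?doc a foaf:Document ; foaf:maker ?author . }
    """,
    initNs={"foaf": FOAF},
)
for row in results:
    print(row.doc, row.author)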

And vice versa. In the same way, the semantic web will also be able to detect “information gaps” (demands without supply) and look for those who might fill them, thus opening up hitherto undreamed-of negotiation areas on the Net. With these features as its starting point, it is not strange that Berners-Lee is convinced that, “if correctly designed, the semantic web could contribute to the evolution of human knowledge in general”. In this way, his original idea of the web as a network of distributed intelligence, in which human beings and machines cooperate to improve communication, comes a little closer to an objective that, according to Berners-Lee, fell by the wayside because of the technical limitations of the time.

So far so good, but here comes the big question. If the web acquires the capacity to extract semantic knowledge from the documents it stores, in what language, so to speak, will it acquire this skill? What semantic structures will it be able to recognise? Those of all languages, or just English? How soon would other languages be incorporated into the new, spectacular functions of the semantic web? According to the researchers in charge of the Semantic Web Activity, the new machine language called DARPA Agent Markup Language (DAML) will allow the creation of information structures comprehensible to machines regardless of the original language. That is its long-term objective (whatever that means in Internet terms). In other words, for quite a few years to come, the web will not only tend towards English, because that will be the language it understands “best”, but its abilities will also have such a devastating effect that everything said so far about global and local languages, as well as linguistic policies on the Net, will have to be sifted through the filter of the semantic web, and what remains will have to be examined very carefully. This will undoubtedly be one of the most interesting areas under discussion in the debate we are planning for the end of October in Barcelona. The people attending en.red.ando’s II One Day Conference will have the opportunity to debate these fundamental issues with the promoters of this important innovation in web communication, as well as with those working on the “global brain” project, another line of research designed to complement the semantic web.
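
As a small, hedged sketch of what “information structures comprehensible to machines no matter what the original language” can mean, the triples below attach language-tagged labels to a single concept. The pattern again uses rdflib, a later tool; the concept URI is hypothetical, and DAML itself used its own XML-based syntax.

# One machine-level concept, labelled in several human languages. The
# structure the machine reasons over is identical whatever the language.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import RDFS

g = Graph()
concept = URIRef("http://example.org/concepts/network")  # hypothetical URI

g.add((concept, RDFS.label, Literal("network", lang="en")))
g.add((concept, RDFS.label, Literal("red", lang="es")))
g.add((concept, RDFS.label, Literal("xarxa", lang="ca")))

for _, _, label in g.triples((concept, RDFS.label, None)):
    print(label, label.language)

Whether such tagging can capture the semantic structures of every language, rather than merely its labels, is exactly the question raised above.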

Translation: Bridget King
