Metcalfe’s law says that “the value of a network increases proportionately with the square of the number of its users”. The law applies to the Internet, social networks, the web and any other network of that kind. The idea is to be able to put a value on the network.
Metcalfe’s law has been used in the context of web 2.0, or rather all sorts of people are trying to see if it can help us understand it all a bit better. There are a number of non-believers in Metcalfe’s law: Bob Briscoe and Benjamin Tilly, for example, don’t like it at all. Simeon Simeonov, together with Bob Metcalfe, addresses this himself, and it’s a nicely put argument.
Whether it is right or wrong is a very long debate to have, well beyond the scope of this post or even this blog, so we will simply take Metcalfe’s law at face value and see if it can work in a web 2.0 context. In this sense it would be something like: the value of a service is given by the number of users it has.
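Taken at face value like that, the law can be sketched in a few lines of code (a toy illustration, not anyone’s official formula): the number of distinct potential links among n users, and a value that simply grows as n squared.

```python
def potential_links(n: int) -> int:
    """Distinct user-to-user links possible among n users: n*(n-1)/2."""
    return n * (n - 1) // 2

def metcalfe_value(n: int) -> int:
    """Network value proportional to n^2 (proportionality constant of 1)."""
    return n * n

for n in (10, 100, 1000):
    print(n, potential_links(n), metcalfe_value(n))
# 10 45 100
# 100 4950 10000
# 1000 499500 1000000
```

Doubling the number of users roughly quadruples both counts, which is the whole point of the law.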
The more users there are, the more links there are, and the number of potential links increases every time a user joins (the debate is over the fact that the value of each link is not equal). I really liked the paper by Hendler (Rensselaer Polytechnic Institute) and Golbeck (University of Maryland). It’s called “Metcalfe’s law, web 2.0 and the semantic web”.
The problem is nicely summarized in the abstract:
“The power of the Web is enhanced through the network effect produced as resources link to each other with the value determined by Metcalfe’s law. In Web 2.0 applications, much of that effect is delivered through social linkages realized via social networks online. Unfortunately, the associated semantics for Web 2.0 applications, delivered through tagging, is generally minimally hierarchical and sparsely linked. The Semantic Web suffers from the opposite problem. Semantic information, delivered through ontologies of varying amounts of expressivity, is linked to other terms (within or between resources) creating a link space in the semantic realm. However, the use of the Semantic Web has yet to fully realize the social schemes that provide the network of users.”
Interestingly, they mention Tim O’Reilly, who said that the importance of web 2.0 is centered around content creation, but that the critical thing about it is RSS, permalinks and other kinds of linking technology. They say that the network effect comes from the social constructs within the sites, and that the value of the network can be deduced from the links between the people who interact in them.
They also rightly point out that a shortcoming of web 2.0 is that tags don’t create much of a link space; tags are always sparser than links. This is why clustering efforts are running into problems at the moment: there isn’t much to go on. More work is being done to automate tagging, and ontologies, taxonomies and many other structures are being tried and tested.
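A tiny sketch of why flat tags give a sparser link space than explicit links (the post names and tags below are invented for illustration): tagging attaches items to labels but creates no item-to-item edges on its own, so any connection between items has to be inferred, for example through shared tags.

```python
# Invented example: three blog posts, tagged vs explicitly linked.
tagged = {
    "post-1": {"semweb", "rdf"},
    "post-2": {"semweb"},
    "post-3": {"ajax"},
}
explicit_links = [("post-1", "post-2"), ("post-2", "post-3")]

# Tagging alone yields no post-to-post edges; at best we can *infer*
# a connection when two posts share a tag.
posts = list(tagged)
inferred = [
    (a, b)
    for i, a in enumerate(posts)
    for b in posts[i + 1:]
    if tagged[a] & tagged[b]
]

print(len(inferred), len(explicit_links))  # 1 2
```

Even in this toy case the inferred link space is thinner than the explicit one, and it only gets worse as tag vocabularies drift apart.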
RDF, OWL, RDFS and so on are all about assigning URIs in order to represent relationships. The authors are right when they say that the most important thing about these languages is that they provide “common referents”. The latent value of the semantic web is in the vocabularies, because we can assess the value and the other characteristics of terms based on how they link to each other and on the relationships they have and share.
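The “common referents” point can be sketched with plain tuples standing in for RDF triples (the URIs and predicates are made up, and no real RDF library is involved): two independently written statements become linked the moment they use the same URI.

```python
# Hand-rolled triples; all URIs and predicate names are invented.
vocab_a = [("http://example.org/people#alice", "rdf:type", "foaf:Person")]
vocab_b = [("http://example.org/people#alice", "ex:worksAt",
            "http://example.org/orgs#acme")]

merged = vocab_a + vocab_b

# A single shared referent joins the two vocabularies: everything
# said about it, anywhere, can now be collected in one place.
about_alice = [(p, o) for s, p, o in merged
               if s == "http://example.org/people#alice"]
print(len(about_alice))  # 2
```

Neither vocabulary knew about the other, yet the shared URI is enough to traverse from one into the other.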
They say that Metcalfe’s law comes into play here again because “the more terms to link to, and the more links created, the more the value in creating more terms and linking them in”.
The drawbacks they describe mostly revolve around the fact that our early attempts at semantic web evolution have failed because of the amount of tagging needed. The whole tagging and folksonomy movement hasn’t worked because it’s flat and doesn’t properly exploit the links between the elements.
They mention FOAF as the most successful semantic web effort to date. I have to agree with this, and I have blogged about it before.
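As a toy illustration of why FOAF fits Metcalfe’s law so well (hand-rolled tuples with invented names, not real RDF tooling): the foaf:knows statements are exactly the explicit social links that make up the network.

```python
# Invented FOAF-style triples describing a tiny social graph.
foaf = [
    ("#alice", "foaf:name", "Alice"),
    ("#alice", "foaf:knows", "#bob"),
    ("#bob", "foaf:knows", "#carol"),
    ("#alice", "foaf:knows", "#carol"),
]

# Each foaf:knows triple is one explicit edge in the social graph.
social_links = [(s, o) for s, p, o in foaf if p == "foaf:knows"]
print(len(social_links))  # 3
```

The links are explicit and machine-readable, which is precisely what the tagging approach lacks.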
The main problem is that tagging (as on Del.icio.us) isn’t very useful because it’s not expressive enough and it isn’t structured. OWL, for example, is both, and once the teething problems are ironed out it will be much easier to extract important and useful data from the web. We just need to learn how to use all of this new technology effectively.
“Metcalfe’s law makes it clear that the value of these systems, viewed as networks of communicating agents (whether human or machine), arises from the many connections available between online resources. To exploit this space, however, there must be explicit linkages between the resources: when it comes to the network effect, if you don’t have links, you don’t get it.”
So basically Metcalfe’s law allows us to see the enormous number of possible linkages, as well as the current ones. It is truly staggering. Imho Metcalfe’s law is good enough here to help us form an idea of the vastness we are dealing with. Obviously not all links are equal and there is the issue of valuing them adequately, but surely that is something we can busy ourselves with once we have the full picture.
Why should you care?
Metcalfe’s law is simply a stab at a metric for evaluating the landscape. What it shows is that the landscape is huge, and we don’t really need to be told much more. It’s important to get into ontologies, play with FOAF, and see how you can include your site in this model.