The subject of the accuracy of the Biblioblog Top 50 has come up again, this time at Tolle Lege. I noted in the comments on that post that Alexa’s traffic ratings (upon which the Top 50 is based) are only loosely correlated with actual pageviews, since they rely heavily on visits by Alexa Toolbar users, who make up only a small minority of Internet users. In response, Rob asked, “Are Alexa’s Ratings Suspect?”:
Is Alexa’s system suspect and should it be used? Or do you find that many people do indeed have the Alexa toolbar installed and so it’s a good way to evaluate statistics? OR, is all this really a ploy from Alexa to get people to install the toolbar, and truth be told the toolbar doesn’t really matter?
I agree with those who suggested that the higher a blog’s traffic, the more accurate Alexa’s rating is going to be, while the lower the traffic (and thus, the smaller the sample size for Alexa to work with), the less reliable the rating will be. That’s why the top 25 or so tends to be more stable in each month’s rankings, while those of us lower down the totem pole can see our Alexa ratings jump by several million up or down, even when our traffic remains steady.
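The sample-size point is easy to demonstrate. Here is a small simulation (the blogs and their pageview counts are made up for illustration): if Alexa only observes the visits of a small “toolbar panel,” the measured traffic share of a high-traffic blog is quite stable, while a low-traffic blog’s measured share swings wildly from sample to sample, just as the rankings do.

```python
import random

random.seed(0)

# Hypothetical daily pageview counts for five blogs (true popularity).
true_views = {"A": 10000, "B": 5000, "C": 200, "D": 150, "E": 120}
total = sum(true_views.values())

def measured_share(blog, panel_size, trials=1000):
    """Simulate many 'toolbar panels' of panel_size observed visits
    and return the mean and standard deviation of the blog's
    measured traffic share across those panels."""
    p = true_views[blog] / total
    shares = []
    for _ in range(trials):
        hits = sum(random.random() < p for _ in range(panel_size))
        shares.append(hits / panel_size)
    mean = sum(shares) / trials
    sd = (sum((s - mean) ** 2 for s in shares) / trials) ** 0.5
    return mean, sd

for blog in true_views:
    mean, sd = measured_share(blog, panel_size=500)
    # The relative error (sd / mean) is far larger for the
    # low-traffic blogs, so their rankings bounce around.
    print(f"{blog}: true share {true_views[blog] / total:.3f}, "
          f"measured {mean:.3f} +/- {sd:.3f}")
```

With a panel of 500 observed visits, blog A’s measured share is off by only a few percent, while blog E’s can easily double or halve between samples, which is exactly the jumping-by-millions effect described above.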
If we really wanted as accurate a listing as possible based on Alexa’s numbers, we’d probably have to limit it to a Top 10, or start measuring pageviews at the blog level. Theoretically, this could be done by integrating a statistics package into the Biblioblog banner and getting everyone to install it. But someone who knows a great deal more about programming than me would have to build such a widget and pay for bandwidth and server space to collect all that data. There would also be privacy issues and who knows what other headaches.
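For what it’s worth, the core of such a widget is not terribly exotic. A minimal sketch, assuming a purely hypothetical setup in which each blog embeds a tracking image like `<img src="http://tracker.example/hit?blog=NAME">` in the shared banner, could look like this (the server name, port, and blog identifiers are all invented for illustration; the real headaches of bandwidth, hosting, and privacy remain):

```python
# Minimal sketch of a hypothetical shared pageview counter.
# Each blog's banner embeds a 1x1 tracking image pointing at this
# server; every image load increments that blog's pageview tally.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from collections import Counter

pageviews = Counter()

# A 1x1 transparent GIF to return to the browser.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

class HitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse ?blog=NAME from the request URL and count one view.
        query = parse_qs(urlparse(self.path).query)
        blog = query.get("blog", ["unknown"])[0]
        pageviews[blog] += 1
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

# To run the counter:
#   HTTPServer(("", 8000), HitHandler).serve_forever()
```

Ranking blogs would then just be a matter of sorting `pageviews` each month. The hard parts, as noted above, are not the code but the hosting, the trust, and the privacy implications of one party collecting everyone’s traffic.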
Is all of that really worth it to have a slightly more accurate measure of the most frequently visited biblioblogs? As has been emphasized ever since N.T. Wrong began the thing, the “Top 50” was never about an objective measure of the best or even the most popular biblioblogs. It’s simply a fun way of highlighting some of the better blogs out there–and it certainly does that, even if there will always be others that could have been included but weren’t. Of course, even if it did reliably measure popularity, it still wouldn’t tell us which blogs are “best.” As their own disclaimer insists:
In Biblical Studies the ability to write meaningful pieces that only you and, maybe, one other person in the world understand is the zenith of achievement. The Biblioblog Top 50 is thus no indication of the worth or otherwise of the blogs involved. As Jesus the Galilean once said, “Οὕτως ἔσονται οἱ ἔσχατοι πρῶτοι καὶ οἱ πρῶτοι ἔσχατοι”.
In a way, the volatility of the list is actually a good thing, as it ensures that more blogs turn up in the list than otherwise would. As long as everyone understands that the relative rankings are, at best, merely a rough snapshot and not an objective measure, taking drastic measures to make it more “accurate” would only obscure the fact that the value of a blog is not measured by its traffic but by the quality of the conversations it fosters–whether between two people or a hundred.
Whatever its failings, the Biblioblog Top 50 has certainly sparked its share of worthwhile conversations, and not just about its accuracy, so it would be a shame to ruin all that in some misguided quest for precision. Besides, this way even the lowliest among us can hope that this will be the month Alexa accidentally inflates our numbers and thrusts us into the blessed realms known only to James McGrath, Ben Witherington and Jim West… or else casts us down into obscurity.