Google Scholar Metrics can be accessed from Google Scholar, via a “Metrics” icon on the upper right of the home page.
The purpose of Google Scholar Metrics is to “help authors worldwide as they consider where to publish their latest article”. Two more excerpts from the Google Scholar Blog post, “Google Scholar Metrics for Publications”, are:
To get started, you can browse the top 100 publications in several languages, ordered by their five-year h-index and h-median metrics.
Scholar Metrics currently covers many (but not all) articles published between 2007 and 2011. It includes journal articles only from websites that follow our inclusion guidelines as well as conference articles and preprints from a small number of hand-identified sources. For more details, see the Scholar Metrics help page.
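For readers unfamiliar with the two metrics in that excerpt, here is a minimal Python sketch of the h-index (the largest h such that h of a publication’s articles have at least h citations each) and the h-median (the median citation count of the articles in the h-core). The citation counts are made up for illustration:

```python
import statistics

def h_index(citations):
    """h-index: largest h such that h articles have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # this article still has at least `rank` citations
        else:
            break
    return h

def h_median(citations):
    """h-median: median citation count of the articles in the h-core."""
    cites = sorted(citations, reverse=True)
    core = cites[:h_index(citations)]
    return statistics.median(core)

# Hypothetical citation counts for seven articles:
counts = [25, 8, 5, 3, 3, 2, 1]
print(h_index(counts))   # 3 (three articles have >= 3 citations)
print(h_median(counts))  # 8 (median of the h-core [25, 8, 5])
```

The h-median is Scholar Metrics’ tie-breaker of sorts: two venues with the same h-index can differ substantially in how heavily cited their h-core articles are.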
One noteworthy aspect of the top 100 publications in several languages is that RePEc (Research Papers in Economics) and the arXiv repository are included, and achieve high ranks (#4 and #5, respectively; Nature, New England Journal of Medicine and Science are ranked #1 to #3, respectively).
Another interesting aspect of this list of 100 publications, from an Open Access perspective, is that the PLoS journal ranked highest by h-index is PLoS ONE, at #63. Then come PLoS Biology (#83), PLoS Medicine (#88) and PLoS Genetics (#93).
The Eigenfactor method of ranking journals also identifies PLoS ONE as the top-ranked PLoS journal.
Different rankings are obtained using other measures. In particular, the 2010 Journal Impact Factor (JIF) for PLoS ONE is lower than that of the other six PLoS journals. And, on the basis of the 2010 Article Influence Score, PLoS ONE ranks lower than five of the other six PLoS journals.
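The divergence is less surprising once you recall how differently the JIF is computed: citations received in a given year to items published in the previous two years, divided by the number of citable items published in those years. A minimal sketch, with entirely hypothetical figures:

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year Journal Impact Factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by citable items from those years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical figures for illustration only:
# 12,000 citations in 2010 to articles from 2008-2009, over 2,500 citable items.
print(impact_factor(12000, 2500))  # 4.8
```

Because the JIF is an average over all recent items, a very large journal with many lightly cited articles (the PLoS ONE model) can have a modest JIF while still accumulating a large h-core.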
The SJR method for ranking journals also gives PLoS ONE a low rank in comparison with the other PLoS journals.
Comment: Which ranking method to believe? My answer is: none of them, if considered by themselves.
In 2009, Johan Bollen and three co-authors published “A Principal Component Analysis of 39 Scientific Impact Measures” [PLoS ONE 4(6): e6022]. One of their conclusions was that “scientific impact is a multi-dimensional construct”. Various measures place a different emphasis on the major dimensions of the construct (see Figure 2 of their paper). They also concluded that “the JIF and SJR express a rather particular aspect of scientific impact that may not be at the core of the notion of scientific ‘impact’.” They suggested that the JIF and SJR are indicative of journal “Popularity” rather than “Prestige”. Their results indicated that the h-index places less emphasis on “Popularity” and more on “Prestige” than do the JIF or the SJR.
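The idea behind that kind of analysis can be illustrated with a toy example: if several measures track the same underlying dimension, a principal component analysis of a journals-by-measures matrix concentrates the variance in a few components. This sketch uses synthetic data; the “popularity” and “prestige” factors and all numbers are invented for illustration, not taken from Bollen et al.:

```python
import numpy as np

rng = np.random.default_rng(0)
n_journals = 50

# Synthetic data: 50 journals scored on 6 hypothetical impact measures.
# Measures 0-2 track a "popularity" factor; measures 3-5 track "prestige".
popularity = rng.normal(size=n_journals)
prestige = rng.normal(size=n_journals)
measures = np.column_stack(
    [popularity + 0.1 * rng.normal(size=n_journals) for _ in range(3)]
    + [prestige + 0.1 * rng.normal(size=n_journals) for _ in range(3)]
)

# PCA via SVD of the column-centered matrix.
centered = measures - measures.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / (s**2).sum()  # fraction of variance per component
print(explained.round(3))  # the first two components dominate
```

Six nominally distinct measures collapse onto two components here, which is the toy analogue of Bollen et al.’s finding that 39 measures span a small number of major dimensions.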
Bollen et al. didn’t include the Eigenfactor approach in their set of impact measures; they commented only that it should be considered for inclusion in future comparisons of impact measures. The Article Influence Score was also absent from their set.
So, what to conclude about Google Scholar Metrics? The current focus on the h-index is useful, but the metrics would be more useful if they included other measures. Of particular interest would be measures of “usage”, analogous to those identified by Bollen et al. as “Rapid” measures of “Prestige”. From this point of view, the editorial by Gunther Eysenbach, “Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact” [J Med Internet Res 2011;13(4):e123] is of particular interest. Might tweets quickly predict “Popularity” more than they predict “Prestige”?