September 15, 2015

Professors David J. Love and Supriyo Datta named Thomson Reuters Highly Cited Researchers for 2015

Professor Supriyo Datta
Professor David J. Love
Highly Cited Researchers 2015 represents some of the world's most influential scientific minds. About three thousand researchers earned this distinction by writing the greatest number of reports officially designated by Essential Science Indicators as Highly Cited Papers, those ranking among the top 1% most cited for their subject field and year of publication, the mark of exceptional impact.

Professors David J. Love and Supriyo Datta have been named Thomson Reuters Highly Cited Researchers for 2015. 

Highly Cited Researchers from Thomson Reuters is an annual list recognizing leading researchers in the sciences and social sciences from around the world. The 2015 list focuses on contemporary research achievement: only Highly Cited Papers in science and social science journals indexed in the Web of Science Core Collection during the 11-year period 2003-2013 were surveyed. Highly Cited Papers are defined as those that rank in the top 1% by citations for their field and publication year in the Web of Science. These data derive from Essential Science Indicators℠ (ESI). The fields are likewise those employed in ESI: 21 broad fields defined by sets of journals and, exceptionally, in the case of multidisciplinary journals such as Nature and Science, by paper-by-paper assignment to a field. This percentile-based selection method removes the citation disadvantage of recently published papers relative to older ones, since each paper is weighed only against others in the same annual cohort.
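
The percentile approach can be pictured with a short Python sketch that groups papers into (field, year) cohorts and flags the top 1% of each cohort by citations. The data layout here is hypothetical and purely illustrative; it is not Thomson Reuters' actual pipeline.

    from collections import defaultdict

    def flag_highly_cited(papers, top_fraction=0.01):
        """Flag papers in the top 1% by citations within each (field, year) cohort.

        `papers` is a list of dicts with 'field', 'year', and 'citations' keys;
        the keys and structure are assumptions made for this illustration.
        """
        cohorts = defaultdict(list)
        for paper in papers:
            cohorts[(paper["field"], paper["year"])].append(paper)

        flagged = []
        for cohort in cohorts.values():
            # Rank the cohort by citations and keep the top 1%.
            cohort.sort(key=lambda p: p["citations"], reverse=True)
            cutoff = max(1, int(len(cohort) * top_fraction))  # at least one paper
            flagged.extend(cohort[:cutoff])
        return flagged

Because every paper competes only within its own cohort, a well-cited 2013 paper can qualify even though it has had far less time to accumulate citations than a 2003 paper.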

Researchers who published Highly Cited Papers within an ESI-defined field were judged to be influential, and the production of multiple top 1% papers was interpreted as a mark of exceptional impact. Comparatively young researchers are more apt to emerge in such an analysis than in one dependent on total citations accumulated over many years, and one goal in generating the new list was to recognize early- and mid-career researchers as well as senior ones. How many researchers to include in the list for each field was determined by the population of the field, as represented by the number of author names appearing on all Highly Cited Papers in that field from 2003 to 2013. The ESI fields vary greatly in size, with Clinical Medicine the largest and Space Science (Astronomy and Astrophysics) the smallest. The square root of the number of author names indicated how many individuals should be selected: a field whose Highly Cited Papers carried, say, 10,000 distinct author names would yield roughly 100 selections (see the sketch after the next paragraph).

The first criterion for selection was that a researcher's Highly Cited Papers had to attract enough citations to rank in the top 1% by total citations in the ESI field in which they were considered. Authors of Highly Cited Papers who met this criterion in a field were then ranked by their number of such papers, and the threshold for inclusion was the paper count at which the list reached the size given by the square-root calculation. All who published Highly Cited Papers at the threshold level were admitted, even if the final list then exceeded the number given by the square-root calculation. In addition, as a concession to the somewhat arbitrary cut-off, any researcher with one fewer Highly Cited Paper than the threshold number was also admitted if total citations to his or her Highly Cited Papers were sufficient to rank that individual in the top 50% by total citations of those at or above the threshold level. The justification for this adjustment at the margin is that, in the judgment of Thomson Reuters citation analysts, it worked well in identifying influential researchers.
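
To make the selection rule concrete, here is a minimal Python sketch of one field's selection, assuming hypothetical per-author records of Highly Cited Paper counts and citation totals. It illustrates the rule as described above, not the analysts' actual code, and the margin adjustment is approximated with a simple median cut.

    import math

    def select_researchers(authors):
        """Select one ESI field's Highly Cited Researchers.

        `authors` is a non-empty list of dicts with 'name', 'hcp_count'
        (number of Highly Cited Papers), and 'hcp_citations' (total
        citations to those papers); the layout is an assumption.
        """
        # List size for the field: square root of the author-name population.
        target = round(math.sqrt(len(authors)))

        # First criterion: rank in the field's top 1% by total citations
        # to Highly Cited Papers.
        by_citations = sorted(authors, key=lambda a: a["hcp_citations"], reverse=True)
        eligible = by_citations[: max(1, len(authors) // 100)]

        # Rank eligible authors by number of Highly Cited Papers; the
        # inclusion threshold is the paper count at the square-root position.
        ranked = sorted(eligible, key=lambda a: a["hcp_count"], reverse=True)
        threshold = ranked[min(target, len(ranked)) - 1]["hcp_count"]

        # Everyone at or above the threshold is admitted, even if this
        # overshoots the square-root target.
        selected = [a for a in ranked if a["hcp_count"] >= threshold]

        # Margin adjustment: one fewer paper still qualifies if the author's
        # citations reach the median of those already selected (i.e. the
        # top 50% by citations of those at or above the threshold).
        median = sorted(a["hcp_citations"] for a in selected)[len(selected) // 2]
        selected += [a for a in ranked
                     if a["hcp_count"] == threshold - 1
                     and a["hcp_citations"] >= median]
        return selected

Because everyone at the threshold paper count is kept, the list can exceed the square-root target, exactly as the description above allows.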

Of course, many highly accomplished and influential researchers are not recognized by the method described above and do not appear in the new list. That would hold no matter which specific selection method was chosen. Each measure or set of indicators, whether total citations, h-index, relative citation impact, or mean percentile score, accentuates different types of performance and achievement. Here we arrive at something many expect from such lists but which is really unobtainable: an optimal or ultimate method of measuring performance. The only reasonable way to interpret a list of top researchers such as ours is to understand fully the method behind the data and results, and why that method was used. With that knowledge, the results may in the end be judged by users as relevant or irrelevant to their needs or interests.