Responsible metrics

If you are using metrics to evaluate research, ensure the metrics you choose accurately and fairly reflect impact.

Traditional metrics have limitations. Researchers and research committees should acknowledge these limitations and use metrics responsibly.

To use metrics responsibly, follow four principles: use appropriate metrics, use robust metrics, provide context for metrics, and be transparent about the metrics used.

Choose appropriate metrics

Choose a metric that accurately captures what you are trying to measure

Each metric has a methodology that limits the information it describes. Using a metric to describe impact beyond what its methodology captures can be risky.

Journal impact and prestige should not be used as indicators of the impact or quality of an individual article published in that journal. At most, these metrics may reflect on the author's repute, since publishing in a desirable journal is itself competitive.

Use caution when interpreting a metric, and ensure you stay within its scope.

Can your work be captured by the metric you want to use?

Metrics are limited by their source data. As most metric tools are provided by databases, their sources are the research outputs indexed in those databases. The metric’s accuracy will be affected if your work is not captured by the database. This can happen for several reasons, such as:

  • Some types of research output (e.g. creative works, performances, datasets, software) are rarely indexed in databases.
  • The discipline is poorly represented in that database (e.g. SciVal metrics are poor indicators for humanities subjects, as they are sourced from the Scopus database, which has low coverage of humanities journals).

Choose robust metrics

Use metrics with accurate, valid methodologies. Avoid flawed metrics, even if they are commonplace.

The h-index metric has been used in the past as an indicator of researcher impact. However, the h-index does not account for discipline-specific citation patterns or for career length.

Because of this, it poorly represents active early career researchers and researchers in low-citation fields. These flaws mean the h-index is no longer considered an appropriate or accurate metric for general use.
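To see why career length matters, consider how the h-index is calculated: it is the largest number h such that a researcher has h outputs with at least h citations each. The sketch below is illustrative only; the citation counts are invented to show that the index is capped by the number of outputs, regardless of how heavily each output is cited.

```python
def h_index(citations):
    """Largest h such that h outputs each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# A senior researcher with many modestly cited outputs (invented counts):
senior = [10, 9, 7, 6, 5, 4, 4, 3, 2, 1]
# An early career researcher with two highly cited outputs (invented counts):
early = [80, 60]

print(h_index(senior))  # 5
print(h_index(early))   # 2 -- capped by output count, not by citation impact
```

Despite far higher citations per output, the early career researcher's h-index cannot exceed 2, because they have only two outputs.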

Look for discipline-normalised or field-weighted metrics

Robust metrics will consider disciplinary trends, output age and publication type through weighted calculations. Some disciplines (such as medicine and the sciences) have higher productivity and citation rates than slower-citation, lower-turnover areas (such as humanities fields).

This means that citation-based metrics are skewed in favour of high-citation fields. Citation metrics may be skewed in other ways as well. For example, an older article may have a higher citation metric than a recent publication, simply because it has had more time to accumulate citations. Articles and reviews are likely to have higher citation metrics than books.

Normalised or weighted metrics can help account for differences across disciplines or demographics, making outputs more widely comparable. For multidisciplinary research, normalised or field-weighted metrics may not be appropriate since the output is likely to span multiple research areas.
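A field-weighted metric works by dividing an output's actual citations by the average citations of comparable outputs (same field, age and publication type). The sketch below is a simplified illustration in the spirit of metrics such as SciVal's Field-Weighted Citation Impact; the baseline figures are invented for demonstration and are not real disciplinary averages.

```python
# Invented baselines: (field, years since publication) -> average citations
# for comparable outputs. Real tools derive these from database-wide data.
EXPECTED_CITATIONS = {
    ("medicine", 3): 25.0,
    ("history", 3): 2.0,
}

def field_weighted_score(citations, field, age_years):
    """Ratio of actual to expected citations (1.0 = field average)."""
    expected = EXPECTED_CITATIONS[(field, age_years)]
    return citations / expected

# Raw counts favour the medicine paper (20 vs 5 citations), but the
# weighted scores reverse the picture:
print(field_weighted_score(20, "medicine", 3))  # 0.8 -- below its field average
print(field_weighted_score(5, "history", 3))    # 2.5 -- well above its field average
```

The weighting makes the two outputs comparable: each score measures performance relative to its own field's citation norms rather than in absolute counts.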

Regularly review the metrics you use. Ensure they are still appropriate and valid.

When conducting a review or evaluation, it is essential to ensure that previously used metrics are still fit for purpose. For example, while the h-index has been widely used and accepted for author assessment in the past, concerns about its methodological flaws have led to its reduced use.

Give context to metrics

Provide narratives and context. A number alone is an insufficient indicator of impact.

You must give a qualitative, expert interpretation alongside whatever metrics you include. This provides context to the numbers and allows you to describe the impact of your work beyond your research discipline. This is especially important when impact cannot be captured by citation-based metrics, such as policy reform, changes in practice or uptake of research beyond re-use in the literature.


Most academic review processes, including PBRF, emphasise qualitative evaluation of research impact rather than quantitative measures. Use of metrics alone is generally insufficient in such instances.

Be transparent when using metrics

Data collection and analysis methods need to be clearly stated. This enables verification and reproducibility.

Once you have selected a metric for inclusion in an impact report or evaluation, you should also include:

  • Your methodology
  • An explanation/interpretation of your results
  • Any variations or exclusions from the method and their justifications
  • Source data (e.g. the database or tool used, publication time span of the data collected, definition of metric)
  • Collection date (metrics will continue to change over time)

This extra information allows others (such as examiners or the researcher being evaluated) to revisit and reproduce the metrics.

Resources on responsible metric use