Editorial
ARTICLE IN PRESS
doi: 10.25259/STN_13_2026

Stop Reducing Us into Simplistic Numbers: An Academic Outcry!

Faculty of Pharmacy, Cairo University, Cairo, Egypt
Bioscience Research Laboratories, MARC for Medical Services and Scientific Research, Giza, Egypt
School of Medicine, Newgiza University, Giza, Egypt
Children’s Cancer Hospital, Cairo, Egypt
Human Genetics and Genome Research Institute, National Research Centre, Giza, Egypt
Ross University School of Medicine, Bridgetown, Barbados.
Corresponding authors: Prof. Ramy K. Aziz, Faculty of Pharmacy, Cairo University, Cairo, and Bioscience Research Laboratories, MARC for Medical Services and Scientific Research, Giza, Egypt. ramy.aziz@pharma.cu.edu.eg
Prof. Mohamed A. Farag, Department of Pharmacognosy, Faculty of Pharmacy, Cairo University, Cairo, Egypt. mohamed.farag@pharma.cu.edu.eg
Licence
This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-Share Alike 4.0 License, which allows others to remix, transform, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.

How to cite this article: Aziz RK, Abdelaziz AI, Ali SS, El-Kamah G, Siam R, Farag MA. Stop Reducing Us into Simplistic Numbers: An Academic Outcry! Sci Tech Nex. doi: 10.25259/STN_13_2026

“Everything should be as simple as it can be, but not simpler!”

Attributed to Albert Einstein[1]

Almost everyone working in academia is currently struggling with evaluation systems intended to rank the scientific output of individual scientists, journals, institutions, or entire countries. Publishers and ranking organisations compete to develop different systems, metrics, and “elite lists,” and seem particularly preoccupied with indexes, ranks, and percentiles. These evaluation systems are not merely laudatory, pinning prestigious medals on scientists and institutions; they also influence pivotal decisions affecting careers and funding.[2] Admission, recruitment, promotion, and award committees inescapably rely on criteria and metrics to make final choices between equally endowed applicants.

The key problem that we, academics and scholars, suffer from is the ‘numericalisation’ and oversimplification of assessment criteria, resulting in superficial, unjust, and often manipulable decisions. Using simple quantitative measures (number of publications, number of citations, H-index, publications in ‘highly ranked’ journals, society memberships, or total raised funds) may lead to selecting undeserving candidates. Even worse, and unfortunately, sometimes metric manipulators are the ones who benefit the most. Additionally, with the many current reports of misconduct, paper and citation mills, and the mass production of near-redundant publications (salami slicing), there is an urgent need to verify every form of scientific output to guarantee integrity. This verification itself requires reliable sources to avoid mislabelling or wrongly accusing authentic scientists.

To positively address these irregularities, we here propose some key concepts to be taken into consideration during the evaluation of scientific output.

KEY CONCEPTS

First, initial filtration is important: zero-tolerance issues (such as plagiarism, scientific misconduct, data or metric manipulation, or multiple imposed retractions) are all grounds for immediate rejection or disqualification of applications. These are clearly different from inadvertent oversights, voluntary retractions, and genuine mistakes.

Once the ethical filter has been cleared, it is important to take into consideration as many dimensions as possible in the career of a scientist/scholar/faculty member, to avoid one-dimensionality and oversimplification.

Having brainstormed this problem both independently and collectively, we recommend the following seed set of attributes or criteria for evaluation; this Editorial could thus be seen as a nucleus for novel metrics rather than an exhaustive list of evaluation rubrics. Here are the criteria we recommend measuring for each scholar/scientist, possibly with different weights:

1) Productivity: This simply means the volume of publications, proposals, patents, and any other research outcomes (e.g., software deposited in GitHub, datasets in sequence or spectral databases, or the foundation of an association).

2) Quality of research: Quality may be the hardest to measure but should not be neglected for that reason. Quality may be assessed via close reading of the work. In science, quality also depends on methodological rigour and the use of state-of-the-art technologies. With advances in large language models, we believe that artificial intelligence can help with quality and novelty assessment.

3) Uniqueness/innovation/novelty: This additional attribute could be reflected in innovation metrics (patents, substantial findings, or specific awards) as well as in low similarity indices. Novelty may sometimes go against quantitative or immediate ‘impact’ if the investigator is ahead of peers or is challenging agreed-upon paradigms.[3] A major factor affecting novelty and uniqueness is the research question/hypothesis itself.

4) Contribution (role): The above publication-based measures need an important discrimination: what is the role of the researcher/scholar? Key roles in a paper are usually the primary role (first, co-first, and sometimes second author) and the research director/PI/senior-author role (last, co-last, or corresponding author). In some fields or cultures that favour alphabetical author lists, these roles are not expressed by the position in the author list but rather by the defined ‘author contributions’ within the publication. These roles are substantially different and need to be taken into consideration as an important factor before any quantitative assessment.

5) Focus/independence: For awards or grant applications, in particular, it is important that the applicant has some research focus. A researcher’s focus and independence are reflected by his or her (i) adherence to one or a few research areas (which, of course, is career-stage dependent); (ii) divergence from the mentor(s)’ work; and (iii) funding acquisition as a principal investigator.

6) Maturity/scientific school: While maturity could be seen as an extension of focus/independence, it is more reflected in substantial contributions to one or more scientific fields or subfields, the ability to secure and sustain funding, and the development of extensive independent research. Maturity is equally expressed in mentoring juniors and building research programmes, which can be concretely reflected in the phasing out of leading authorship roles and the gradual assumption of senior/corresponding authorship positions. By definition, maturity is proportional to years of experience and should thus be weighted appropriately for different levels. However, in absolute terms, a scholar with a high ‘maturity index’ is one who can be considered a master of his or her field.

7) Impact: Impact is most often measured by citations of a publication, page views or hits on a website, software downloads, etc. More sophisticated measurements include article-based metrics (e.g., citations per article, media coverage, alternative metrics) and investigator/career-based metrics (e.g., H-index,[4] C-score,[5] M-quotient[6]). Among the important adjustments to citation metrics is field normalisation: the Field-Weighted Citation Impact (FWCI) is a normalised indicator that compares a candidate’s citation performance to the global average within the same field, year, and document type (URL: https://metrics-toolkit.org/metrics/field_weighted_citation_impact/). FWCI may enable fair evaluation across disciplines by correcting for differences in citation practices and research maturity, although it is challenged by the difficulty of assigning a paper to a certain field (whether by keywords, journal specialisation, or author affiliations). (An illustrative computational sketch of some of these metrics appears after the note below.)

8) Trust factor(s): Trust is a function of both reproducibility and integrity and reflects confidence in the rigour of a researcher or an institution. While trust used to be deduced from all the above criteria, it currently requires more attention because of pervasive misconduct and the ease of publishing online, in addition to the fundamental need to reproduce and validate scientific findings. Other trust markers are peer-validation tools (e.g., reference letters, interview articles, invited talks at reputable conferences, etc.). On the other hand, ‘mistrust factors’ include unresolved PubPeer records (https://pubpeer.com/), repeated imposed retractions, and publication in predatory journals.

*Note: For integrity issues (including metric gaming, paper mills, predatory journals, PubPeer records, retractions, etc.), the opinion of expert evaluators remains important to judge whether any of the above cases seems sporadic, unintentional, or the result of a collaboration in which a breach is hard to detect (e.g., an integrity breach by a foreign collaborator’s student, which is quite hard to verify, or a trespass by a member of a large consortium of authors). On the other hand, repeated incidences, even if proven unintentional, could reflect a lack of rigour, a lack of attention, or a lack of real contribution. In any case, early judgment should be avoided, and the benefit of the doubt should be given to any suspect.
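To make the career-based measures under criterion 7 concrete, the sketch below shows how three of the cited metrics can be computed from a list of per-paper citation counts: the H-index,[4] the M-quotient (H-index divided by years since first publication),[6] and a simplified FWCI-style ratio. This is a minimal illustration with made-up numbers, not an official implementation; in particular, the field baseline (the expected citations for similar documents of the same field, year, and type) is assumed here to be supplied by the evaluator, whereas real FWCI values come from large curated databases.

```python
from typing import Sequence


def h_index(citations: Sequence[int]) -> int:
    """Largest h such that at least h papers have h or more citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h


def m_quotient(citations: Sequence[int], career_years: float) -> float:
    """H-index normalised by career length (years since first publication)."""
    return h_index(citations) / career_years if career_years > 0 else 0.0


def fwci_like(article_citations: int, field_baseline: float) -> float:
    """Simplified field-weighted ratio: citations received divided by the
    average citations of similar documents (same field, year, and type).
    A value of 1.0 means 'cited exactly as expected for the field'."""
    return article_citations / field_baseline if field_baseline > 0 else float("nan")


# Made-up example: 10 papers over an 8-year career (illustrative only)
paper_citations = [45, 30, 22, 15, 9, 7, 4, 3, 1, 0]
print(h_index(paper_citations))                  # 6
print(round(m_quotient(paper_citations, 8), 2))  # 0.75
print(round(fwci_like(22, 11.0), 2))             # 2.0 (twice the field average)
```

Even this toy example shows why a single number misleads: the same citation list yields very different impressions depending on career length and field baseline, which is precisely the multidimensionality argued for above.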

IMPLEMENTATION

The above ideas are not particularly novel, and we do not claim that this discussion is exclusively ours. Most importantly, we are aware of, and greatly respect, the DORA initiative (URL: https://sfdora.org), among others. However, these ideas do not seem to have been sufficiently implemented on the ground. We highly value and recommend bottom-up and top-down initiatives to save science from objectification and superficiality, and above all: to keep truth, rather than glamour, as the purpose of science.

a. One way to implement such measures is to switch from single numbers to multifaceted, yet still simply visualised, systems. Radar plots are relatively easy to perceive and still summarise a variety of criteria in one figure. An example radar plot [Figure 1] represents a comparison between three researchers with different breadth and depth of productivity and excellence (a minimal plotting sketch follows the figure caption below).

b. Oscar-like awards to scientists: this might sound over-simplistic, but the cinema industry seems to be ahead of scholarly evaluation. The US Academy of Motion Picture Arts and Sciences gives annual role-dependent awards (Oscars) that distinctly reward primary players (main actors) vs. secondary ones, and that differentiate directors from highly skilled technicians. As such, research awards could be reimagined, e.g., best career-long primary author, best publication by a primary author, best principal investigator (often last author), or best technical contribution by a ‘middle’ author.

Figure 1: A radar plot comparing three imaginary researchers based on the eight proposed criteria in this article.
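For readers who want to produce such a figure, here is a minimal matplotlib sketch in Python. The axes match the eight criteria proposed above, but the researcher names and 0-10 scores are entirely hypothetical placeholders, not the data behind Figure 1.

```python
import numpy as np
import matplotlib.pyplot as plt

# The eight criteria proposed in this Editorial
criteria = ["Productivity", "Quality", "Novelty", "Contribution",
            "Focus/independence", "Maturity", "Impact", "Trust"]

# Hypothetical 0-10 scores for three imaginary researchers (placeholder data)
researchers = {
    "Researcher A": [9, 5, 4, 7, 6, 8, 9, 6],
    "Researcher B": [4, 9, 8, 6, 8, 5, 5, 9],
    "Researcher C": [6, 6, 6, 6, 6, 6, 6, 6],
}

# Evenly spaced angles around the circle; repeat the first to close polygons
angles = np.linspace(0, 2 * np.pi, len(criteria), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
for name, scores in researchers.items():
    values = scores + scores[:1]  # close the polygon
    ax.plot(angles, values, label=name)
    ax.fill(angles, values, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(criteria, fontsize=8)
ax.set_ylim(0, 10)
ax.legend(loc="upper right", bbox_to_anchor=(1.3, 1.1))
plt.show()
```

The design choice matters here: a radar plot keeps each criterion visible and separately weighted, so a broadly strong profile and a narrowly spiked one cannot collapse into the same single number.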

DECLARATIONS

With all respect to technological advances, the authors did not feel the need to seek assistance from generative AI or large language models at any stage of drafting or writing this manuscript.

Author contributions

All authors equally contributed to developing the ideas and concepts in this article. The manuscript was drafted during round-table meetings between the authors, as part of an unpublished white paper entitled: “Ideas to improve current researcher evaluation methods”. RKA conceived the idea, initiated the discussion, and transcribed the suggestions of the round-table participants. All authors read, edited, and approved the final version.

References

1. Calaprice A, ed. The Ultimate Quotable Einstein. Princeton University Press; 2010.
2. Holden G, Rosenberg G, Barker K. Bibliometrics: A potential decision making aid in hiring, reappointment, tenure and promotion decisions. Social Work in Health Care. 2005;41:67-92.
3. Kuhn TS. The Structure of Scientific Revolutions (2nd ed). The University of Chicago Press; 1970.
4. Hirsch JE. An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America. 2005;102:16569-16572.
5. Ioannidis JPA, Klavans R, Boyack KW. Multiple citation indicators and their composite across scientific disciplines. PLoS Biology. 2016;14:e1002501.
6. Thompson DF, Callen EC, Nahata MC. New indices in scholarship assessment. American Journal of Pharmaceutical Education. 2009;73:111.
