Dr. Enver Zerem: A Multifaceted Journey From Clinical Practice to Scientometrics

Dr. Enver Zerem

Dr. Zerem’s Journey in Medical Science

Dr. Enver Zerem is a Professor of Internal Medicine at the University of Tuzla and Mostar University School of Medicine, Bosnia and Herzegovina (BiH). His career includes serving as the Director of Scientific Research and Education at the University Clinical Center in Tuzla from 2007 to 2018, and as the head of the Council of Doctoral Studies at the School of Medicine, University of Tuzla between 2012 and 2017. He also held the position of President of the Managing Board of the University of Tuzla from 2012 to 2016 and was the Vice Dean for Clinical Education at the School of Medicine, University of Tuzla from 2002 to 2004.

Dr. Zerem earned his medical degree from the University of Belgrade School of Medicine (Serbia), becoming a specialist in Internal Medicine in 1983 and in Gastroenterology in 1988. He obtained his MSc degree in 1991 and his PhD degree in 1994, both from the University of Tuzla.

Dr. Zerem’s research primarily focuses on the diagnosis and treatment of acute pancreatitis, the use of interventional ultrasound in gastroenterology and hepatology, acid-related disorders, and viral hepatitis. He also concentrates on scientific publishing and science metrics systems. An accomplished author, Dr. Zerem has written over 200 peer-reviewed articles, reviews, and book chapters.

Currently, he is a member of the Board of Directors of the Eurasian Association of Gastroenterohepatology and of the International Association of Pancreatology (IAP) expert committee for adopting new guidelines for acute pancreatitis. He is also an active member of the editorial boards of several scientific journals, including the World Journal of Gastroenterology, World Journal of Clinical Cases, Biomolecules and Biomedicine, Balkan Medical Journal, Eurasian Journal of Hepatology and Gastroenterology, and Acta Medica Academica. Additionally, he serves as a reviewer for various prestigious scientific journals. Dr. Zerem was elected as a corresponding member of the Academy of Sciences and Arts of Bosnia and Herzegovina (ANUBiH) in 2012, and he became a full member in 2018.

Dr. Zerem's Perspective: The Interview

Professor Zerem, given your stated goal of achieving excellence in patient care and ensuring universal relevance through scientific validation, how do you manage the immediate demands of clinical practice alongside the long-term objectives of scientific research? Additionally, as a regular member of the ANUBiH and a dedicated medical professional, could you share the initiatives or projects you are currently involved in, and explain how they contribute to the advancement of science in BiH?

Throughout my entire career, and perhaps it’s not an exaggeration to say throughout my life, my primary goal, regardless of the associations I’ve been active in or the institutions I’ve belonged to, has always been to be a successful doctor dedicated to helping sick people. I have held several important roles, primarily in health, academic, and educational institutions and associations. Among these, I am especially proud of my full membership in the ANUBiH. Simultaneously, I have always prioritized my role as a doctor, working daily with patients. The satisfaction I derive from successfully helping people is incomparable to any other achievement in my career.

However, I firmly believe that mere love for the profession is not enough for success in medicine; it requires a solid foundation of knowledge and certain prerequisites. A doctor must possess comprehensive knowledge of medicine and exceptional expertise in a specific medical field. Moreover, it is vital to conscientiously apply and document this knowledge over an extended period. This process leads to new insights and information, which can then be published in scientific articles and shared with the scientific and professional community. Such dissemination gives clinical practice its scientific validation and universal relevance.

The Ranking of Scientists Based on Scientific Publications Assessment

Your work in scientometrics is crucial for assessing research quality and allocating resources. While scientometrics faces challenges in obtaining sufficiently precise data, scientific publications remain central to knowledge sharing, career advancement, and credibility, which in turn shape access to grants, leadership positions, and mentoring roles. However, current scientometrics often overlooks those other activities, focusing almost exclusively on publications. Citation-based rankings have limitations, especially for multi-authored works, and university rankings need more diverse criteria. In the biomedical field, databases such as Web of Science and PubMed are indispensable.

Though scientific journals play a crucial role in knowledge dissemination and research evaluation, objective assessment remains challenging. Impact factors and citations are significant metrics, but evaluating individual scientists is complex and often reliant on citation counts. Current metrics like the H-index and impact factor have limitations in reflecting individual contributions. Your proposed metric, considering both journal quality and author contributions, aims to bridge these evaluation gaps, enabling fairer comparisons among scientists and institutions.

Could you elaborate on the role of scientific publications within current scientometrics, and discuss their impact on academic evaluations, as well as the growing importance of scientometrics in the allocation of research funding and the assessment of scientific quality?

The primary output of scientific research is the information published in scientific journals. These journals are fundamental for spreading knowledge, and they also serve as key benchmarks for academic and scientific evaluation, securing funding for scientific research, and advancing careers. Academic communities strive to govern their staff’s progress using objective criteria and standards. This effort, coupled with limited funds and the drive to allocate resources to high-quality research, underscores the growing importance of assessing research quality and valuing knowledge. Yet, the challenge is the application of criteria that objectively evaluate scientific research. Such criteria should provide detailed qualitative and quantitative data, enabling academic communities to effectively track their members’ progress and funding agencies to make informed decisions.

How do you think scientometrics could be improved to encompass a broader range of scientific activities beyond publications?

Apart from assessing scientific publications, various other activities contribute to a scientist’s credibility. These include the number and quality of grants for scientific research projects, leadership roles in national or international academic societies, memberships on editorial boards of esteemed journals, mentorship of doctoral dissertations, and similar activities. While these undertakings are significant and bolster a scientist’s credibility, current scientometrics primarily focuses on publications, often overlooking these other critical aspects in evaluations for academic progression and grant competitions for scientific research funding. This oversight is due to the heterogeneity of these activities, each with their unique characteristics and requiring diverse evaluation parameters.

Given these considerations, I believe that in the near future, there will not be universal evaluation criteria for these academic activities, despite their undeniable importance to the academic and scientific community. Nevertheless, this should not deter academic communities and scientific research funding agencies from incorporating these activities into their valorization processes, alongside scientometrics parameters, if they deem them essential for specific evaluations.

What impact has the shift towards government and public sources in science funding had on the direction and nature of scientific research? Additionally, can you elaborate on the challenges of translating basic scientific research into practical applications, and how this affects funding and innovation?

Throughout the 20th and early 21st centuries, science funding has increasingly come from governments and public funds in affluent countries, as well as from pharmaceutical companies and other international corporations. The public funds and organizations that finance most scientific research are also its chief beneficiaries: the research they fund is expected to contribute to general social progress and to ensure the profit and sustainability of these entities.

Significant investments in science are predominantly channeled into fundamental research, leading to an expansion of the knowledge base. Yet, the outcomes of basic scientific research often do not align with the anticipated speed of their practical application in industry improvements, social advancement, and public health. This delay in applying scientific research results can erode confidence in science, negatively impact the funding of further research, and cause a relative deceleration in scientific innovation. This situation necessitates additional so-called ‘translational research’ to facilitate the transfer of knowledge from fundamental research to practical application. Translational research, requiring comparatively lower investments, is accessible not just to governments and public funds of wealthy countries, but also to less developed countries and corporations, in contrast to the exclusivity of fundamental research.

What are the main limitations of existing scientific evaluation systems?

Nearly all prominent scientometrics indices evaluating scientists' achievements focus primarily on the number of citations their articles receive. For instance, the well-known H-index, a system for evaluating an individual's scientific contribution, is determined by the lowest-ranked article whose citation count is at least equal to its rank number (e.g., a scientist with an H-index of 20 has at least 20 articles, each cited ≥20 times). The H-index is straightforward to calculate, is widely used as a gauge of scientific significance and contribution, and strongly shapes a scientist's profile.
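That definition translates directly into a few lines of code. A minimal sketch in plain Python, using a made-up citation list purely for illustration:

```python
def h_index(citations):
    """Largest h such that at least h articles are each cited >= h times."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this article still supports rank h
        else:
            break
    return h

# Hypothetical citation counts for one author's articles:
print(h_index([50, 40, 33, 21, 20, 20, 3]))  # -> 6
```

Note that the same citation list is used for every co-author of the same articles, which is precisely how the H-index grants each author an article's full citation count.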

However, this system has considerable limitations, as it relies solely on the number of citations an individual article receives. In evaluating scientific contributions, the H-index overlooks each author’s individual input in the assessed article and does not address the common issue of inflated author lists, where some authors may have minimal or no contribution (the H-index treats all authors equally, granting each the total citation count of the article).

Theoretically, a “scientist” could achieve an H-index above 20 without having authored a single significant paper. This system also inherently favors older articles (which have been available for citation longer) and can negatively affect the evaluation of emerging scientists’ work, impacting their academic progression and access to scientific research grants.

How do you think the evolution of scientometrics has influenced the broader perception and functioning of the global science system?

Academic communities seemed unprepared for the integration of scientometrics in assessing their members’ scientific and academic credibility. Unfortunately, it appears that little has changed to date. A few decades ago, the responsibilities of academic staff were considerably less demanding. University teachers primarily focused on teaching and other forms of student education, with some engaging in scientific research projects that lacked stringent outcome expectations or completion deadlines.

However, with Margaret Thatcher’s tenure as Prime Minister, new regulations for university teachers were introduced in Great Britain and subsequently across Europe. These included five-year renewable contracts with specific goals and conditions for renewal, based on scientometrics parameters. These reforms, initiated by Prime Minister Thatcher, were met with limited enthusiasm within the academic community. This sentiment was not exclusive to less ambitious university teachers; even prominent scientists and inventors of groundbreaking discoveries expressed reservations. Many in the scientific community argued that undue pressures and time constraints on research could hinder scientific innovation and lead to a diminished focus on the educational and teaching aspects of university work.

Despite significant opposition to these new rules for evaluating scientific achievements — including from highly respected scientists and Nobel Prize laureates — they have been adopted and embraced by most prestigious universities and scientific institutions. In smaller and less developed academic communities, resistance to the scientometrics valorization of science is often unjustified and typically serves as a pretext to obscure the subpar quality of work when measured by scientometrics standards.

What are the major challenges in developing objective criteria to assess scientific research, and how can they be addressed?

The existence of numerous scientometrics systems itself highlights their limitations and the absence of a perfect system that can precisely measure the scientific contributions of scientists and journals. The Impact Factor (IF) from the Web of Science (WoS) and the total citation count of articles in a journal are generally acknowledged as the most pertinent indicators of a journal’s significance. However, assessing a scientist’s scientific importance is far more complex than evaluating the significance of scientific journals. The value of a scientist’s work cannot be directly inferred from the prestige of the journals where their articles are published.

Several factors complicate the evaluation of a scientist’s significance and the merit of their scientific output: the varying number of articles published by different authors; the diverse types of articles published across journals indexed in different scientific databases; and the limitations of citation count as a measure of an article’s value, which include a substantial time lag and a bias towards older articles of similar quality. Additionally, the contribution of authors within a scientific article is often unequal. As a result, applying appropriate scientific criteria that can objectively evaluate scientific research and provide precise qualitative and quantitative data for an objective assessment of its value is extremely challenging.

Z-calculator

The Z-score calculator, a software tool within the novel Science Metric System (SMS), has been developed to streamline the computation of the Z-score. This score incorporates the journal’s Impact Factor (IF), the Total Number of Citations (TNC) of the journal, and the author’s contribution to a scientific article.

This user-friendly software is available as a web-based application for easy access. The calculator utilizes four parameters (number of authors, IF, TNC, and the number of citations of an individual article) and considers various authorship models to calculate the Z-score. It is designed to be adaptable and compatible with modern web browsers, with potential capabilities for linking to scientific databases to automate data retrieval.
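As an illustration of how such a score might combine those four parameters, here is a hypothetical sketch. The weights and the blending rule below are assumptions for demonstration only, not the published Z-score formula, and the function name `z_score_sketch` is invented:

```python
def z_score_sketch(article_citations, journal_if, journal_tnc,
                   n_authors, role):
    """Hypothetical per-author score: journal prestige (IF, TNC)
    supplements the article's own citations so that brand-new articles
    are not scored zero; first/corresponding authors keep full credit
    while other co-authors split theirs."""
    # Illustrative blend of article and journal metrics (NOT the 2017 formula):
    article_value = article_citations + journal_if + journal_tnc / 1000
    if role in ("first", "corresponding"):
        share = 1.0              # full credit for the key authors
    else:
        share = 1.0 / n_authors  # remaining co-authors divide the credit
    return article_value * share

# A new (still uncited) article in a journal with IF 10.0 and 5000 total
# citations, scored for its first author among 8 authors:
print(z_score_sketch(0, 10.0, 5000, 8, "first"))  # -> 15.0
```

The point of the sketch is structural: journal-level metrics let a days-old article receive a nonzero score, and the per-author share discourages inflated author lists.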

Considering the limitations of current scientometrics systems, what future developments do you foresee in the field of scientometrics?

Despite its numerous shortcomings and facing opposition from parts of the academic community, scientometrics will continue to exist and evolve as a distinct field of study. It is indispensable to science, as the adage goes: “Science begins where measurement begins.” In its future development, scientometrics should aim to address and rectify the weaknesses evident in current Scientometric Measurement Systems (SMSs).

In my view, the most significant shortcomings of the current SMSs include the evaluation of an author’s contribution, the assessment of different types of articles, the valorization of newer publications, inconsistent criteria by journals in determining the order of authors in scientific articles, and the misuse of citations. A prime example of citation abuse, which poses a threat to the credibility of the entire SMSs, is the so-called “position paper”. In these papers, hundreds of authors often receive thousands of citations despite not meeting the minimum criteria for authorship. Overcoming these challenges necessitates the support of the entire academic and scientific community. This support should not be limited to highlighting the system’s flaws but should also extend to proposing enhancements for the betterment of scientometrics.

Considering the current limitations of scientometrics, what are your views on the future of this field and its capacity to evolve and more effectively address these limitations? Additionally, how do you believe your criteria will influence the evaluation and ranking of scientists in developing countries, where there is a complex interplay between politics and academia?

As the Director of Scientific Research at my institution, I am annually involved in establishing the ranking criteria for evaluating our employees’ scientific contributions. Drawing from my extensive experience, I propose in this article criteria that can objectively assess the scientific impact of scientists and institutions.

The new criteria, known as the Z-score, which I published in 2017 in the Journal of Biomedical Informatics (and subsequently, in 2018, a calculator for these criteria developed with my colleague Kunosić), are designed to more equitably assess an author’s contribution to an article and the valorization of newer articles. These criteria emphasize the unequivocal significance of the first and corresponding authors relative to other contributors, the prestige of the journal where the article is published, and the citation count of both the article and the journal. The criteria underscore the importance of the journals in which articles are published, particularly because new articles evaluated within the same year of publication (some just days before evaluation) necessitate a valuation method beyond just citation count. Ultimately, all calculated values are aggregated to determine an author’s total scientific contribution.

To effectively implement and calculate the Z-score, we have developed suitable computer software, the Z-score calculator. This calculator includes all parameters outlined in the proposed criteria, is compatible with all browsers, and is capable of automatically collecting data once linked to scientific databases.

How does the Z-score enhance traditional metrics like the Impact Factor and H-index, and how does it accurately evaluate individual contributions in multi-author publications?

The Z-score criteria, which I suggest, acknowledge the first and corresponding authors as integral contributors to the scientific article. This approach aims to deter the addition of ‘false’ authors to the author list, as their inclusion would dilute the contributions of the actual authors. Implementing these criteria would substantially mitigate the issue of including authors who do not meet the authorship criteria. Moreover, it would almost entirely resolve the problem of ‘position papers’, where the total citation count is divided among potentially hundreds of authors.
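The dilution effect behind this design can be seen with simple arithmetic (the numbers below are hypothetical):

```python
# A 'position paper' with a huge author list: under a credit-splitting
# rule, each co-author receives only a sliver of the citation credit,
# whereas the H-index would grant all 500 authors the full 1000 citations.
total_citations = 1000
n_authors = 500
per_author_credit = total_citations / n_authors
print(per_author_credit)  # -> 2.0
```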

The Z-score criteria significantly objectify the assessment of scientific contributions and the ranking of scientists. They can be practically applied as a relevant measure for academic advancement and in applications for scientific grants. Notably, the Z-score uniquely objectifies the value of new articles and authors, which is crucial for valorizing recent scientific production and grant applications. Thus, I believe that SMSs should consider the evaluation of newer articles, which may not have accrued a citation count that reflects their real scientific value due to limited time exposure to the scientific community. It’s also vital to highlight that these criteria, encompassing aspects beyond citations, are not intended to oppose existing scientometrics, such as the H-index, but rather to complement and enhance them.

What challenges do you foresee in adopting the Z-score, how can these be addressed, and what changes do you anticipate in scientometrics and academic evaluation with metrics like the Z-score? Additionally, what feedback have you received from the academic community regarding the Z-score calculator, and how has this influenced its development?

The Z-score criteria were published in the highly esteemed Journal of Biomedical Informatics, which at the time was edited by one of the world's leading biomedical informatics scientists, Edward H. Shortliffe. An invited editorial on the article was penned by Professor David Bates, a renowned expert in biomedical informatics. The article garnered attention from several respected scientists who shared their views through letters to the editor and citations in their own works. Some researchers also inquired about specific details of the article directly via e-mail. Although it has been just over six years since its publication, which is relatively short for the full affirmation of such ideas, its significance is gradually being recognized. For context, the H-index was little noticed in its first five years but has since gained global importance. However, some concepts in the article, particularly the reduced weighting of co-authorship in multi-authored articles, have not been well received by a significant portion of the academic community.

Despite these challenges, I firmly believe that the Z-score criteria are the SMS best suited to evaluating new articles and scientific outputs, which is crucial when applying for scientific grants and assessing the value of recent articles and emerging scientists. Yet, when it comes to evaluating the long-term scientific contributions of a scientist, the Z-score has its limitations. This is due to several reasons: firstly, for long-published articles, the total citation count is more significant than the journal's IF at the time of publication; secondly, the current IF may vary greatly from the IF at the time of publication; thirdly, finding the IF for the publication year of older articles can be challenging, especially if those journals are no longer in the scientific databases used for calculation. All these factors complicate the calculation of the Z-score for older articles. Consequently, I am currently working on revising the Z-score criteria to simplify the calculation process and make it more user-friendly.

In Bosnia and Herzegovina, universities prioritize education over research. Overall, investments in scientific research require internationally recognized scientometrics criteria for effective evaluation. Current metrics often overlook these contributions, and the lack of universal criteria for diverse activities affects fairness in evaluations. Despite limitations, university ranking systems are crucial for assessing scientific and educational quality.

How do university ranking systems impact global perceptions and the research-teaching balance in universities, especially in the Balkans, and what specific challenges do universities in Bosnia and Herzegovina face in enhancing their rankings and scientific production?

The competition for prestigious positions in global university rankings is intensifying, increasingly reflecting a battle for status and various forms of dominance among the world's most developed countries. While these rankings consider diverse parameters, for most universities (with some exceptions for the top 100), the number of publications and their citation impact are crucial for ranking. The world's most prestigious university rankings are published semi-annually or annually. In the academic communities and media of many regional countries, these results are highly valued, often followed actively and sometimes too emotionally, contrary to the expected rationality of academic members.

Some regional universities have recently made noteworthy advancements in these international rankings. Academic communities in these countries continually assess and seek opportunities to improve their standings on these lists, with the media actively analyzing and comparing their universities' performances against global and regional counterparts. However, the situation in Bosnia and Herzegovina presents a paradox. Despite a long-standing, publicly established belief in our academic superiority, our universities are either unranked internationally or positioned significantly lower than similar institutions in the region.

“Scientometrics and academia”

In your recent editorial in Biomolecules and Biomedicine, you explored the profound societal impact of science, the crucial role of universities in fostering scientific growth and educating upcoming scientists, as well as the hurdles in appraising scientific and educational activities. While publications play a central role in academic assessment and funding, current evaluation metrics often disproportionately emphasize them, overshadowing other facets. Criticisms include an overdependence on citation counts and an inability to fully represent academic credibility. Emerging metrics are being developed to remedy these shortcomings by adapting criteria for various academic disciplines. The article underscores the necessity of internationally recognized evaluation standards, particularly in smaller academic communities.

How do you perceive the evolving role of scientific research in shaping society, and what future developments do you foresee?

It can be stated with certainty that scientific achievements and societal development are typically directly proportional. Consequently, substantial investments in science are often channeled into fundamental scientific research, leading to a significant expansion of our knowledge base. This type of research holds intrinsic value and social importance in generating new information that enhances human science overall, proving beneficial even when it doesn’t yield immediate, tangible outcomes in applied activities. It is also reasonable to assume that all basic scientific research carries a potential for producing practical results, though such outcomes can be unpredictable and distant in terms of timing and application context.

However, the results derived from basic research often do not align with the anticipated pace of their practical application in areas like industry, social advancement, and public health. This delay in applying scientific findings poses risks of eroding trust in science, negatively impacting funding for scientific research, and causing a relative deceleration in scientific innovation. Therefore, it is often necessary to conduct additional, so-called ‘translational research,’ which facilitates the transfer of knowledge from fundamental research to practical application.

Could you provide insights on how higher education institutions can strike an effective balance between their roles in research and education to maximize contributions? Also, what specific challenges exist in objectively assessing educational activities within academia, and how might these be addressed?

Answering that question is challenging. In a context where the traditional education system has transformed into one that is more open, flexible, with emphasized decentralization and strengthened autonomy, where management is driven by goals and outcomes promoting equality and a common value base, the position of higher education becomes particularly complex.

Higher education institutions, being resource-rich and conducive to creating science, find scientific research an integral part of their activities. Without this aspect, higher education risks becoming unproductive and unable to fulfill its purpose. Concurrently, they shoulder another crucial responsibility: educating new generations of scientists and experts who will further the development and enhancement of science and society. While both activities hold equal importance, scientific research, manifested through innovations, patents, and publications, is more quantifiably assessable compared to educational and other activities within higher education. This leads to its predominance as a measure in evaluating the quality of higher education institutions.

Critics of this approach to university evaluation argue that it neglects teaching quality, focusing solely on scientific output, hence not fully representing all facets of university quality, particularly those reflecting program modernity and the quality of teaching and faculty. However, it is widely acknowledged that these discrepancies predominantly affect lower-ranked universities, where a relatively small number of publications might lead to statistical “artifacts.” In contrast, publishing a significant volume of scientific work, a trait of high-ranked universities, generally necessitates maintaining quality across other operational segments as well.

Culminating our insightful interview, we at Biomolecules and Biomedicine express our gratitude to Dr. Zerem for his generous sharing of knowledge and experiences. His active role on our Editorial Board and his distinguished presence in Bosnia and Herzegovina’s academic community greatly enrich our journal and the scientific field at large.

References:

Zerem E. The ranking of scientists based on scientific publications assessment. J Biomed Inform. 2017;75:107-9. [cited 2024 Jan. 22]. Available from: https://doi.org/10.1016/j.jbi.2017.10.007

Zerem E, Vranić S, Hanjalić K, Milošević DB. Scientometrics and academia. Biomol Biomed [Internet]. 2023 Dec. 15 [cited 2024 Jan. 22]. Available from: https://www.bjbms.org/ojs/index.php/bjbms/article/view/10173

Editor: Ermina Vukalic
