How to tackle plagiarism in a manuscript

All scientists working on an experimental research study must refer to the discoveries and findings of previous studies. In current practice, it is imperative to cite and quote the findings of previous studies that are used to develop new hypotheses and the experimental design of a manuscript. Authors must follow the publication ethics rules set forth by the scientific community.

The scope of plagiarism in a manuscript

According to the journal publisher Elsevier, plagiarism may occur in different forms during manuscript development: i) an author may fail to cite another’s paper and claim the work as his or her own, and ii) an author may copy another person’s work outright, in sentences or whole paragraphs, without citing the original authors. All such behavior violates the ethical standards of publication and constitutes academic misconduct.

Detection of plagiarism

Scientific publishers use sophisticated software to detect plagiarism in manuscripts. If the plagiarism is substantive, that is, more than 25% of the paper is flagged as a complete replica of other published papers, the manuscript is considered unpublishable and unethical. Such a manuscript is rejected immediately by the publisher, and punitive action may be taken against the author by informing the academic institution with which the research is affiliated. COPE has published a clear flowchart for handling such dubious manuscripts.

Ways to tackle plagiarism: Instructions to authors

Authors must comply with the reference citation guidelines of the target journal before submitting the manuscript for review. They must also cite theories and ideas presented in conferences and discussions, in addition to standard manuscripts. The bibliographic information of all cited studies must be presented separately after the main text of the manuscript, and every study cited in the manuscript must appear in this reference list.

If the author has copied more than six consecutive words verbatim from another paper, such content must be placed within quotation marks. If the author wishes to reproduce copyrighted graphics or text from another paper, it is mandatory to obtain the requisite permission or no-objection certificate from the publisher and authors, respectively. One of the best ways to catch plagiarism issues before submitting to a journal is to upload the manuscript to plagiarism detection software, such as iThenticate or Crossref Similarity Check (formerly CrossCheck).

Citing and quoting previous works to avoid plagiarism issues

When an author reproduces the wording of a previous study verbatim, the simplest way to avoid plagiarism issues is to enclose those words in quotation marks. Readers of the manuscript must be able to distinguish original content from material drawn from previous studies. Nevertheless, given the pressure on researchers to “publish or perish,” issues such as quality and plagiarism are often sidelined.

Authors violate copyright law when they make minor changes to their own published work and republish it in multiple journals; this kind of manipulation is known as self-plagiarism. Moreover, plagiarism is not restricted to the verbatim copying of text from previously published manuscripts; it also includes the uncredited copying of important illustrations, such as figures and tables. Even an author’s own graphics should never be reused from a previous paper without correct citation.

Standards of good laboratory practice and scientific communications

Research studies are strictly monitored for innovative study design and quality by scientific and academic institutions all over the world. These institutions of research and higher education comply with the standards laid down by Good Scientific Practice (GSP) and Good Laboratory Practice (GLP).

Academic institutions can maintain the integrity of budding researchers by monitoring research grants and experimental study designs to ascertain whether the codes of GSP have been strictly followed. Research on human health demands particular rigor, as its results feed directly into evidence-based medicine.

To tackle the seething problem of academic irregularities, including plagiarism and non-adherence to publication ethics, a federal agency, the Office of Research Integrity (ORI), was set up in the USA in 1992. Among the various steps the agency has taken against academic misconduct, investigating plagiarism in scientific manuscripts has been the top priority. A primary objective of this body was to educate science editors in monitoring and restoring academic integrity amid cases of scientific misconduct.

In the UK, the Committee on Publication Ethics (COPE) was established in 1997. This body clearly defined the principles of fairness in manuscript evaluation and publication, and it also suggested ways to tackle academic plagiarism and misconduct through a set of flowcharts.

The mantra of academic publishing: Publish or Perish

Academic publishing is a dynamic field with high quality standards. The adage “Publish or Perish” holds true for most researchers in academia. The painful reality of academic publishing is that perishing is more probable than publishing, given the extremely high and rigid standards of top journals.

Consider a case study from the American Psychological Association (APA). In 2013, APA’s peer-reviewed journals received about 12,000 manuscript submissions, of which more than 76% were rejected by journal editors. In the top peer-reviewed journals, the rejection rate exceeded 90%. Given these high rejection rates, researchers are constantly stressed despite toiling for years on research studies and grants. Recently, a survey was conducted at the University of California to understand how researchers cope with the stress of academic publishing.

In this survey, academic inquiries were emailed to numerous researchers, of whom only 130 responded. The survey asked researchers about their academic position, their personality traits, the manuscripts currently under review (and the journals to which they had been submitted), and how they were coping with the stress of publishing in peer-reviewed journals.

Depending on the nature of the individual and the field of study, researchers had different coping mechanisms. Notably, most people find periods of uncertainty stressful: for example, waiting on the outcome of a job interview, a diagnostic test, a competitive examination, or a college admissions decision.

The objective of this survey was to understand whether researchers were coping productively with the stress of academic publishing. Most researchers reported anxiety-related stress stemming from the uncertainty of manuscript publication in peer-reviewed journals.

The long waiting period before a publication decision caused a great deal of stress. Many researchers suffered from anxiety and neuroticism and developed a pessimistic attitude. The academic history of researchers played a pivotal role in shaping their attitude: anxiety and uncertainty were lower among researchers who had successfully published at least one manuscript in a peer-reviewed journal.

Academics who had submitted a manuscript to a peer-reviewed journal for the first time experienced the most stress and anxiety during the waiting period. Their situation resembled that of recent graduates anxiously awaiting the outcome of interviews at job fairs. However, some new scholars were very excited after submitting their first manuscript to a peer-reviewed journal.

Some young scholars believed their manuscript would be accepted by noted peer-reviewed journals because their experimental study presented path-breaking results. These scholars had spent years completing their research work, and their excitement was short-lived when journal editors rejected their manuscripts.

The results of the survey were as follows: negative feelings of neuroticism, anxiety, and stress were higher among academics who had published few papers. Similar feelings were observed among academics who had only a few manuscripts currently under review. These academics faced longer waiting periods and were highly anxious about whether their manuscripts would be accepted; their uncertainty and anxiety were markedly higher, and they had conditioned themselves to expect the worst case of rejection.

Researchers who had worked in academia for a long time were more adept at coping with uncertainty, as they had been through the cycle of rejection, resubmission, and eventual publication many times. Academics who had successfully published many papers were not particularly worried about rejection by journal editors.

Graduate students had the highest level of uncertainty, as they were submitting manuscripts to peer-reviewed journals for the first time. Compared with postdoctoral and ad hoc faculty members, graduate students had high levels of stress and anxiety, and their coping skills were the poorest as they braced for the worst.

When is the waiting period hardest for academics?

The waiting period is most stressful and uncertain for graduate students, who have invested years of hard work in their experimental study design while pursuing a doctoral degree. Following the successful publication of their manuscript, they gain greater confidence in their line of work. Young academics are often encouraged by the adage: if you want something done right, do it yourself.

More anxiety and stress were reported by academics with the following traits:

  • Numerous authorship positions
  • Higher personal investment in the research study
  • Fewer co-authors on the research study

These academics checked journal publishers’ websites quite often after submitting their manuscripts. While recording the experiences of academics during the waiting period, the survey found that the stakes were heavily skewed, given the poor acceptance rates of top journals and publishers. Although researchers need many publications for a thriving academic career, not all published papers are treated as equal: only manuscripts published in top peer-reviewed journals from publishers such as Elsevier, Springer, Wiley, Taylor & Francis, and Nature are considered prestigious and relevant for career advancement.


How patent citations have revolutionized science and technology

Science and technology are intertwined fields in the modern world: science develops theoretical principles, while technology applies them. In the continuously evolving world of academia, researchers publish papers and file patents for their inventions. Many academic researchers also work as consultants in R&D laboratories; for example, they may be adjunct professors at universities and subject matter experts for industry. Notably, citation counts alone cannot measure innovation and disruption. Patent applications, by contrast, include detailed information about novel inventions, and by filing patents, scientists and consultants can avoid legal disputes over authenticity. Moreover, patent citations lay the foundation for new licensing opportunities, and patent examiners play a pivotal role in evaluating applications.

Patent citations are an essential component of scientific literature

Country: European and American patent offices have different standards for filing patent applications. Some patent examiners cite papers published in their own countries, while others cite papers published internationally. This introduces domestic bias into the process of patent citation.

Field: The number of patents filed also depends on the field of study. For example, in life science fields such as pharmacology, biochemistry, and genetics, large numbers of non-patent technical documents are currently filed, whereas engineering is a field in which large numbers of patent documents are filed.

Journal: It is important to note that there is little connection between patent citations and a journal’s citation impact. A patent application may cite information from papers published in average journals, while papers in noted journals of science and technology are cited because they break a technology barrier. Within the scientific literature, many factors govern patent citations; certain disciplines simply include more references.

In some countries, scientists prefer citing articles published in their own country, whereas in others, scientists prefer publishing in international journals. Nevertheless, citations alone cannot indicate the impact factor or quality of research. The data yield interesting insights when inventions are compared across patent offices: at the World Intellectual Property Organization (WIPO), most patents cited research publications from the UK; patent citations of US research publications remain stable, while citations of Chinese publications show a downward trend.

Conclusion

Patent citation is an integral component of innovation and research in academic publishing.

The disruption of academic publishing market

As the world moves from the print medium to the digital medium, printed journals have been losing relevance. Traditional academic publishers like Elsevier, Springer, and Wiley followed the business model of printed journals for decades. However, with the evolution of the computer and the internet, things changed rapidly: researchers began to bypass academic journals, and the stocks of print-focused academic publishers seemed doomed in the 1990s.

Elsevier, Springer, Wiley, and Nature evolved rapidly in response to the changing dynamics of the industry and have weathered the disruption of the research market. Elsevier’s annual revenue has since grown into the billions of dollars. As the academic publishing market shifts from print to digital, the business model remains highly profitable: in academic publishing, the consumer and the producer are both researchers, who need to publish their work in high-impact-factor journals where it receives maximum viewership. Elsevier is among the most prestigious publishers and has overcome the barriers posed by the changing dynamics of academic publishing.

Disruption of SCI publishing

Open access is one answer to the disruption of the academic publishing market. Given the aggressive, profit-driven approach of traditional publishers like Elsevier, Springer, and Nature, the open access model provides fertile ground for the dissemination of research. A researcher’s fate is no longer dominated by the prestige conferred by deans and committees. Currently, more than ten thousand open access journals are listed in the Directory of Open Access Journals (DOAJ). University authorities in the Netherlands, where Elsevier is headquartered, have pressed for open access over subscription journals and have threatened to boycott Elsevier journals if the publisher does not shift to the open access model. Education and knowledge are fundamental human rights, and the open access movement stands for the free dissemination of knowledge; under the current business model of SCI journals, high subscription fees prevent knowledge from reaching everyone.

Currently, equity investors are trying hard to sustain the commercial values of the traditional model. To earn tenure and promotion, researchers are forced to publish in the SCI journals of traditional publishers: Elsevier, Wiley, Springer, and Taylor & Francis. As the open access movement grows, the SCI model may lose its grip on research assessment. To balance the power struggle between these two models, the Fair Access to Science and Technology Research (FASTR) bill was drafted in the US Congress.

How English became the lingua franca of scientific publishing

English is the lingua franca of scientific publishing in the 21st century, with more than 75% of research studies published in international English-language journals (see the book “Does Science Need a Global Language?” by Scott Montgomery). However, the situation was quite different before the First World War: in the 19th century, roughly equal numbers of scientific studies were published in German, English, and French, and German remained the lingua franca of scientific publishing into the early 1900s. Today the situation has reversed, with a massive proliferation of English scientific journals and a steady decline of publications in German.

Purists may argue that Latin is the original language of science, and this is partially correct: Latin dominated the scientific literature from the medieval period until the 17th century. Galileo, for instance, published some of his major works in Latin, which were then translated into other European languages. However, Latin became just another language of scientific research with the advent of the Industrial Revolution in Western Europe.

Cut to the 20th century: German lost its dominance in the scientific literature as Germany conceded defeat in both World Wars. During World War I (1914 to 1918), scientific research from Germany and Austria was vociferously boycotted by scientists from Britain, France, and Belgium, and German and Austrian scientists were barred from publishing in Western European journals. The war thus divided scientific communication into two camps: the Central Powers (Germany and Austria) and Western Europe (publishing predominantly in English and French). The hostility toward German scientific journals persisted after the war; in the United States of America, all things German met widespread hostility once the country joined the Allies in 1917.

During this period, many international organizations of scientific publication were established, including the International Union of Pure and Applied Chemistry (IUPAC). These organizations functioned only in the Western European languages of English and French; German, the erstwhile dominant language of chemistry, was excluded entirely.

Following the entry of the United States into World War I in 1917, a strong anti-German wave swept the country. Although the USA had a sizeable German population, the language was banned in 23 states. With this ban, public speaking of German stopped, including on radio shows, and schools stopped teaching German to children younger than 10 years of age. German as a foreign language thus lost its sheen in the USA. The ban was lifted in 1923, but the damage was already done.

With the ban on German, the USA of the 1920s had a large population of English-speaking scientists with limited knowledge of foreign languages such as German and French. Meanwhile, scientific publishing in American English gained significance in the international community.

Many scientists fled Europe during World War I and migrated to the USA, and with the ban on foreign-language education, these European scientists adopted English. In 1902, only 293 scientists completed doctoral studies in the USA (source: National Science Foundation); by 1990, more than 30,000 students were graduating with PhDs in the USA each year. Today, well over a million US-based scientists are working, writing, and publishing in English, which has made English the indisputable lingua franca of scientific communication.

It is worth noting that native English speakers make up no more than 5% of the world’s population. This means that English is a second language for most researchers, including many living and working in the USA. A great deal of valuable research therefore fails to get published in English-language journals, as ESL (English as a Second Language) scientists struggle to elucidate their discoveries in English. This is especially true for ESL countries like China, Russia, and France. English-speaking scientists should be sympathetic toward ESL colleagues and collaborate with them to develop “English as the universal language of science.”


Challenges facing research scientists in academia

With the “March for Science” rally held on April 22, 2017 in Washington, DC, the Donald Trump administration must have felt pressure to concede to the demands of research scientists. The administration had received severe criticism for reversing climate change policies and reducing funding for academic research projects. The challenges facing research scientists in academia are summarized below:

1)     Reduction in government grants toward scientific research

Scientists need money to perform research. With the Trump administration mulling a further reduction in financial grants for scientific research, scientists would grapple with serious problems, as research projects are already struggling at various levels; indeed, research funding has been drying up for decades. Most path-breaking discoveries come from projects that last over a decade, whereas grants allotted by successive US governments last just three to four years.

In this scenario, scientists must seek external grants to cover laboratory costs, research assistant salaries, and experimental procedures; university funding covers only the salaries of the scientists working on projects. Sources of external funding are limited, and most researchers depend primarily on federal grants from the US government.

As funding tightens, the grant approval process grows stricter. In 2000, more than 30% of NIH research proposals received federal grants; today, the situation is grim, with only about 17% of NIH proposals funded.

All these cost-cutting measures have led to a dismal state of affairs: researchers shy away from unconventional subjects and stick to publishing short papers with a fast turnaround. Mediocre science has become the current state of academia.

2)     Conflict of interest from external sources

As federal grants become highly competitive and meager, scientists turn to industry and commercial establishments to fund their research. This ultimately leads to conflicts of interest, with reviewers questioning the authenticity of results: sponsoring industries (FMCG, pharma, food, etc.) pressure scientists to produce results that favor their commercial prospects.

3)     The study design of most experiments is biased, all thanks to poor incentives

Most research scientists are compelled to design experiments that produce “novel” results, ensuring publication in prestigious journals. Because “path-breaking discoveries” do not occur often, scientists introduce bias early in the experimental design to embellish results. Pressured to produce “significant results” for publication, and needing to protect their careers, many scientists massage the analysis of results rather than provide an honest assessment of their findings. For example, many biomedical researchers run extensive significance tests against multiple hypotheses and publish only the “statistically significant” results, which are easily obtained through this so-called “p-hacking.”

It is remarkable how poor incentives have jeopardized research integrity: more than 30 percent of so-called high-quality medical research papers have been found to contain exaggerated or wrong results. In monetary terms, this translates to a wastage of about $200 billion, that is, roughly 85% of the money spent on scientific research globally.

4)     Peer review process is faulty

Although most journals use peer review to improve manuscript quality and to prevent flawed studies from being published, the process seems to be losing its sheen. Because peer reviewers are not paid by journals for providing constructive feedback, they review out of professional obligation. Many systematic reviews have found peer review to be a faulty process that fails to ensure bad science is kept out: time and again, manuscripts with faulty results and plagiarized content have been published. Moreover, because the editor and peer reviewers typically know who the authors are while the authors do not know the reviewers, there can be bias toward researchers from certain institutions and countries.

5)     Scientific research is inaccessible to the public owing to high subscription prices of journals

Publishing a research study in a journal is not enough to disseminate science. Most journals are extremely costly, as leading companies like Elsevier acquire numerous journals and sustain the subscription model in their own interest. Most journal articles can be accessed only for a hefty fee; for example, a yearly subscription to the journal Cell costs around $279. If an educational institution subscribes to 2,000 Elsevier journals for a year, the cost soars to anywhere between $10,000 and $20,000. Most US universities pay for these journals so that their students can access them at will; however, PhD scholars in developing countries such as Iran must pay out of pocket, which can mean spending $1,000 a week to read a handful of new research papers.

It is a sad irony that the common man’s taxes fund research at universities and government labs, yet the common man must again pay a hefty sum to access that work in scientific journals. Consider that Elsevier’s annual revenue was pegged at around $3 billion in 2014.

Conclusion

Science is not doomed yet, and there are ways to fix these issues. The process needs more scrutiny and less bias, which can be achieved by reforming peer review and allocating federal grants more sensibly. With more federal grants processed at regular intervals, scientists would be happier to pursue unconventional subjects; the tendency to suppress non-significant results would diminish, improving transparency. The most frequent sources of bias in academic publishing and scientific research would thus be eliminated.

The impact of Open Access Publishing on scientific research

Scientists often rue the fact that research studies have limited viewership due to the high subscription charges of journals. As the world moves from print to digital, science policy makers have been advocating that science should be freely available to the common man.

Open Access Model of Scientific Publishing

Against this backdrop, the open access model of publishing has changed the dynamics of the industry. In the open access model, the author usually pays a fee to the publisher to make the article freely accessible to all through web portals.

Because the journals are digital, the costs of print publication disappear. So how much does an author really need to pay to publish in an open access journal? PLOS, the most noted open access publisher, incurs costs primarily for technology and labor; in PLOS One, an author usually pays $1,350 for publication. In another open access journal, PeerJ, authors pay a one-time fee of $299 and can then publish unlimited papers in that journal.

Highly selective open access journals from BioMed Central and PLOS charge authors $2,700 to $2,900. According to a recent survey by researchers at the University of Helsinki and the University of Michigan, open access publishing is a grey area, with journals charging anything between $8 and $3,900.

According to a source at Hindawi, an open access publisher in Cairo, Egypt, the cost of publishing a single article works out to just $290, with 22,000 articles successfully published in a single year. A marketing source at PeerJ, meanwhile, concedes that the cost of publishing an article in their journal is about a hundred dollars.

Most editors and reviewers working for open access publishers are volunteers who are not paid. The estimated cost of operation for the Open Library of Humanities, a non-profit organization that publishes seven peer-reviewed journals in the open access model, is approximately $320,000. There are some “free” open access journals, but their operating costs are borne by university grants and their staff are primarily volunteers; everything related to online publishing in an open access model still carries a cost.

Sustainability of Open Access Journals

Traditional publishers lobby for the subscription format, arguing that open access publishing sacrifices the quality of science at the altar of “free dissemination to the public.” Elsevier has more than 2,000 journals under the subscription or hybrid model; it earned revenue of $1.1 billion in 2010, with profit margins of about 36% in the same period.

Open access publishers seek only to cover their costs, with any surplus held as a reserve against unforeseen expenses. PLOS keeps some margin on its journals, but unlike subscription publishers, it is not bound to share profits with shareholders. The primary funding sources for open access publishers are university grants and the “article-processing” fees charged to authors.

In the subscription model, universities sign non-disclosure agreements before making bulk journal subscriptions. In the open access model, the author pays an appreciable amount to initiate publication and make the work freely viewable. The Open Library of Humanities sustains itself not only on grants from external foundations but also on fees paid by the libraries that use its work; the money from libraries acts as a form of endorsement of this novel process.

As of 2013, there were 8,847 Open Access journals listed in the Directory of Open Access Journals (DOAJ), a sharp rise from only about 5,000 in 2009. According to PLOS, the open access model is flourishing, with 12% of peer-reviewed articles in STEM disciplines being published in Open Access journals. The NIH has drafted a policy stating that the results of scientific studies must be freely available on the internet within one year of publication.

The lure of subscription model still exists due to “high-impact factor” of these journals

Although the real cost of publishing is low, and peer reviewers are not paid for their academic editing even at subscription journals, it is the “high impact factor” that attracts researchers to the subscription model. For example, the impact factor of the subscription journal Science is 34.661, whereas that of PLOS One, the most noted Open Access journal, is just 3.234.

With most universities not considering new scientists whose publications appear in journals with an impact factor below 5, the threat to their research careers discourages budding scientists from joining the Open Access movement. At this juncture, Open Access journals are favored mainly by seasoned scientists, who are propagating a shift in science policy.

Changes in the scientific publishing industry

Today, most subscription journals are drifting toward the hybrid model, an offshoot of open access publishing in which authors pay the subscription journal a large sum of money to make their article open access. For example, the subscription journal Cell launched its hybrid journal Cell Reports in 2014; authors are charged $5,000 to make their work freely accessible to all. With the Open Access movement, the role of traditional scientific publishers is being reduced to that of middlemen.

Conclusion

The impact of Open Access on scientific publishing can be quantified with the latest sales data: the subscription journal model was previously a 100-billion-dollar industry in terms of revenue. As of 2010, the Open Access publishing model had eaten into 3% of its market share and was worth 100 million dollars in revenue. This is driven by the drift from print to the internet (digital media): as of 2010, the print-to-digital ratio for scientific papers stood at 40:60.

Has SCI publishing really benefitted the masses from medical research?

The cat is out of the bag, according to a recent report presented by Dr. John Ioannidis on how grossly medical ethics have been compromised over the last century. Medical research results are manipulated to favor the sponsoring pharmaceutical companies, which raises the most important question: do the lives of common people not matter at all to government and state agencies?

Clinical Trials sponsored by pharma companies cause conflict of interest

In clinical trials sponsored by drug companies, the clinical outcome is NOT measured in terms of “survival v/s death”; emphasis is laid only on symptoms reported by subjects, such as “chest pain,” “fever,” “vomiting,” etc. While reporting improvements in patients’ conditions, these studies do not explain whether the administered drug actually caused the improvement. In other words, statistical analyses are not conducted to show whether the novel drug indeed produces a prognostic effect that is more than marginal.

All these findings were reported by Georgia Salanti, a biostatistician assisting Prof. John Ioannidis, who practices and teaches at the medical school of the University of Ioannina. How did drug companies manage to introduce their novel drugs with such successful clinical trial results? What was the secret of their magic formula?

Manipulations begin NOT just at statistical analyses but AT experimental study design

Even before data crunching and statistical analyses begin, drug companies carefully choose their hypothesis. For example, the experimental study is designed so that the novel drug is pitted against drugs already proven less effective in previous studies. It is the questions asked in the analyses, not the answers, that introduce bias, says Prof. Ioannidis. The moot point now is: can medical research studies really be trusted?

How has Prof. Ioannidis grappled with this topic throughout his career in medical research? He is a specialist in conducting meta-analyses of research studies, and his expertise in this kind of work has made him a global name in medical research.

Physicians providing misleading advice to patients thanks to these studies

Many of the results reported by biomedical scientists in their published work are fabricated to suit the needs of the sponsoring agencies. These studies provide misleading information to physicians, and most physicians are well aware of the drug lobby, quite possibly hand-in-glove with these commercial agencies.

So, there may be cases where a patient simply had ordinary chest pain but had to undergo angioplasty because the physician diagnosed myocardial infarction (heart attack). There may also be instances where a simple medication could have cured a seasonal flu, but the physician prescribed expensive antibiotics instead. According to the noted meta-researcher Prof. Ioannidis, almost 90% of the results published in medical journals are either misleading or simply amplified to suit the drug lobby.

Prof. Ioannidis, a noted medical researcher with expertise in meta-analyses

What are Prof. Ioannidis’s real credentials in the medical research community? His findings have been published and highly cited in the most noted medical journals, and he is a leading speaker at medical conferences all over the world. Nevertheless, medical ethics have been so rampantly tampered with in these studies that the results are mere embellishments, not innovations: the term “conflict of interest” has become a mere formality for entering medical journals, when the fact is that most studies do have a conflict of interest because they are sponsored by pharma companies, which are commercial establishments.

Prof. Ioannidis first came across rampant malpractice in research studies as early as the 1990s, while working as a young medical researcher at the prestigious Harvard Medical School, USA. In that era, studies focusing on rare diseases had limited data from previous studies, so most medical researchers preferred rules of thumb to statistical analyses. However, researchers investigating common diseases, such as cancer, diabetes, and heart disease, followed the same principle. The “hard data” illustrating the probability of “survival v/s death” should actually govern the medical diagnosis of patients; however, this data was NOT reported in most studies.

The novel arena of “evidence-based research” looked promising to young researchers in the 1990s, and Prof. Ioannidis joined the fray, working at the following prestigious medical institutions: Tufts University, Johns Hopkins University, and the National Institutes of Health. Although he was a math genius at school in Greece, Prof. Ioannidis decided to emulate his illustrious parents, who were themselves renowned medical researchers. “Contradictory results” are not an uncommon phenomenon in medical research. For example, recent studies have shown that mammograms, colonoscopies, and PSA tests are not really useful in detecting cancer, unlike studies of a previous era that reported otherwise. Furthermore, the efficacy of antidepressant drugs like Prozac has been questioned in recent research, as it was found to be no greater than that of a placebo. In a previous era, most doctors recommended constantly replenishing the body with fluids during intense workouts; the current generation of medical researchers is questioning the health outcomes of this advice.

Prof. Ioannidis is today spellbound at how peer-reviewed studies employing “randomized clinical trials” produce absolutely antagonistic results: a case in point is whether extensive use of cell phones causes brain cancer. Thus the “randomized clinical trial,” previously considered the gold standard of medical research, is today being questioned for its ability to produce reproducible results in independent research studies.

Plausible causes for studies with antagonistic results on the same topic

So, how do so many studies on the same topic or condition come up with conflicting results? The answer lies in the errors introduced by researchers at various levels: i) the questions evaluated by researchers examining the subjects; ii) the study design used to address an objective; iii) the inclusion criteria laid down for the subjects; iv) the various medical parameters examined during the study; v) the statistical tests used for data analysis; vi) the results reported in these studies; and finally vii) the publication of these studies in medical journals of various impact factors.

The extreme pressure of noted medical journals: only “novel results” are published

What makes medical researchers compromise on their ethics? It is the extreme pressure to receive funding for their work, so the data is easily manipulated to suit the vested interests of the funding agencies. Manipulation of results may be either deliberate or inadvertent. Why is the pressure to manipulate results so extreme? Because it is NOT enough simply to publish medical research in journals: the impact factor of the journal decides the prospects of the researcher. Most noted medical journals have a rejection rate of more than 90%, so only “novel studies with innovative results” make the cut.

Though Prof. Ioannidis had to carry out his research in the form of meta-analyses for many years, he persisted, and his effort finally bore fruit: the Open Access journal PLOS Medicine, which publishes all methodologically sound medical papers, regardless of whether the results are “innovative.”

Final remark: SCI journals have failed medical research completely in their quest to publish “innovative results” rather than “true results.”

According to the model put forth by Prof. Ioannidis, medical research studies are grossly flawed: the rates of wrongness his model predicted closely matched the rates at which the so-called “novel findings” were subsequently proved wrong. Consider his astounding statistical report: the most common study design is the non-randomized clinical trial, and 80% of these trials are ultimately proved wrong in their results. Furthermore, the randomized trials that serve as gold standards prove wrong in as many as 25% of instances. Strange but true, even the highest-quality, platinum-standard studies involving “large randomized trials” have a 10% chance of being wrong.
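To see what these error rates imply in aggregate, here is a minimal sketch. Only the per-type error rates (80%, 25%, 10%) come from the report above; the mix of study types is an invented assumption for illustration.

```python
# Per-type error rates quoted in the text.
error_rates = {
    "non-randomized trial": 0.80,    # ~80% later proved wrong
    "randomized trial": 0.25,        # wrong in as many as ~25% of instances
    "large randomized trial": 0.10,  # ~10% chance of being wrong
}

# Hypothetical mix of 100 published studies -- an assumption, not source data.
study_counts = {
    "non-randomized trial": 60,
    "randomized trial": 30,
    "large randomized trial": 10,
}

total = sum(study_counts.values())
expected_wrong = sum(error_rates[k] * n for k, n in study_counts.items())
print(f"Expected wrong results: about {expected_wrong:.1f} of {total} studies")
```

Under this assumed mix, more than half of the published findings would be expected to be wrong, which is the thrust of Ioannidis's argument.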

Altmetrics- an important tool for measuring research impact

With the social media wave gripping scientific publishing, the impact a research article has on future studies is today being measured by an innovative tool: altmetrics. The conventional tools for assessing the impact of scholarly publications, such as the journal impact factor, the peer review process, and the h-index, are now considered redundant as the world of academic publishing undergoes a metamorphosis.

With most scientists favoring online channels for publication, academic social networks have gained significance. Mendeley is one such academic social network cum reference manager; it has become a repository of 40 million research articles, thereby exceeding PubMed, the US government’s initiative for biomedical articles. With this novel approach, previously uncited articles are gaining effective visibility and being shared by collaborators all across the world.

Definition and scope of altmetrics

In the new-age world of academic social networks, altmetrics is the tool that defines the impact of a research article across various online channels. On many collaborative platforms, scientists now share “raw datasets” and “experimental study designs” before submitting manuscripts to journals. In recent times, we have also seen a number of “semantic publication units,” which contain just a passage of the citable article rather than the entire article. Altmetrics constitute the impact created by all these composite traces across online channels.

Impact of altmetrics on peer review

Previously, peer review was a slow process that depended on overburdened researchers from advanced countries. Today, we can see the impact of a research article simply by counting the shares, reads, and bookmarks it receives in an academic social network or repository. This means the peer review process can be completed within just one week on a crowd-sourced platform. Many Open Access journals, such as PLOS, PeerJ, and BMJ Open, are now considering this innovative approach to accelerate the peer review process.

A comparison of altmetrics with conventional tools

Altmetrics measure the impact created by the article itself. The Journal Impact Factor, in contrast, only indicates the journal’s average citations per article, thereby restricting its measure to the reference frame of a single journal. Altmetrics, by comparison, summarize the impact created by the article across various online platforms: academic and non-academic channels, uncited articles, and articles published without peer review. Although traditional researchers argue that altmetrics cannot reflect the quality of an article in terms of novelty, the JIF is itself a tool that can be manipulated quite extensively.

What does altmetrics truly reflect in various categories

1) The attention received by the article on online channels: Complex in-built algorithmic tools determine the reach, shares, and popularity of the article. For example, the metric tool will show you its shares or mentions on news websites, Twitter, etc. through the “impression” tab. Pageviews and downloads are metrics that help you understand whether the article is well received.

2) Dissemination of an article in terms of quantitative measure: These tools will let a researcher know if the article is being shared or discussed in a community of researchers or in the public sphere. For example, these tools will let you know mentions of the article on news websites and authoritative blogs.

3) The impact and influence created by the article: Altmetric tools gather data about the article to gauge its impact. With qualitative analysis of this data, one can understand the following:

· General comments of various researchers on the article, constituting constructive feedback.

· The various journals, magazines, and academic networks where the article is being cited in different parts of the world.

· How many people have read the article on various online channels.

· Whether or not the article is being reused in other research publications.
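The three categories above can be combined into a single attention figure by weighting each kind of online event. The event names and weights below are assumptions for illustration; real altmetrics services use their own proprietary weightings.

```python
from collections import Counter

# Assumed weights: a news mention counts for more than a tweet or a pageview.
WEIGHTS = {"news": 8, "blog": 5, "twitter": 1, "bookmark": 1, "pageview": 0.1}

def attention_score(events: Counter) -> float:
    """Weighted sum of online-attention events for a single article."""
    return sum(WEIGHTS.get(kind, 0) * n for kind, n in events.items())

# Hypothetical article: 2 news mentions, 3 blog posts, 40 tweets, 500 pageviews.
events = Counter({"news": 2, "blog": 3, "twitter": 40, "pageview": 500})
print(attention_score(events))  # higher score = more online attention
```

Unlike the JIF, this score is computed per article and updates as soon as new shares or mentions arrive, which is what makes altmetrics faster than citation-based measures.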

In summary, altmetrics enable qualitative data analysis of research publications. They are also faster than conventional citation-based metrics, which matters given the perennial shift of researchers from print media to online channels.