
Thread: [Article] Integrity under attack: The state of scholarly publishing

  1. #1

    [Article] Integrity under attack: The state of scholarly publishing

    In the past, there have been repeated discussions among members of TWC about the quality of the scholarly peer-reviewing process, mostly in the highly politicized context of research on global climate change and the IPCC reports.

    I have always defended peer review as the best option for ensuring the quality of scholarly publishing. However, I have always been aware that it is not free of problems. The following recent (2009) article from SIAM News, the newsjournal of the Society for Industrial and Applied Mathematics, Vol. 42 (10), lists several instances of abuse or misconduct:


    Integrity Under Attack:
    The State of Scholarly Publishing


    By Douglas N. Arnold

    Scientific journals are surely important. They provide the most effective means for disseminating and archiving scientific results, and so are a key part of an enterprise on which our health, security, and prosperity ultimately depend. Publications are used by universities, funding agencies, and others as a primary measure of research productivity and impact. They play a decisive role in hiring, promotion, and salary decisions, and in the ranking of departments, institutions, even nations. With big rewards tied to publication, it is not surprising that some people engage in unethical behavior, abuse, and downright fraud. Still, when I started to look at the issues more closely, I was appalled by what I found. In this column, I give a few troubling examples of misconduct by authors and by journals in applied mathematics. One conclusion I draw is that common bibliometrics—such as the impact factor for journals and citation counts for authors—are easily manipulated not only in theory, but also in practice, and that their use in ranking and judging should be curtailed.

    SIAM places great value on scholarly publishing, of course, and we are taking strong actions to ensure the integrity of our own publications and to protect our authors from theft of their work. But we are still struggling to decide just what actions we should take. So I invite the thoughts of members of the SIAM community. If you have witnessed troubling incidents in journal publication, let me know. Do you think such incidents are on the rise? Should SIAM be doing more? Should we look beyond our own publications and authors?

    Author misconduct—most obviously verbatim plagiarism, but also more subtle appropriation of ideas and duplicate publication—has always been with us. At SIAM, however, our impression is that the problem is becoming far more common. Perhaps even more disturbing is journal misconduct, carried out by publishers and editors, often with an evident profit motive. One example is a sloppy or sham peer review process designed to produce the impression of a serious scholarly journal without the substance. Another is the deliberate manipulation of citation statistics in order to raise the impact factor or other journal bibliometrics.

    A recent case involving SIAM brings in both author and journal misconduct. A paper published in a SIAM journal in 2008 was plagiarized essentially verbatim from a preprint version posted by the authors on the web. A copied version of the paper appeared in the International Journal of Statistics and Systems in the same year with different title and authors. SIAM's publisher, vice president for publications, executive director, and I undertook a full investigation, which required nearly six months. The case got messier and more disturbing week by week. I decided that our final report on it should be made fully public; it is available on the web, where you can read the details.[1]

    Meanwhile, here are some of the sad conclusions. Based on the papers that we reviewed, we determined that the suspect authors had committed plagiarism in this and various other cases. At least four articles published under their names in four different journals are essentially verbatim copies of the articles of other authors, and we have reason to believe that there are other cases as well. The journal publisher, Research India Publications, publishes nearly 50 journals, many related to applied mathematics, but did not respond to our inquiries about the plagiarized article. We contacted the editor-in-chief listed on the journal web page, but he himself has been unable to contact the journal! After learning about this incident from us, he submitted his resignation to the journal but has received no response from the publisher; his name, along with those of numerous other distinguished mathematicians, remains on the journal website.

    Rumors of editor and journal misconduct have dominated the highly publicized case of the applied math journal Chaos, Solitons and Fractals (CSF), published by Elsevier. As reported in a 2008 article in Nature,[2] “Five of the 36 papers in the December issue of Chaos, Solitons and Fractals alone were written by its editor-in-chief, Mohamed El Naschie. And the year to date has seen nearly 60 papers written by him appear in the journal.” In fact, of the 400 papers by El Naschie indexed in Web of Science, 307 were published in CSF while he was editor-in-chief. This extremely high rate of self-publication by the editor-in-chief led to charges that normal standards of peer-review were not upheld at CSF; it has also had a large effect on the journal’s impact factor. (Thomson Reuters calculates the impact factor of a journal in a given year as C/A, where A is the number of articles published in the journal in the preceding two years, and C is the number of citations to those articles from articles indexed in the Thomson Reuters database and published in the given year.) El Naschie’s papers in CSF make 4992 citations, about 2000 of which are to papers published in CSF, largely his own. In 2007, of the 65 journals in the Thomson Reuters category “Mathematics, Interdisciplinary Applications,” CSF was ranked number 2.
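    To make the C/A formula concrete, here is a minimal sketch in Python (an illustration added here, not part of the quoted article; the numbers are hypothetical):

    def impact_factor(citations, articles):
        # Impact factor as defined above: C / A, where A is the number of
        # articles the journal published in the preceding two years and C is
        # the number of citations those articles received in the given year.
        return citations / articles

    # A hypothetical journal that published 200 articles in 2006-2007 and drew
    # 500 citations to them in 2008 gets a 2008 impact factor of 2.5:
    print(impact_factor(citations=500, articles=200))  # 2.5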

    Another journal whose high impact factor raises eyebrows is the International Journal of Nonlinear Science and Numerical Simulation (IJNSNS), founded in 2000 and published by Freund Publishing House. For the past three years, IJNSNS has had the highest impact factor in the category “Mathematics, Applied.” There are a variety of connections between IJNSNS and CSF. For example, Ji-Huan He, the founder and editor-in-chief of IJNSNS, is an editor of CSF, and El Naschie is one of the two co-editors of IJNSNS; both publish copiously, not only in their own journals but also in each other's, and they cite each other frequently.

    Let me describe another element that contributes to IJNSNS's high impact factor. The Institute of Physics (IOP) publishes Journal of Physics: Conference Series (JPCS). Conference organizers pay to have proceedings of their conferences published in JPCS, and, in the words of IOP, “JPCS asks Conference Organisers to handle the peer review of all papers.” Neither the brochure nor the website for JPCS lists an editorial board, nor does either describe any process for judging the quality of the conferences. Nonetheless, Thomson Reuters counts citations from JPCS in calculating impact factors. One of the 49 volumes of JPCS in 2008 was the proceedings of a conference organized by IJNSNS editor-in-chief He at his home campus, Shanghai Donghua University. This one volume contained 221 papers, with 366 references to papers in IJNSNS and 353 references to He. To give you an idea of the effect of this, had IJNSNS not received a single citation in 2008 beyond the ones in this conference proceedings, it would still have been assigned a larger impact factor than any SIAM journal except for SIAM Review.

    Another example of journal misconduct was revealed with an element of comedy. In “‘CRAP’ paper accepted for publication,” published online in June in Science News, senior editor Janet Raloff [3] described an experiment in which Cornell graduate student Philip Davis and a friend used a computer program, SCIgen, to generate a random document; the grammar and vocabulary were those of a computer science research paper, but the document was completely free of meaningful content. (The paper opens, “Compact symmetries and compilers have garnered tremendous interest from both futurists and biologists in the last several years. The flaw of this type of solution, however, is that DHTs can be made empathic, large-scale, and extensible.” Four pages later, it concludes, “We expect to see many futurists move to studying TriflingThamyn in the very near future.” Indeed!) The paper was submitted to The Open Information Science Journal (TOISCIJ), published by Bentham Science, a publisher of more than 200 open-access scientific journals (many of which, according to the publisher’s website, have high impact factors). Although the paper was submitted under pseudonyms and with the give-away affiliation Center for Research in Applied Phrenology, or CRAP, Davis was notified four months later that the “submitted article has been accepted for publication after peer-reviewing process in TOISCIJ.” Following the open-access model, the publisher told the authors that the paper would be published as soon as they sent a check for $800. (They declined to do so.)

    The cases I have recounted are appalling, but clear-cut. Perhaps even more dangerous are the less obvious cases: publishers who do not do away with peer review, but who adjust it according to nonscientific factors; journals that may not engage in wide-scale and systematic self-citation, but that apply subtle pressures on authors and editors to adjust citations in favor of the journal, rather than based on scholarly grounds; authors who may not steal text verbatim, but who lift ideas without giving proper credit. These are much harder to measure and adjudicate. What do you think? Are such practices significantly distorting the scientific literature or enterprise? Do you have a story of such dubious practices to tell?

    One conclusion that I am ready to draw is that we need to back away from the use of bibliometrics like the impact factor in judging scientific quality. It has long been noted that what the impact factor measures is not well correlated with the quality of a journal, and even much less with the scientific quality of the papers appearing in it or of the authors of those papers. In our field, the 2008 IMU-ICIAM-IMS report Citation Statistics [4] made that case eloquently. Less emphasized has been that these metrics are open to gaming, and are in fact being gamed; in some cases they are likely a better indicator of the unscrupulousness of the authors, editors, or publishers than of the quality of their work. Frequently, I hear of technical solutions, proposed in the hope that an adjustment to the formula—for example, increasing the time frame for the impact factor from 2 to 5 years, or excluding self-citations—will solve the problem. Such remedies, in my opinion, are doomed to failure. The numbers of citations to mathematical articles are small integers, with excellent papers often drawing lifetime totals of only tens or hundreds of citations, and such numbers are easily manufactured. What one editor can do in one journal by self-citation, a pair of editors can do with two journals without self-citation. Counting can never replace expert opinion.
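    The last point deserves a concrete illustration (added here, not part of the quoted article; the journals and numbers are hypothetical): two journals whose editors cite each other inflate their impact factors just as effectively as one journal citing itself, and a rule that excludes self-citations catches none of it.

    def impact_factor(citation_counts, journal, articles, exclude_self=True):
        # citation_counts maps (citing_journal, cited_journal) -> number of
        # citations; self-citations are excluded, as the proposed remedy asks.
        total = sum(n for (src, dst), n in citation_counts.items()
                    if dst == journal and not (exclude_self and src == journal))
        return total / articles

    # Journals A and B each published 100 articles in the two-year window and
    # each drew 50 "organic" citations from the rest of the literature; their
    # editors then cite each other 400 times.
    citation_counts = {
        ("world", "A"): 50, ("world", "B"): 50,
        ("A", "B"): 400,  # A's editor cites B heavily...
        ("B", "A"): 400,  # ...and B's editor returns the favor.
    }

    for journal in ("A", "B"):
        print(journal, impact_factor(citation_counts, journal, articles=100))
    # Both impact factors jump from 0.5 to 4.5, with zero self-citations for
    # the exclusion rule to catch.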

    What can we, as concerned scientists, do? Of course, the first step is to look to ourselves: As scientists, we should place great emphasis on scientific integrity, in what we write and what we review. Ask yourself some questions before lending your name to a journal as an editor. Does that journal hew to high standards of peer review? Does it have clear policies and mechanisms for enforcing them? Is its output a useful addition to the sprawling scientific literature? We also need to educate others, not only our students, but also our colleagues and administrators and managers. The next time you are in a situation where a publication count, or a citation number, or an impact factor is brought in as a measure of quality, raise an objection. Let people know how easily these can be, and are being, manipulated. We need to look at the papers themselves, the nature of the citations, and the quality of the journals. I look forward to learning from the experiences and thoughts of the SIAM community. You can reach me at president@siam.org.

    [1] www.siam.org/journals/plagiary
    [2] Nature, vol. 456, 27 November 2008, page 432.
    [3] www.sciencenews.org/view/generic/id/44706/title/Science_+_the_Public__‘CRAP’_paper_accepted_for_publication
    [4] www.iciam.org/QAR/CitationStatistics-FINAL.PDF



    An example from my personal experience is the highly regarded LNCS series from Springer. I found several articles from the same group of authors in several LNCS volumes. All of the articles were essentially the same; only some paragraphs and the order of the authors had been shuffled around.

    So, what are your opinions? Can we do without bibliometrics? What would be the consequences? Can we change the system so that it cannot be manipulated as easily? Or should we not change the bibliometric system, but instead rely on encouraging ethical publishing (if so, we would probably need to discuss this in the EMM...)?
    "The cheapest form of pride however is national pride. For it reveals in the one thus afflicted the lack of individual qualities of which he could be proud, while he would not otherwise reach for what he shares with so many millions. He who possesses significant personal merits will rather recognise the defects of his own nation, as he has them constantly before his eyes, most clearly. But that poor blighter who has nothing in the world of which he can be proud, latches onto the last means of being proud, the nation to which he belongs to. Thus he recovers and is now in gratitude ready to defend with hands and feet all errors and follies which are its own."-- Arthur Schopenhauer

  2. #2

    Re: [Article] Integrity under attack: The state of scholarly publishing

    Working in academia myself, I think the root of the problem lies with the overhyped university rankings (and, of course, the money and prestige associated with them).
    That started in the US, where the paying student and the paying research partner want advice on where they get the most bang for their buck.
    So a ranking had to be devised (there are several), and a ranking requires a metric. The easiest metric is simply to count things, like the number of papers a university has, or how often its papers are cited. That wave started to swing over into the rest of the world, either because state-financed universities needed a justification for their taxpayers, or because they simply wanted to become part of the hype and be on par with prestigious names like Harvard, MIT, or Berkeley. Fashions like this often develop a dynamic of their own.
    Now this is where trickle-down economics starts to set in. A university wants to hire a new professor. In the best case, they simply want the best people there are, and the publication indices and impact factors give them a simple metric by which to judge their applicants. Suddenly, the length of the publication list becomes more important than teaching quality, research capability, and other soft factors.
    In the worst case, the university simply wants to move up in the rankings, so they hire someone with a long list that they can A) attach themselves to and who B) is inclined to pump out even more publications during his years at the new university that is hiring him.

    The problem is that any self-maximizing system will adapt to the given constraints, no matter how irresponsible or nonsensical they are.

    One measure of this development is that the number of conferences and journals has exploded in the last 10+ years. Under closer inspection, one finds that many of those journals are hosted by one or two research institutes, nothing more, and mainly push the papers of their own researchers (although the impact factor of those is pretty low most of the time).

    One problem serious conference organizers have to cope with now is "reference only" presentations: basically, a research team will submit the abstract and the publication for the conference proceedings, but will not show up at the conference (not even giving advance notification of their absence). Often these are researchers from India or China who cannot afford the travel expenses, but still want to be part of the game. To cite one organiser I met and discussed the subject with: "only Asians I personally know, or who have reliable people vouching for them, get accepted, no matter how interesting their paper looks." Sad but true.

    Another recent practice is "paper splitting": Have some interesting research done, invested a lot of time? Want some more impact? No problem: don't write one paper, write 3 or 4, each concentrating on a different aspect but copying 75% of the text of the other papers you release.
    I experienced that when I happened on a really thorough, in-depth paper that turned out to be important for my research. That team had 5 presentations in 2 years, and every paper was 80% identical to the others (verbatim copies of text and illustrations). For me, that's just annoying because I have to sift through a lot of text, but it also irks me that they get 5 times the credit where maybe 2 times would have been justified.

    It's not that the whole system is corrupt, but with the existence of the internet and "electronic only" journals, some people simply abuse the system, being the black sheep that colour the whole flock grey.

    I think that sooner or later (after enough negative examples), the whole system will be reformed, and a lot of papers will be ditched by the wayside as unscientific.

    So yeah, the problem described in the article is real; but then again, the frustration is rising and will probably result in some big bang.
    Neutral to the teeth.
    “'My country, right or wrong' is a thing no patriot would ever think of saying except in a desperate case. It is like saying 'My mother, drunk or sober.'”
    G.K. Chesterton

  3. #3

    Re: [Article] Integrity under attack: The state of scholarly publishing

    I agree with most things Nik said, especially his description of the development of the current system and its sometimes bizarre consequences. However, I do not agree with his prediction of a "Big Bang". Researchers and scientists are much too individualistic for such an event to happen. From my experience, it's impossible to convince them to be part of an organized (or even a non-organized) collective action. It just won't happen. So, in my opinion, the system will not change significantly any time in the near future. The only option I see is to continuously point out and discuss the failures and shortcomings of the current system of bibliometrics, in order to increase general awareness, so that two things might happen:

    1. Scientists decide not to feed the system by structuring their research to maximize bibliometric impact.

    2. Commissions tasked with selecting applicants for academic positions look beyond bibliometrics and actually read the papers published by the applicants.

    Personally, I think that this is not going to happen unless a general change of mind, away from economizing every aspect of our lives, takes hold. Which seems unlikely, too.

    So, as you can tell, I'm rather pessimistic that things will change for the better in the near future. Maybe someone has some ideas to prove me wrong? If you plan an academic revolution, count me in.
    "The cheapest form of pride however is national pride. For it reveals in the one thus afflicted the lack of individual qualities of which he could be proud, while he would not otherwise reach for what he shares with so many millions. He who possesses significant personal merits will rather recognise the defects of his own nation, as he has them constantly before his eyes, most clearly. But that poor blighter who has nothing in the world of which he can be proud, latches onto the last means of being proud, the nation to which he belongs to. Thus he recovers and is now in gratitude ready to defend with hands and feet all errors and follies which are its own."-- Arthur Schopenhauer

  4. #4

    Re: [Article] Integrity under attack: The state of scholarly publishing

    I do not think some kind of concentrated, orchestrated action will happen. Professors in general are too individualistic, and too much prima donnas, to agree on any common course.
    But the system as a whole is based on several soft factors and, more importantly, on volunteer work:
    - Professors work as co-editors for journals, reviewing and correcting papers. They do this for free. The workload has already increased immensely, and we will probably see a decreasing willingness of renowned professors to sacrifice their time. When the volunteer work on offer falls below a certain level, the system is no longer workable.
    - As people (researchers) adapt to the system and maximize their profiles, the metric as a whole becomes meaningless, so the appointment committees of the big-name universities will start to look beyond a mere paper metric, changing the system again.
    We will see.
