In the age of generative AI, the “replication crisis,” and increasing pressure to “publish or perish” – amplified by growing social and political hostility to academia – the temptation to cut corners, look the other way, or otherwise sacrifice academic integrity to get ahead has never been stronger. Indeed, many scientists, journal editors, and publishers do, unfortunately, engage in unethical practices – as The Analytical Scientist has highlighted in recent years (see: P(r)aying for Authorship; Upholding Academic Integrity in a World of AI; and Are We Enabling Abusive Reviewers?).
However, there is another side to the coin – one less discussed, but no less damaging. In recent years, a new breed of self-appointed watchdogs has emerged online: anonymous collectives who use post-publication peer review platforms and social media to launch coordinated campaigns of criticism against researchers.
Some accusations may have merit. But often these campaigns are built on a foundation of insinuation and sheer volume – with real-world consequences: lost funding, mental health distress, and reputational harm.
One prominent analytical scientist, who wishes to remain anonymous, shared their experience of being subjected to such a campaign. Although their institution deemed no formal investigation necessary, the public shaming had already taken its toll, including the withdrawal of speaking invitations.
It is in this context that the anonymous group ScienceGuardians has emerged – not to shield misconduct, they say, but to protect the integrity of the scientific process itself. What follows is their call to reclaim scientific integrity from those who have hijacked its banner. Their message: critique must be grounded in evidence, not ideology; and accountability must apply to all – including those who claim to be defending it.
The Silent Collapse
How integrity is being hijacked – and why science must reclaim it
By “Elias Verum” on behalf of ScienceGuardians
Science is sustained by trust – between data and interpretation, critique and fairness, openness and accountability. But that trust is being eroded. Today, the architecture of scientific publishing is strained not only by commercial pressures and technological disruption, but also by the rise of unverified influence, where certain actors – operating without oversight or consequence – work to shape reputations, derail careers, and distort the public record of science itself.
At ScienceGuardians, we believe in the necessity of anonymity – but only when paired with verification and accountability. Our concern lies not with anonymity as a principle, but with how it is exploited by those who operate beyond any professional, ethical, or evidentiary bounds. And while institutions remain largely silent, these actors have filled the vacuum – replacing measured discourse with orchestrated denunciation.
This is not the future science deserves.
Who are the players?
The ecosystem of modern scientific publishing has become overrun by intertwined dysfunctions, driven by five core actors and enabled by one critical institutional failure:
Predatory publishers – and predatory practices within legitimate publishers – who have commodified scientific output, turning peer review into a performative ritual and prioritizing volume and profit over quality and integrity.
Citation cartels and paper mills churning out fraudulent papers with artificial authorship and manipulated metrics.
Anonymous mobs, hiding behind the appearance of "sleuthing," whose coordinated attacks have replaced due process with public shaming.
Predatory conferences and exploitative event organizers, offering meaningless speaking slots and fabricated recognition in exchange for fees – undermining scholarly communication and academic credibility.
Reckless journalism and poorly informed media coverage, which often amplify accusations without context or verification – shaping narratives that precede evidence and damage reputations without recourse.
Weak institutional procedures and passive oversight mechanisms, which prioritize risk avoidance over fairness – failing to protect individuals even in the face of clear and well-documented injustice.
And as a result of these intertwined dysfunctions, the authors – the central pillars of the academic community, many of whom are early-career scientists – find themselves trapped in a system that has chosen the path of least resistance: placing the full weight of blame and consequence on them.
What are the causes?
The roots of this crisis are deep – but they begin with something far more subtle than corruption: an accumulated disempowerment of researchers and a systemic absence of guidance, tools, and standards. As a result, a series of unchecked forces has reshaped the academic landscape into something far more fragile – and far more dangerous.
Among the pressures driving this transformation is a growing obsession with citation metrics, where careers now rise or fall on H-index scores and journal impact factors, distorting the purpose of publishing from sharing knowledge to chasing numbers.
In parallel, open-access profiteering has taken hold; under the guise of accessibility, publishers have adopted a pay-to-publish model in which editorial scrutiny is too often sacrificed for financial throughput.
Compounding the problem is the absence of foundational guidance: many researchers – especially those early in their careers – receive little to no structured training in research ethics, data integrity, or responsible publishing. Without clear, empowering best practices, scientists are left to navigate a high-pressure environment blindly, increasing their vulnerability to both exploitation and accusation.
This fragility is further exploited by platforms like PubPeer, which allow users to create multiple accounts without oversight, enabling a single individual to masquerade as a chorus of voices – an echo chamber built on manufactured consensus.
Finally, the lack of guidance on the responsible use of AI in research has left many scientists without clear benchmarks. Rather than being supported, those using emerging tools often face skepticism or public shaming – discouraging honest disclosure and encouraging silence or misuse, not out of intent to deceive, but out of fear of judgment.
These are not isolated flaws. Together, they have created an environment where narrative eclipses evidence, where silence is safer than dissent, and where reputational control – not scientific rigor – defines credibility. In this vacuum, self-appointed enforcers have emerged – unverified, unaccountable, and increasingly coordinated. And among them, the PubPeer Network Mob has risen to dominate the space, exploiting these systemic cracks to seize disproportionate influence over the scientific record.
Collectively, this group – predominantly non-academics or unaccomplished academics, some with no scientific background at all – has manipulated the research integrity narrative to position itself as an arbiter of truth, advancing its agenda through widespread orchestrated harassment, intimidation campaigns, and reputational destruction.
In the sections that follow, we explain in greater depth how this network operates, the scale of its activities, and why its unchecked influence poses one of the most serious threats to integrity in science today.
What are the effects?
We are witnessing the industrialization of academic harassment.
Just three individuals – Perpetrators 1 (K.P.), 2 (A.M.), and 3 (D.B.) in our investigation – are responsible for over 52,000 entries on PubPeer as of April 1, 2025, approximately 24 percent of the platform’s all-time output. One of them, a financial advisor with no scientific background, has openly admitted to running several anonymous online identities. Another, operating under “Desmococcus antarctica” and “Rhipidura albiventris,” has created over 17,500 entries. Together, they’ve left hundreds of thousands of comments, flooding the platform, overwhelming targets, and creating what can only be described as dossiers of destruction.
In one particularly egregious case we have documented, Jörg Rinklebe, Professor of Soil and Groundwater Management at the University of Wuppertal, Germany, had nearly all of his published works – over 500 papers – attacked on PubPeer within a short span of time, including 100 in a single month, almost entirely by the anonymous account “Desmococcus antarctica” operated by Perpetrator 2. His university repeatedly cleared him of wrongdoing. But on PubPeer, verdicts are not determined by evidence. They are assumed by volume. The PubPeer Network Mob has pushed the deeply flawed notion that “the more cases someone has on PubPeer, the more fraudulent they must be” – a tactic designed to bypass analysis and shame researchers into silence.
This system, by design, offers no recourse, no transparency, and no accountability. And it works. Retractions are forced. Institutions panic. Publishers bend. Careers end.
Unfortunately, these perverse incentives have been with us for decades. But today’s tools – mass anonymity, AI-driven analysis, metrics-based prestige, and viral platforms like X – have magnified them beyond recognition.
Integrity work used to be slow, evidence-driven, and peer-reviewed. Today it is reactive, anonymous, and ideological. We are no longer confronting misconduct; we are accusing by impression, punishing by volume, and judging by public reaction.
When fear replaces debate, we are no longer doing science. When thousands of entries are posted not by researchers but by pseudonyms, often with clear personal vendettas or professional incompetence, science becomes hostage to ideology, unverified anonymity, and algorithmic destruction.
Most devastatingly, real misconduct becomes diluted – buried in a sea of frivolous accusations, often led by those with no qualifications to assess science in the first place. Meanwhile, researchers, especially younger ones, are left terrified of being next. Not for falsifying data – but for making enemies online.
What must be done?
The academic community is a vibrant, dynamic, and beautifully alive organism – capable of cleansing itself of the plagues we have named, if only given the space, tools, and freedom to do so. The majority of researchers are not dishonest. What they need are fair, transparent systems and principled standards – ones that reward integrity, not silence, and protect rigor, not status.
That’s why we believe reform will not come from within the corrupted centers. It must come from those with the courage to name what others fear to admit, and the clarity to build what others have failed to imagine.
ScienceGuardians offers a post-publication peer review platform grounded in ethics, verification, and transparency (scienceguardians.com). At its core is a set of principles designed to restore integrity and accountability to scientific discourse. Each user is limited to one account – ensuring that no one hides behind masks or operates multiple sockpuppets to manipulate discussions. Academic identity verification is mandatory upon registration, requiring users to confirm their institutional affiliation and role via a recognized academic, publishing, or indexing database email address. While users may choose to remain anonymous publicly, their identity is confirmed internally, eliminating the possibility of pseudonymous drive-by accusations.
Importantly, verified users are not subject to moderation. Once their credentials are confirmed, all contributions are published immediately, fostering open, accountable, and uninterrupted discourse. The platform also structures critique through persistent, topic-specific threads for each publication, allowing discussion to evolve constructively rather than descend into disorganized commentary. Underpinning all of this is a set of community-anchored ethics guidelines – already available at: scienceguardians.com/resources/main – which aim to support authors, editors, reviewers, and institutions in upholding the highest professional standards.
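To make the stated policy concrete, here is a minimal sketch in Python of the registration logic described above. This is our own illustrative pseudocode, not ScienceGuardians’ actual implementation; the domain allow-list, function names, and data structures are all hypothetical stand-ins for whatever the platform uses internally.

```python
# Illustrative sketch only: ScienceGuardians has not published its code.
# It models the stated policy: one account per verified identity, mandatory
# institutional email verification, optional public pseudonymity, and
# immediate (unmoderated) posting for verified users.

from dataclasses import dataclass

# Hypothetical allow-list standing in for "recognized academic, publishing,
# or indexing database" email domains.
RECOGNIZED_DOMAINS = {"auth.gr", "uni-wuppertal.de", "example-university.edu"}

@dataclass
class Account:
    real_identity: str   # verified internally, never shown publicly
    public_name: str     # may be a pseudonym
    email: str
    verified: bool = False

# Keyed by verified identity, not by email or display name,
# so one person cannot hold multiple accounts.
registry: dict[str, Account] = {}

def register(real_identity: str, public_name: str, email: str) -> Account:
    if real_identity in registry:
        raise ValueError("One account per person: identity already registered")
    domain = email.rsplit("@", 1)[-1].lower()
    if domain not in RECOGNIZED_DOMAINS:
        raise ValueError("Email domain is not on the recognized academic list")
    account = Account(real_identity, public_name, email, verified=True)
    registry[real_identity] = account
    return account

def post_comment(account: Account, text: str) -> str:
    # Per the stated policy, verified users are not pre-moderated:
    # contributions publish immediately under the public name.
    if not account.verified:
        raise PermissionError("Unverified accounts cannot post")
    return f"{account.public_name}: {text}"
```

The essential design choice is that the registry is keyed by the verified identity rather than the email address or display name – which is what makes “one person, one account” enforceable while still permitting public pseudonymity.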
More broadly, we propose:
Verified accountability for all post-publication commentary, including platforms like PubPeer.
Strict transparency requirements for publishers – fair retraction protocols and independent appeals.
Institutional oversight of online integrity platforms – to distinguish critique from coordinated abuse.
A decisive end to anonymous mass accusations, particularly by individuals with no credentials, oversight, or transparency.
And most critically, a new generation of independent integrity platforms, governed by scientists, technologists, and legal experts – not influencers or mobs.
We are not against critique. We are against corruption in the name of critique and research integrity. The academic community must learn to distinguish between ethical reformers and those who have made a career out of destroying others – often behind a veneer of “research integrity” and self-serving moralism.
If integrity is to survive, we must rescue it from those who have hijacked its banner for influence, money, or revenge.
The real question is not how many retractions we’ve enforced. The question is: how many truths have we silenced along the way?
Scientific Harassment: A New Threat for the Analytical Chemistry Community
By Victoria F. Samanidou, Laboratory of Analytical Chemistry, Department of Chemistry, Aristotle University of Thessaloniki, Greece
Ethical issues in research and publications are an old story. Plagiarism, fabrication, falsification, paper mills, “salami slicing,” ghost and gift authorship have been troubling the scientific world for decades.
So, nobody is shocked by revelations of scientific misconduct – analytical science included. What is new, however, is the sheer volume of reported cases, resembling an avalanche that sets off far-reaching side effects. Equally alarming is the ease with which accusations are now made – and the serious consequences that follow, including the defamation of both established and early-career scientists. Slander can result in lasting mistrust, and reputations may be tarnished permanently, regardless of eventual vindication.
The adage “there’s no smoke without fire” is often cited – but is it always true?
In today’s hyperconnected world, negative news and accusations spread online at lightning speed. But when allegations are proven unfounded, corrections rarely receive the same visibility. At best, they become footnotes in obscure comment threads. The damage to a scientist’s reputation – and potentially that of their institution – can persist long after the truth comes out.
What if these accusations are not based on facts, data, or scientific evidence? What if they are fabricated by anonymous users who comment unethically, motivated by conflicts of interest or other questionable motives?
We’ve all witnessed online commentary from anonymous users – some with fake profiles, others refusing to disclose their identities – who express strong opinions despite lacking relevant expertise. Social media platforms such as Facebook and X allow virtually anyone to broadcast feelings, opinions, and criticisms with little to no oversight.
The situation is compounded by platforms that permit anonymous or pseudonymous critiques of scientific publications. Without requiring verifiable credentials (like a Scopus ID or ORCID number), individuals can create multiple non-institutional accounts, use different email addresses, and post repeatedly – potentially flooding comment sections with bad-faith attacks. Some of these activities may even be orchestrated by commercial entities, including publishers or predatory journals, operating behind a veil. In politics, the practice of paid mass commenting is well documented – who’s to say science is immune?
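One low-cost safeguard the author alludes to – requiring a verifiable credential such as an ORCID iD – is straightforward to implement at the format level. The Python sketch below validates an ORCID iD’s check digit using the ISO 7064 MOD 11-2 scheme that ORCID documents publicly. Note that this confirms only that the identifier is well-formed; verifying that it exists and belongs to the registrant would still require a lookup against the ORCID registry.

```python
# Minimal sketch: structural validation of an ORCID iD (xxxx-xxxx-xxxx-xxxx).
# The final character is a check digit computed with ISO 7064 MOD 11-2,
# per ORCID's published documentation.

def orcid_check_digit(base15: str) -> str:
    """Compute the MOD 11-2 check digit for the first 15 digits."""
    total = 0
    for ch in base15:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_plausible_orcid(orcid: str) -> bool:
    """True if the iD has the right shape and a matching check digit."""
    digits = orcid.replace("-", "")
    if len(digits) != 16 or not digits[:15].isdigit():
        return False
    return orcid_check_digit(digits[:15]) == digits[15].upper()

# ORCID's own documented sample iD passes; a single-digit typo fails.
assert is_plausible_orcid("0000-0002-1825-0097")
assert not is_plausible_orcid("0000-0002-1825-0098")
```

A checksum like this costs nothing to enforce at registration, yet it blocks the casual creation of throwaway accounts backed by invented identifiers.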
Unfortunately, Aeolus’s bag is already open, and in the era of AI and social media, the problem of scientific harassment cannot be easily prevented.
Recently, there has been a huge stream of retracted papers. It appears to have become a trend. New platforms are emerging, sleuths are hired, posts are widely circulated, and the “retraction factory” is productive and growing – though not without collateral damage.
There are many innocent victims affected by this rise in retractions. Nobody denies that retraction is a reasonable consequence when published work involves serious misconduct, such as manipulated or fabricated data, plagiarism, or other unethical practices, including bribing editors. But what if that is not the case?
What are the consequences of a mistakenly retracted manuscript? Citations to the paper are no longer considered valid, although paradoxically the retracted article may still gain citations. Journals with high numbers of retractions may be downgraded. The accused scholar may lose access to funding or grants. Other authors, who trusted the journal’s impact factor, may find their work diminished by association or excluded from indexing databases such as Web of Science. Institutional rankings may also be affected. Are these consequences reversible? We must remember that careers, promotions, and grants often rely on metrics.
So, who are the primary victims? Authors – attacked from all sides. They conduct research, pay to publish or to be read, devote time as reviewers, and yet they find themselves exposed to a storm of unethical behavior when they should be protected at its center.
The main weapon in the scientific arsenal to prevent this “scientific harassment” is the elimination of anonymity. Authors publish their work openly, with their names known. Meanwhile, platforms allow anyone to file a complaint against a publication without revealing their identity. Anyone can anonymously or pseudonymously express criticism – often driven by conflicts of interest, rivalries, or competition for grants.
Though these platforms claim to uphold scientific integrity, they are often used in unethical ways. Misconduct investigations must indeed be pursued – but only when they are based on scientific evidence and transparent reasoning, not anonymous or covert attacks.
Only when peers openly express their opinions and reveal their identities can the threat to scientific integrity be addressed. This approach would help distinguish legitimate, evidence-based concerns from pseudonymous, allegedly scientific attacks that lack theoretical or factual grounding.
“Dignity does not consist in possessing honours, but in deserving them,” Aristotle wrote in Nicomachean Ethics. His belief in moral integrity as the basis for respect remains relevant. At the same time, no one should be unjustly accused by anonymous “scientists.”
Upholding ethics in research and in analytical science is essential if society is to regain its trust in science.
Conflicts of Interest in Analytical Science: Between Ethics and Anonymity
By Damià Barceló, Honorary Adjunct Professor, Chemistry and Physics Department, University of Almeria, Spain
This article is being published at a timely and crucial moment. The recent wave of actions targeting science and scientists – often led by anonymous mobs using platforms like PubPeer to launch intimidation campaigns and reputational attacks – typically avoids engaging with the scientific content itself, instead offering superficial remarks about alleged “conflicts of interest” (COIs) between editors and authors of a given paper.
Let’s talk about COIs. Do we even have a clear, standardized definition of a conflict of interest? When is a COI considered active – within the last five years of collaboration? Ten? Often, authors are invited to contribute to a paper by the corresponding author without knowing the rest of the authors personally. Are all of them then considered to have COIs? Do publishers have consistent criteria for what constitutes a COI? And why do some journals indicate the name of the handling editor on the front page of the accepted article, while others omit it? PubPeer’s anonymous actions tend to target journals that disclose the editor’s name – is that fair?
How many people, including authors, editors, and publishers, have been influenced by anonymous PubPeer comments to retract a paper? I believe an open and transparent discussion about conflicts of interest would benefit the entire scientific community, especially since many PubPeer actions are based solely on “suggested” COIs without supporting evidence.
Turning back to this opinion piece by ScienceGuardians – it is both useful and crystal clear, addressing the core of the issue. It highlights several systemic failures, including citation cartels, unchecked anonymity, Open Access profiteering, and the absence of clear guidance on the use of AI in research. It also cites the troubling example of a colleague and friend in Germany who has been relentlessly targeted on PubPeer over the past months or even years.
I would also like to bring the context of analytical chemistry into the discussion. Analytical chemistry is, of course, a scientific discipline, so the broader concerns raised apply here as well. But there are specific aspects of analytical chemistry that make our situation particularly relevant. Our field is deeply rooted in strict methodologies and protocols. We, as analytical chemists, are meticulous in selecting high-purity chemical standards and top-performing instruments. We don’t treat instruments as “black boxes” – we want to understand exactly how measurements are made. This sets us apart from other scientific disciplines.
As analytical chemists, we also take pride in developing and following detailed guidance protocols. We strive to do things correctly – just as ScienceGuardians proposes solutions to uphold scientific integrity, such as requiring verified users and protocols. This is where our values align.
Finally, I was especially struck by the last line of the opinion piece: “how many truths have we silenced along the way?” The author is absolutely right – and that is a question the scientific community can no longer afford to ignore.