#reproducibility


Retractions and failures to replicate are signs of weak research. But they're also signs of laudable and necessary efforts to identify weak research and improve future research. The #Trump admin is systematically weaponizing these efforts to cast doubt on science as such.

"Research-integrity sleuths say their work is being ‘twisted’ to undermine science."
nature.com/articles/d41586-025

www.nature.com · Research-integrity sleuths say their work is being ‘twisted’ to undermine science: Some sleuths fear that the business of cleaning up flawed studies is being weaponized against science itself.

Yet another addition to the ever-growing list of analyses showing that top journals are bad for science:

"Thus, our analysis shows major claims published in low-impact journals are significantly more likely to be reproducible than major claims published in trophy journals."

biorxiv.org/content/10.1101/20

bioRxiv · A retrospective analysis of 400 publications reveals patterns of irreproducibility across an entire life sciences research field

The ReproSci project retrospectively analyzed the reproducibility of 1006 claims from 400 papers published between 1959 and 2011 in the field of Drosophila immunity. The project attempts to provide a comprehensive assessment, 14 years later, of the replicability of nearly all publications across an entire scientific community in experimental life sciences. We found that 61% of claims were verified, while only 7% were directly challenged (not reproducible), a replicability rate higher than previous assessments. Notably, 24% of claims had never been independently tested and remain unchallenged. We performed experimental validations of a selection of 45 unchallenged claims, which revealed that a significant fraction (38/45) of them is in fact non-reproducible. We also found that high-impact journals and top-ranked institutions are more likely to publish challenged claims. In line with the reproducibility-crisis narrative, the rates of both challenged and unchallenged claims increased over time, especially as the field gained popularity. We characterized the uneven distribution of irreproducibility among first and last authors. Surprisingly, irreproducibility rates were similar between PhD students and postdocs, and did not decrease with experience or publication count. However, group leaders who had prior experience as first authors in another Drosophila immunity team had lower irreproducibility rates, underscoring the importance of early-career training. Finally, authors with a more exploratory, short-term engagement with the field exhibited slightly higher rates of challenged claims and a markedly higher proportion of unchallenged ones. This systematic, field-wide retrospective study offers meaningful insights into the ongoing discussion on reproducibility in experimental life sciences.
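The figures quoted in the abstract invite a quick back-of-envelope check: if the 45 re-tested "unchallenged" claims are representative of the full 24% that were never independently tested (a strong assumption, not made by the authors), the overall share of non-reproducible claims would be considerably higher than the directly challenged 7%. A minimal sketch of that arithmetic:

```python
# Back-of-envelope using the figures quoted in the bioRxiv abstract.
# Assumption (mine, not the authors'): the 45 re-tested claims are
# representative of all never-tested "unchallenged" claims.
challenged = 0.07   # directly challenged (not reproducible)
untested = 0.24     # never independently tested

# Of 45 experimentally re-tested unchallenged claims, 38 failed.
sample_fail_rate = 38 / 45  # ~0.84

# Extrapolated extra share of non-reproducible claims hiding
# in the untested fraction, added on top of the challenged 7%:
extra = untested * sample_fail_rate
print(f"failure rate in re-tested sample: {sample_fail_rate:.0%}")
print(f"implied overall non-reproducible share: {challenged + extra:.0%}")
```

Under that representativeness assumption, the implied non-reproducible share climbs from 7% to roughly 27%, which is why the 38/45 validation result matters so much.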

To my knowledge, this is the first time that not only prestigious journals but also prestigious institutions have been implicated as major drivers of irreproducibility:

"Higher representation of challenged claims in trophy journals and from top universities"

biorxiv.org/content/10.1101/20


We invite staff and students at the University of #Groningen to share how they are making #research or #teaching more open, accessible, transparent, or reproducible, for the 6th annual #OpenResearch Award.

Looking for inspiration?
Explore the case studies submitted in previous years:
🔗 rug.nl/research/openscience/op

More info:
🔗 rug.nl/research/openscience/op

#OpenScience #OpenEducation #OpenAccess #Reproducibility
@oscgroningen

Replied in thread

Jack Taylor is now presenting a new #Rstats package: "LexOPS: A Reproducible Solution to Stimuli Selection". Jack bravely did a live demonstration based on a German corpus ("because we're in Germany") that generated matched stimuli that certainly made the audience giggle... let's just say that one match involved the word "Erektion"... 😂

There is a paper about the package: link.springer.com/article/10.3, and a detailed tutorial: jackedtaylor.github.io/LexOPSd. There is also a #Shiny app for those who really don't want to use R, which allows code download for #reproducibility: jackedtaylor.github.io/LexOPSd #WoReLa1 #linguistics #psycholinguistics

Continued thread

Just pushed some updates. I have posted the #verilog serial link code. This involved setting up a set of scripts for simulation.

If anybody is interested in helping on this project, I'd love to have somebody try to follow the readme.md to install the prerequisites and run the ./run_sim script.

The goal would be to take notes and flesh out what things aren't in the prerequisite list and how to install them.

I'd be happy to do the same for somebody else's project.

Replied in thread

7/ Wei Mun Chan, Research Integrity Manager

With 10+ years in publishing and data curation, Wei Mun ensures every paper meets our high standards for ethics and #reproducibility. From image checks to data policies, he’s the quiet force keeping the scientific record trustworthy.

Are you an educator wondering if and how to include #preregistration in student assignments? Then join our #webinar and learn from our speakers!

🗓️ 26 June
⌚ 13:30–15:00 hrs
Register here:
events.teams.microsoft.com/eve

Ewout Meijer from Maastricht University and Elen Le Foll from the University of Cologne will share their experiences with having students preregister their term papers and thesis work.

We will make a recording of the webinar available.
#openScience #reproducibility

How reproducible is research in digital art history?
Béatrice Joyeux-Prunel argues for a post-computational framework that complements FAIR data with ethics, expertise & interpretive validation.
link.springer.com/article/10.1
#DigitalHumanities #Reproducibility #FAIREST #DigitalArtHistory #OpenScience

SpringerLink · Digital humanities in the era of digital reproducibility: towards a fairest and post-computational framework - International Journal of Digital Humanities

Reproducibility has become a requirement in the hard sciences, and its adoption is gradually extending to the digital humanities. The FAIR criteria and the publication of data papers are both indicative of this trend. However, the question that arises is whether the strict prerequisites of digital reproducibility serve only to exclude digital humanities from broader humanities scholarship. Instead of adopting a binary approach, an alternative method acknowledges the unique features of the objects, inquiries, and techniques of the humanities, including digital humanities, as well as the social and historical contexts in which the concept of reproducibility has developed in the human sciences. In the first part of this paper, I propose to examine the historical and disciplinary context in which the concept of reproducibility has developed within the human sciences, and the disciplinary struggles involved in this process, especially for art history and literature studies. In the second part, I will explore the question of reproducibility through two art history research projects that utilize various computational methods. I argue that issues of corpus, method, and interpretation cannot be separated, rendering a procedural definition of reproducibility impractical. Consequently, I propose the adoption of ‘post-computational reproducibility’, which is based on FAIREST criteria as far as digital corpora are concerned (FAIR + Ethics and Expertise, Source mention + Time-Stamp), but extended to include further sources that confirm computational results with other non-computational methodologies.