Schedule

Recordings
Recordings of most plenary sessions and some others are available on the AIMOS YouTube channel. Enjoy viewing! We hope you can join us at our next conference; subscribe on the AIMOS homepage to be kept informed.
The following recordings are not available due to a technical problem with Zoom; we are trying to recover them:
- Lightning Talks 2
- Plenary Sarah de Rijcke: Responsible Research on Research in the 21st century: a bird's-eye view
- MiniNotes: Improving inference: the role of statistics education
- MiniNotes: Scientific communication and evidence based policy in the age of metascience
Convert the conference schedule into your local timezone using our Shiny app:
https://aimos.shinyapps.io/2021/
Times shown on the left are in Australian Eastern Daylight Time (AEDT, UTC+11).
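If you prefer to script the conversion yourself, here is a minimal sketch in R using the lubridate package; the session time below is illustrative, not taken from the schedule.

```r
# Convert a session time listed in AEDT (UTC+11) to your local timezone.
# The date and time here are illustrative assumptions.
library(lubridate)

session_aedt <- ymd_hm("2021-12-01 09:00", tz = "Australia/Sydney")

# Sys.timezone() picks up your local timezone; replace it with an
# explicit name such as "Europe/London" if needed.
with_tz(session_aedt, tzone = Sys.timezone())
```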
MiniNotes: Evaluating computational reproducibility
Not all research claims are computationally reproducible; that is, retesting a claim using the same analyses and the same data can yield different results. This session will explore how often results are irreproducible, how to conduct and interpret findings of reproducibility evaluations, and how researchers can make their studies more computationally reproducible.
Antica Culina: Computational reproducibility in Ecology - Having access to data-processing and analytical code facilitates understanding of the methodological approach, verification of results, and recycling of already written code. In this presentation I will talk about the availability of code in the field of ecology, and the reproducibility of ecological analyses and results. I will then discuss how reproducibility attempts can help us to increase computational reproducibility, and argue for including these in student curricula.
Riana Minocher: Challenges to reproducibility of research on the topic of “social learning” - We quantified the rate of reproducibility of the research literature on the topic of “social learning”. In doing so, we attempted to reproduce the results of 40 published analyses. Overall, we estimated a high rate of reproducibility, given data for a publication, but this required a large amount of effort on our part. We also found that even when data were “available” for a particular publication (obtained online or through direct request), they could still be unusable for the purposes of reconstructing analyses. We argue that the recent proliferation of tools to enable data archiving and data sharing offers only a partial solution to the problem of irreproducibility. There remains substantial work to be done to develop community-wide norms on how data should be documented when shared; on the acceptable standard of data provenance when data are shared; and on what the requirements for code-sharing might be, alongside data availability.
Tom Hardwicke: Computational reproducibility in psychology: Essential, Neglected, and Achievable - Computational reproducibility is a basic quality standard that most scientific articles fail to meet. I will describe several empirical assessments of this problem and suggest some ways in which you (yes you!) can be part of the solution.
PhD student. As part of her dissertation work, she led a project that aimed to quantify the...
Research Associate Dept. of Zoology. Expertise covers evolutionary ecology of bonding,...
Marie Skłodowska-Curie Research Fellow in the Department of Psychology at the University of...
MiniNotes: Advances in alternative models of peer review
Peer review in the context of academic publishing has undergone substantial reform over the last two decades, with the introduction of new policies and experimentation with old ones resulting in an increasingly rapid divergence from its traditional roots. In this session we will hear from three speakers about innovative peer review and publishing paradigms, including overlay journals and crowd-sourced reviews, performing collaborative reviews with colleagues (co-reviewing), and policies that can be used to increase transparency and make peer review more accountable.
Daniela Saderi: Rethinking peer review through openness, equity, and community engagement - Peer review, like any other process, protocol, policy, or structure, is conducted by people and is therefore naturally subject to their biases. In rethinking a more open and equitable way of evaluating research, it is imperative to reflect on past and present mistakes that have led to the inequitable system of today, and to avoid replicating power structures that reinforce the disenfranchisement and exclusion of individuals, groups of individuals, and entire nations.
During my talk, you’ll learn about PREreview, and how we strive to create equity within traditional scholarly peer review by providing concrete opportunities for traditionally marginalized research communities to get involved, train, connect, and be recognized for their contributions to scholarship. You’ll also learn about ways to get involved and join a global community of constructive peer reviewers.
Gary McDowell: Bringing ECR peer review ghostwriters out of the dark - Early Career Researchers (ECRs, such as graduate students and postdocs) are participating in the peer review process, but often in hidden roles on behalf of the invited reviewers they work for. In order to recognize the scholarship of these co-reviewers, and to ensure the explicit inclusion of peer reviewers across a diversity of career stages, it is necessary to enact policies that make peer review more accountable and increase transparency about the process. Here Dr. Gary McDowell will discuss recent work to shine light on the roles ECRs can and should play in the scholarly review process.
Simine Vazire: Making peer review accountable - Despite the central role it plays in ensuring the credibility of science, peer review remains relatively inscrutable and unaccountable. Most of the peer review process happens in secret, and there is little opportunity to evaluate journals and editors to see how well they are fulfilling the functions of peer review. How thoroughly is each manuscript evaluated? How fair is the process? On what basis are manuscripts being accepted and rejected? To what extent are reviewers and editors focusing on objective aspects of research quality versus idiosyncratic preferences? How often do reviewers or editors make self-serving recommendations? By making peer review more transparent, we can begin to assess and improve the peer review process. Increasing transparency can mean publishing the content of peer reviews (but not necessarily the identities of reviewers), naming handling editors on published papers, or asking journals to report more information related to journal operations (e.g., appeals, investigations, frequency of editors submitting to the journal, etc.). Improving the peer review process would improve the quality control mechanisms throughout science.
Co-Founder and Director of PREreview, a fiscally sponsored project of Code for Science and...
Founder of Lightoller LLC, a consultancy providing expertise on early career researchers, with...
Professor, Melbourne School of Psychological Sciences
Keynote Address: The last 10 years
Scholarly studies of science are as old as science itself. The last 10 years of metascience has a unique character. Compared with other scholarly treatments, it is more data-driven, collaborative, grounded in the discipline it studies, applied, interventionist, and activist. Metascience studies how the system works, proposes how it should work, develops interventions to change how it works, and evaluates whether those interventions are having the desired effects. The last 10 years reflects the emergence of metascience as a scholarly activity not just to understand science but to improve it.
Co-Founder and Executive Director of the Center for Open Science (http://cos.io/) that operates...
Lightning Talks 1
North-South and gender co-authorship patterns in an international research institution
Annette Brown
In this lightning talk, we present a study of the co-authorship patterns of researchers affiliated with a single institution that has projects and offices in more than 60 countries around the world. The vast majority of research published by this institution is conducted in or concerns low- and middle-income countries. Using detailed author data coded from 380 journal publications over a two-year period, we examine South-North, local-foreign, female-male, NGO-government-university, and other authorship patterns and explore moderators including funding source. We also report how the institution is using these findings as a baseline for changing practices.
Frequency and types of data and code availability statements in systematic reviews of interventions
Matthew Page
Background: An increasing number of journals are requiring data availability statements in their manuscripts. It is unclear how often data availability statements appear in systematic review manuscripts, and what systematic reviewers typically write in their statement.
Objectives: To estimate the frequency of data availability statements in a random sample of systematic reviews and summarise the content of the statements.
Methods: We searched for systematic reviews with meta-analysis of the effects of a health, social, behavioural or educational intervention that were indexed in 4 databases during the month of November 2020. Records were randomly sorted and screened independently by two authors until 300 eligible systematic reviews were identified. Two authors independently recorded whether a data availability statement appeared in each review and coded the content of the statements using an inductive approach.
Results: Of the 300 included systematic reviews with meta-analysis, 87 (29%) had only a data availability statement and six (2%) had both a data and code availability statement. While 42% of authors stated data were available upon request, 40% implied that sharing of data files was not necessary or applicable to them, most often because “all data appear in the article” or “no datasets were generated or analysed”.
Discussion: Data and code availability statements appear infrequently in systematic review manuscripts. Authors who do provide a statement often incorrectly imply that data sharing is not applicable to systematic reviews.
Creating a sustained movement for cultural change in academia (Project Free Our Knowledge)
Cooper Smout
Academia functions like a ‘tragedy of the commons’ dilemma: Open science practices have the potential to benefit all researchers, but remain underutilised due to incentive structures that reward closed and competitive behaviours. Historically, collective action problems like this have been resolved through collective action: the mass mobilisation of individuals toward some mutually beneficial action. More recently, online ‘conditional pledge’ platforms (e.g., Kickstarter, Collaction) have proven to be a powerful tool for organising collective action on a global scale, by allowing users to commit to action on the condition that some critical mass of prior support is met in their community. Here, I’ll present Project Free Our Knowledge, which brings the same approach to academia in support of open science practices. I’ll provide an overview of the project and highlight two of our exciting new campaigns, which aim to motivate researchers to: (1) share their research code and data publicly, and (2) share their peer reviews publicly. I'll also introduce our new Ambassador Community and provide an overview of the strategies we're employing to increase the uptake of collective action pledges in academia. Finally, I'll outline the long-term vision for the project, which is to develop replicable processes and host increasingly stronger campaigns over time that draw on the experience and knowledge of our growing community. In short, we seek to develop a sustained, evidence-based movement for cultural change in academia.
Why Aren't Replications Cited More? Why Don't We Just Ask?
Bob Reed
Recent research (Coupe and Reed, 2021) finds that replications are not cited as frequently as the papers they replicate. Is this because researchers are unaware that the replication exists? Is it because replications are typically published in lower-ranked journals, and authors favour higher-ranked journals when deciding which studies to cite? Is it because authors only cite those studies that support their findings, and they disproportionately report findings that are consistent with the original studies/established literature? The answer is important because citations, and the potential to earn citations, are an important metric of academic “success.” If replications are not cited very much, researchers may not have much incentive to undertake replications. As a result, science misses out on the disciplinary role that replications play in ensuring scientific integrity. This lightning talk promotes the use of surveys of academics to uncover the answers to these questions.
Open Discussion
Group Discussion: Best Way to Improve Scientific Integrity is for Professional Societies to Sponsor Replication Journals
The Best Way to Improve Scientific Integrity is for Professional Societies to Sponsor Replication Journals: Discuss
Bob Reed
Pre-registration, registered reports, lowering alpha, and making data and code publicly available have all been proposed as remedies for the “replication crisis.” This discussion will focus on the argument that none of these initiatives are likely to be as effective as encouraging more replications. Replication encourages scientific integrity because authors must always be mindful that others might check their work. However, there are two major impediments to replication playing this role: (i) it is hard to publish replications, and (ii) the replications that are published generally do not receive much attention. The solution is for the leading professional societies to publish their own replication journals. This would simultaneously improve the visibility of replications and make it more attractive for researchers to perform them. The first 10 minutes of this event will present the problem and the proposed solution. The remainder of the time will be devoted to discussion and debate.
Group Discussion: Academic, interrupted
Adrian Barnett
Academics who take time off to care for children or lose time to illness often struggle to explain the gap in their CV. These career disruptions can end careers because of the hyper-competitive academic world. Career disruptions are unrelated to talent, which means good academics are being lost because of inequitable systems of promotion and funding. Given that women are often the primary caregiver, this is likely a major cause of the current gender imbalance in academia.
To challenge the current broken system, we will put forward three propositions for fairer funding:
1. That funding be segmented to create fairer comparisons. For example, men and women competing for different pools of funding.
2. That career disruptions in funding applications be assessed by an independent panel of medical/social experts rather than peer reviewers.
3. That peer review is used only to filter fundable proposals, with the winners then chosen at random.
We want to hear from the research community about these propositions through an interactive discussion session.
Group Discussion: Equity in science: Do we need tools to differentiate the barriers to academic performance?
Losia Malgorzata
The lack of diversity at the top ranks of academia is both well-documented and well-entrenched. In this context, the gender gap usually takes centre stage, and less attention is directed towards other biases and barriers. This unconference will bring to light the broad spectrum of barriers and inequalities that lurk in the shadows. We will focus on those that hinder the career progression of underrepresented groups in science at early to mid-career stages. This is because the positive feedback loop (the ‘Matthew effect’) is pervasive in academia: early advantage is perpetuated into more advantage, and early disadvantage is like a ball and chain carried by those less privileged.
To level the playing field, and ultimately retain diversity at all ranks, we may need tools to recognise and act upon the cumulative impacts of unequal access to opportunities. During this unconference we will explore: 1) whether it is desirable, or practically possible, to build an explicit taxonomy of biases and barriers to career progression and retention; 2) where and how the researcher evaluation process could account for the unequal distribution of opportunities and resources; 3) what the risks are of having a structured way for researchers to declare the various barriers and biases that shaped their career trajectories; and 4) whether there are good examples of evaluating academic performance relative to opportunity.
MiniNotes: Connecting Metascience to established science study fields
This series of MiniNotes aims to connect metaresearch communities with contemporary research on overlapping topics produced by historians, philosophers and STS scholars. In this session we will explore recent research on Bayesian approaches to scientific self-correction; the merits and drawbacks of pre- and post-publication peer review; and how historians of science can inform contemporary metaresearch debates.
Remco Heesen: Is Peer Review a Good Idea? - I will discuss benefits and downsides of prepublication and postpublication peer review and argue that the latter is preferable. Reasoning from the perspective of epistemic consequentialism, the effects that abolishing prepublication peer review may have on the social structure of science - in particular the incentive structure and its effects on the behavior of individual scientists - are either positive or neutral. I conclude that based on present evidence, abolishing peer review weakly dominates the status quo.
Fallon Mody: Placing the metaresearch movement in historical perspective - The contemporary metaresearch movement has a well-understood origin story: it was catalysed by the now-infamous Daryl Bem paper published ten years ago, which catapulted the replication crisis into mainstream consciousness and brought together an interdisciplinary community united by a common goal of reform. In this talk, I will explore how we might begin to contextualise the emergence of the contemporary metaresearch movement by placing it in wider historical perspective, beginning in the post-WW2 period. And, in a move not very common for historians, I will reflect on what insights historians and historical scholarship can offer the metaresearch community as it contemplates its future directions.
Felippe Romero: The Many Faces of Scientific Self-Correction - First, I discuss the received view of scientific self-correction in philosophy of science and how the replicability crisis challenges it. Then, I sketch an analysis of scientific self-correction as a process spanning statistical, methodological, and institutional levels. Finally, I discuss some implications of this analysis for philosophical theories about scientific progress.
Research Fellow, MetaMelb
Assistant Professor in the Department of Theoretical Philosophy at the University of Groningen...
An academic philosopher whose research analyzes the social structure of science using a...
Panel Discussion: How to start a revolution in your discipline
In this panel discussion, we'll talk about how to advocate for reform in our respective disciplines. We all have things we wish we could change about the norms and practices in our field, whether that is individual researchers' and labs' practices, journal, peer review, or publishing practices, institutional practices around hiring, grants, promotion, and awards, or collective norms around research synthesis, research training, or science communication. Our panelists are researchers from different disciplines and different career stages who have successfully advocated for changes related to research integrity. We will discuss topics such as: What are the pros and cons of trying to work within existing institutions vs. creating new ones? What strategies have panelists found to be effective (and ineffective) when trying to get reforms adopted? What are the biggest obstacles to reform? How can we protect against unintended consequences and side effects of reforms? How do we promote diversity, equity, and inclusion in science reform? What are the different roles that senior academics and gatekeepers can play versus early career researchers? How important are formal vs. informal networks (e.g., societies and conferences vs. social media)?
Professor of Developmental Neuropsychology
Adrian has worked as a statistician for more than 22 years, working for the pharmaceutical...
A Senior Non-Clinical Lecturer in the Centre for Clinical Brain Sciences at the University of...
Reproducible Research Oxford (RROx) Coordinator, promoting open research practices across all...
Group Discussion: A song of R and p
Adam Vials Moore
As the academy struggles with reproducibility and veracity in research and its reporting, how do we capture and allow for access and discoverability in fields which traditionally do not even have text-based outputs, such as practice research? And how does this influence the reliability and reproducibility of that research? This group discussion will try to identify paradigms for reliability and reproducibility across the non-text disciplines and, where possible, bring those out with real-world exemplars.
Group Discussion: Operationalizing value-driven research and identifying partners to support transformation in academia
Gavin Taylor
Modern institutionalization of research and the influence of different stakeholders and intermediaries have reduced the role of shared values in research. In theory, norms and values express themselves in actions while, in practice, many academics find that aligning research practice with their values is challenging. Indeed, significant parts of academia have reached a stage where the current structures incentivize research practices that frequently stand in the way of academics accomplishing their core mission of generating, curating and sharing knowledge for the benefit of society and the world. Yet the rise of open science, citizen science, and independent scholarship all illustrate that alternative options exist for research to be performed across, or even outside of, academia. In this discussion, we will explore possibilities to systematically strengthen the expression of values in research workflows as well as how research-aligned communities outside formal academia can help drive and accelerate some of the necessary changes in the research ecosystem, including in academia. Some guiding questions will be:
*How do researchers currently put their own values into practice?
*How can research communities operationalize shared values in research?
*What support do reform movements inside mainstream academia require?
*How can independent institutes and other grassroots initiatives support reform movements across the research ecosystem?
Lightning Talks 2 (continued)
A review rubric as a tool for student training and broader engagement in preprint review
Dawn Holford
The number of preprints—typically posted without peer review—has grown in recent years. To support broader engagement with preprint evaluation, we developed a review rubric comprising 26 questions addressing metadata elements and 15 evaluative questions. The goal was to foster open reviews of preprints and engage students and early-career researchers in peer review. We tested the rubric with sixty undergraduate students who completed the rubric and wrote a short narrative review of a preprint as part of their coursework. For comparison, four experts also completed the rubric and a narrative review.
97% of the participants completed the whole rubric, consistent with the high engagement level observed anecdotally. Turnitin (text overlap) scores were lower than in an earlier essay assessment. Students rated rubric questions as slightly easy overall, with evaluative questions rated significantly easier than metadata questions. For the 25 rubric questions where experts converged on a common answer, students showed 59% agreement with experts, with higher agreement for metadata questions than evaluative questions. Students’ narrative reviews mentioned all but one metadata element in the rubric. Students also tended to mention rubric elements in their narrative reviews more than experts did. These preliminary findings suggest the review rubric is a useful educational tool and highlight student skill gaps that could receive more attention in research methods teaching. We will use these findings to improve and update the rubric.
Open Discussion
Lightning Talks 2
ALL-IN meta-analysis: Anytime Live and Leading INterim meta-analysis
Judith ter Schure
Science is idolized as a cumulative process ("standing on the shoulders of giants"), yet scientific knowledge is typically built on a patchwork of research contributions without much coordination. This lack of efficiency has specifically been addressed in clinical research by recommendations for living systematic reviews and against research waste. We propose to further those recommendations with ALL-IN meta-analysis: Anytime Live and Leading INterim meta-analysis. ALL-IN provides statistical methodology for a meta-analysis that is anytime (it can be updated at any time, reanalyzing after each new observation while retaining type-I error guarantees), live (there is no need to prespecify the looks) and leading (in decisions on whether individual studies should be initiated, stopped or expanded, the meta-analysis can be the leading source of information). In this talk we highlight the experience of performing ALL-IN meta-analysis during the Covid-19 pandemic, which showed that synthesizing data at interim stages of studies can increase efficiency when individual studies are slow to reach completion. The meta-analysis can be performed on interim data, but does not have to be. The analysis design requires no information about the number of patients in trials or the number of trials eventually included. So it can breathe life into living systematic reviews, through better and simpler statistics, efficiency, collaboration and communication.
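The "anytime" guarantee in approaches like this rests on test martingales (e-values): under the null hypothesis, a running product of likelihood ratios exceeds 1/alpha with probability at most alpha (Ville's inequality), so the analysis may be updated after every observation. Below is a minimal R sketch of that general idea for Bernoulli data; the setting and all parameters are illustrative assumptions, not the authors' ALL-IN software.

```r
# Anytime-valid monitoring via a running product of likelihood ratios
# (an e-process). Under H0 (p = 0.5) the product exceeds 1/alpha with
# probability at most alpha, no matter how often we look.
set.seed(42)
alpha <- 0.05
p0 <- 0.5   # null success probability
p1 <- 0.7   # alternative used to form the likelihood ratio

x  <- rbinom(200, 1, 0.65)                         # data arriving one by one
lr <- ifelse(x == 1, p1 / p0, (1 - p1) / (1 - p0)) # per-observation likelihood ratio
e_process <- cumprod(lr)                           # running e-value, updated per observation

which(e_process >= 1 / alpha)[1]  # first look at which we may reject H0 (NA if never)
```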
Documenting contributorship with tenzing
Marton Kovacs
Contributors’ information is often handled manually in scientific articles, putting an extra burden on researchers. As the number of contributors on research projects grows, it is becoming clear that current practices are error-prone and inefficient. We created tenzing, an easy-to-use Shiny app and R package to help researchers manage contributors’ metadata and automate their reporting. In this talk, I will showcase how to collect and structure contributors’ information efficiently with tenzing’s current capabilities and talk about plans for future developments.
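A minimal sketch of what this workflow might look like from R, assuming the package's documented pattern of reading a contributors table and printing a CRediT statement; the function names and file name below are assumptions to be checked against the tenzing documentation.

```r
# Sketch of a tenzing-style workflow: read a contributors "infosheet"
# and render a CRediT contributorship statement for the manuscript.
# NOTE: function and file names are assumptions; consult the tenzing
# documentation for the exact API.
library(tenzing)

contributors <- read_contributors_table("contributors_infosheet.xlsx")
validate_contributors_table(contributors)  # flag missing or malformed entries

print_credit_roles(contributors)  # CRediT statement, ready to paste into the paper
```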
Incorporating inclusive design into open education for data science
Rose Franzen & Rose Hartman
Designing for inclusion not only reduces barriers to access for the substantial proportion of learners who identify as having a disability or who have other barriers to access, it creates a higher quality educational experience for everyone by increasing options for customization and preference matching. In our process of designing a modular online data science education program tailored to biomedical researchers, we have made inclusion and accessibility a central component of our design. In this lightning talk, we will touch upon a broad spectrum of accessibility concerns, including accessibility for individuals of varying visual abilities, the neurodiverse, individuals with anxiety, and those who do not natively speak English. An overview of our tools and approaches taken thus far will be provided, and we will share some of our favorite resources. We will end with a discussion of unresolved accessibility issues in the data science space and the need for further action.
In defense of using meta-research evidence for evaluating therapies
Jonathan Fuller
It is generally accepted that meta-research findings of trends in medical research can help to identify problems that scientists might then attempt to correct in future work. But how should scientists and evidence users respond when meta-research findings suggest there is bias in the evidence base currently supporting therapies, such as publication bias or industry bias? I argue that these findings should inform evidence evaluation or ‘critical appraisal’ of this first-order therapeutic evidence because they constitute ‘meta-evidence’ with respect to the relevant therapies. While critical appraisal in evidence-based medicine is currently informed by potential biases that are mostly suggested by epidemiological or statistical theory, the field of meta-research could supply an evidence base for critical appraisal by identifying biases that are suggested by empirical data (for instance, the association between industry sponsorship and favorable research outcomes, indicative of ‘industry bias’). There are at least two ways that meta-research evidence can contribute to making critical appraisal itself more evidence-based: through the development of analytic tools for correcting study results, and through updating the ‘evidence hierarchies’ used by clinical guideline development groups such as GRADE.
Lightning Talks Continued Below
Keynote Address: Responsible Research on Research in the 21st century: a bird's-eye view
Research on research is about ensuring that we have the evidence we need to realise the full potential of research. This is challenging in a currently quite fragmented scientific and policy environment, alongside heightened aspirations to reduce waste and inefficiency and to improve research cultures, including through more responsible research assessment and greater integrity and reproducibility of research. This presentation is a call to action to work toward a better global understanding of the meaning, opportunities, and challenges in Research on Research.
Professor of Science, Technology and Innovation Studies and Scientific Director at CWTS, Leiden...
MiniNotes: Improving inference - the role of statistics education
Statistical misconceptions and questionable research practices (such as p-hacking) are widespread, and have well-demonstrated consequences for false positive rates in the literature. Statistical education is often touted as a solution to these problems but, in practice, has perhaps not received the attention it should have. In this session, Mick McCarthy will examine problems with the statistical presentations in textbooks, Bob Calin-Jageman will discuss developments in teaching an estimation approach to statistics, and Lee Jones will talk about statistical education focused on linear models and assumption misconceptions. There are of course limits to what education alone can fix, but this session is focused on the benefits of improved statistics education.
Michael McCarthy: Statistical fallacies taught as facts in most biology textbooks - Errors in the application and interpretation of statistics are common in the published literature for biology, with null-hypothesis significance testing (NHST) being the main source of error. Avoiding these errors requires better education, but the widespread misinterpretation can become self-fulfilling, with the errors being propagated through the discipline. Following a study by Cassidy et al. (2019) that showed approximately 9 in 10 introductory psychology textbooks defined or explained statistical significance incorrectly, I reviewed 17 introductory biology textbooks for their statistical content. Of these 17 books, 11 explained or defined statistical significance or p-values, but only one book did so without errors. I also present information about other aspects of the statistical content contained in the textbooks. The high incidence of errors in these biology textbooks likely reflects the widespread misunderstanding of NHST within biology, but the errors also make it more difficult to reduce these misunderstandings within the discipline.
Bob Calin-Jageman: Teaching Estimation First: New Tools for Better Inference - The estimation approach to inference emphasizes effect sizes, confidence intervals, and meta-analysis. I’ll show how esci, a new module for jamovi, makes it easy to take an ‘estimation first’ approach to teaching statistics. This approach can give students the skills to see through inferential pitfalls that are common in the testing approach: 1) believing that statistically significant means “established” or “likely to replicate”, 2) believing that a non-significant result rules out an effect, and 3) believing that a significant result and a non-significant result together establish specificity. Data sets and resources for teaching estimation will be provided.
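For a flavour of what "estimation first" means in practice, here is a minimal base-R sketch with simulated data (esci itself runs inside jamovi; the numbers below are illustrative assumptions):

```r
# Estimation-first reporting: the effect size and its interval,
# not just a p-value. Simulated two-group data for illustration.
set.seed(1)
control   <- rnorm(30, mean = 100, sd = 15)
treatment <- rnorm(30, mean = 108, sd = 15)

tt <- t.test(treatment, control)
tt$conf.int  # 95% CI for the mean difference: the estimation-first headline

# Standardized effect size (Cohen's d) using a pooled SD
# (this simple pooling assumes equal group sizes)
pooled_sd <- sqrt((var(treatment) + var(control)) / 2)
(mean(treatment) - mean(control)) / pooled_sd
```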
Lee Jones: It is not as simple as it seems. Linear regression: common mistakes and misconceptions by health researchers - Linear regression is foundational knowledge whose concepts are used throughout statistical theory. It is the most widely used statistical technique, so researchers have to be able to interpret its output. Unfortunately, this basic knowledge is not well grasped by the average researcher, who over-relies on p-values and significance rather than contextual importance and the robustness of the conclusions drawn. While in recent years there has been a focus on statistical errors, less research has focused on the statistical understanding of researchers. Making evidence-based decisions informed by research is essential to improving health outcomes. When studies are poorly designed and analysed, they result in misleading conclusions, which in turn may lead to ineffective or even harmful treatments, with further costs to both the health system and patient wellbeing. This talk will focus on common mistakes and misconceptions we identified when reviewing 100 randomly selected published papers from PLOS ONE, a large medical and science journal. Our research highlights the most common issues for regression analyses and demonstrates where training and reporting guidelines need to be strengthened.
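One habit that counters several of the misconceptions discussed in this session is to read interval estimates and residual diagnostics rather than stopping at significance stars; a minimal base-R sketch on a built-in dataset:

```r
# Fit a linear regression and look beyond the p-values.
fit <- lm(mpg ~ wt + hp, data = mtcars)

summary(fit)  # coefficients with standard errors
confint(fit)  # interval estimates for each coefficient

# Standard diagnostic plots: residuals vs fitted, normal Q-Q,
# scale-location, and residuals vs leverage (assumption checks)
par(mfrow = c(2, 2))
plot(fit)
```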
An accredited statistician (AStat) and past president of the Statistical Society of Australia...
Professor of psychology and the neuroscience program director. Calin-Jageman studies the...
A professor of quantitative ecology with a PhD on stochastic population ecology, he is also...
Panel Discussion: Scientific communication and evidence based policy in the age of metascience
Metaresearch is providing critical insights into the credibility of science. But how do we convey that knowledge outside of scientific circles? We discuss how to work more effectively at the intersection of science communication, evidence-based policy, and meta-science. We do so through both recent and historical examples, tentatively suggesting that bringing a variety of publics into discussions about metascientific topics has the potential to change and enliven some of the focus of academic metascience research. We go on to discuss the role of metascience in law reform.
Jonathan Wai: Scientific communication and evidence-based policy in the age of metascience: Communication and education challenges. This presentation will explore how to work more effectively at the intersection of science communication, evidence-based policy, and meta-science.
Joan Leach: Why Bother? Three reasons to engage publics in metascience issues. Using recent and historical examples, this presentation will argue for bringing a variety of publics into discussions about metascientific topics. I will tentatively suggest that such activity has the potential to change and enliven some of the focus of academic metascience research.
Jason Chin: Where is the evidence in evidence-based law reform? Australian law reform bodies often express a commitment to transparent processes and research-based policy, but they rarely connect those goals. I will suggest that recent law reform controversies could have been prevented or mitigated through practices like transparently conducted syntheses of research and preregistration.
Lecturer at the School of Law at the University of Sydney. Prior to this position, he obtained a...
Assistant Professor of Education Policy and Psychology and 21st Century Chair in the...
Director Australian National Centre for Public Awareness of Science. As an academic leader, Joan...
Lightning Talks 3
Sports science journal policy promotion of transparency and open research
Harrison Hansford
In recent years there has been a ‘crisis of confidence’ or ‘reproducibility crisis’ in many areas of science, including psychology, neuroscience and social science. These issues stand to reduce confidence in the findings of those fields, potentially resulting in wasted research funding, resources and effort. The field of sport science is not immune to these issues, with several appeals to improve research quality and move the field forward. To illustrate this issue, we appraised how well the policies of the leading 38 sport science journals support transparent and open research practices. The journals were evaluated using the Take Up TOP! form, which assesses the degree to which journal policies promote transparency and openness. We also determined the inter-rater reliability and agreement of the Take Up TOP! form using intraclass correlation coefficients and the standard error of measurement, respectively. The average score on the Take Up TOP! form was 2.05±1.99 out of 27, reflecting low engagement with transparent and open practices. Overall inter-rater reliability was moderate (ICC(2,1)=0.68, 95% CI 0.55 to 0.79), with a standard error of measurement of 1.18. However, some individual items had poor reliability. The Take Up TOP! form had moderate reliability, and the policies of the top 38 sport science journals show significant room for improvement in requiring transparent and open research practices.
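For readers unfamiliar with the ICC(2,1) notation, this is how a two-rater agreement check of this kind is typically computed in R with the irr package; the ratings below are made up for illustration, not the study's data.

```r
# ICC(2,1): two-way random effects, absolute agreement, single rater.
# Toy scores for eight journals from two raters (illustrative only).
library(irr)

ratings <- data.frame(
  rater1 = c(2, 0, 5, 1, 3, 0, 2, 4),
  rater2 = c(3, 0, 4, 1, 3, 1, 2, 5)
)

icc(ratings, model = "twoway", type = "agreement", unit = "single")
```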
The national plan for Australian OA: is it any good?
Kylie Papparlado
Australia’s Chief Scientist, Dr Cathy Foley, is proposing a centralised national plan for Australian Open Access, and the Council of Australian University Librarians has taken the lead on negotiating transformative agreements on behalf of its member institutions. This talk highlights the positives and negatives of what is on the table and sketches how universities, as academics’ employers, can use their intellectual property rights to help academics cut better deals with publishers.
Mandatory open data policies increase error correction in ecology and evolution
Ilias Berberi
Open data practices improve scientific rigour, accelerate discovery, and are gaining traction among researchers, funders, and publishers alike. However, keeping track of rapidly changing data policies across academic journals is challenging. We generated a living database of open data policies for 199 journals in ecology and evolution and examined how these policies have affected error correction through article retractions. Yearly retractions increased almost five-fold in journals that require open data but remained unchanged in journals with less stringent data policies. Requiring authors to publicly share the data underlying published results helps science self-correct.
Science as a productive force and reproducibility in times of an organic crisis of capital
Rafael Carduz Rocha
The reproducibility crisis has been a prominent theme since meta-analysis studies led to the conclusion that “for most [experimental] designs and configurations, the assertion made in a study is more likely to be false than true” (IOANNIDIS, 2005). The obstacles to independent verification of the knowledge produced have several distinct technical causes, but they all subsist because they reflect the gears through which capital commands and appropriates scientific work. Subsumed to capitalist relations, science becomes captive to the process of producing surplus-value. The implications of this process are not limited to its commodification, insofar as scientific production goes beyond the spheres of the search for and reproduction of knowledge and refers to the reproduction of society in general. Science and education are not just commodities, but productive forces and a constituent part of the labor force, respectively. Both gain centrality in this new phase of the industrial revolution, triggered by the application of artificial intelligence to the automation of production. The self-contradictory logic of the capital-relation, which has its moment of self-denial in the crisis, maintains a dialectical relationship with scientific progress. As scientific work becomes the main social productive force, it inflicts the crisis of the erosion of working time as a paradigm of value measurement. The gears of scientific production and dissemination, coupled since the 18th century with those of the cycle of rotations of capital, feel the faltering movement of the world economy.
Wikipedia integrated scientific journals
Thomas Shafee
Whilst top academic articles get <10,000 views over their lifetime, a Wikipedia article on the same topic can get that many each day. Despite the encyclopedia's broader reach, academics rarely contribute to it.
However, the relationship between Wikipedia and academic publishing is changing. In particular, WikiJournals are bringing together the scholarly rigour of academic publishing with the massive impact of Wikipedia and Wikidata. Participating journals publish a range of formats: from broad reviews copied fully into Wikipedia, to image galleries that get integrated throughout the encyclopedia. Publishing a peer-reviewed, citable journal article that is also integrated into Wikipedia, reaching broader demographics like students, journalists and policymakers, merges the best of both worlds: academics get access to one of the most efficient outreach and communications channels, and the encyclopedia gets expert input on some of the most complex topics that it covers.
Open Discussion
Workshop: The trouble with disciplines
Taya Collyer
This workshop will equip you with a range of ways to think about academic disciplines, and will be particularly helpful for those wanting exposure to theories and approaches from established traditions for studying science. Disciplines are an obvious feature of the scientific landscape, but their influence and importance are little discussed, even within research about scientific practice. Very few empirical studies directly tackle disciplinary difference in research contexts, and the literature about disciplines is itself fragmented along disciplinary lines. In this workshop I present an interactive tour of the literature about disciplines, including a range of perspectives relevant to meta-science drawn from four established traditions: Sociology of Scientific Knowledge, Sociology of Professions, Science and Technology Studies, and Higher Education Research. In addition, I present four challenges which complicate the study of disciplines. Practical examples are provided from population health research, an area of science which exhibits substantial disciplinary diversity.
Lightning Talks 4
Proposal for a transparent, evidence-based, community-owned scholarly publishing system with disruptive potential
Cooper Smout
Recent years have seen an explosion of interest in moving beyond traditional peer review and toward a more dynamic, post-publication, open evaluation model. Despite the emergence of numerous such journals (e.g., F1000) and platforms (e.g., PREreview), however, community adoption of these systems remains low. Here, I argue that the principal reason for this is not a lack of desire in the community, but because these platforms fail to tap into the self-reinforcing cycle that maintains the traditional journal hierarchy. I’ll then propose an open evaluation model that aims to generate ‘prestige’ at the journal level and thus attract high quality contributions. In brief, this model proposes that articles be rated by peer reviewers on a number of qualities (e.g., novelty, replicability), which in turn are ‘meta-reviewed’ by an editor. Algorithms are then trained on these data to classify articles into different ‘quality tiers’, which are published as distinct journals. In theory, articles in the upper tiers should attract more citations both because they are of higher quality (‘virtuous cycle’) and receive more attention (‘vicious cycle’) than the lower tiers (Kriegeskorte, 2012), thus developing sufficient ‘prestige’ to compete with traditional journals. The model I propose would be transparent, cheap, fast, scalable, dynamic, crowdsourced, open source, evidence-based, flexible and precise. I’ll conclude my talk with a vision for how the model might also be financially viable, thus allowing the community to evolve it over time in line with our needs and the principles of science itself (via meta-research).
Stopping data collection early with group-sequential experiments
Alexander Eiselmayer
Group-sequential designs are one way of conducting controlled experiments that allow researchers to analyze the data while they are being collected and stop if pre-defined criteria are met. This way, researchers can save resources and study participants if the effect size is either much stronger or weaker than anticipated. To maintain the overall Type I error rate (e.g., alpha = .05), interim analyses use lower nominal-alpha thresholds that are determined by a spending function. Ideally, researchers should choose a spending function according to how they anticipate the results will turn out and their resource management strategy. Nevertheless, exploring choices of spending function is cumbersome because of the variety of parameters across function families and the complex mapping between the parameters and the nominal-alpha output.
Wassmer and Pahlke developed an R Shiny app, and Lakens, Wassmer, and Pahlke recently created a tutorial that provides a menu interface for limited exploration (psyarxiv.com/x4azm). In this lightning talk, we will show a prototype of a mixed-initiative direct-manipulation interface for exploring spending functions. Our interface recommends candidate functions based on user-specified characteristics of nominal alphas. We also provide tools to facilitate inspection, comparison, and winnowing towards the final solution. We hope to make group-sequential design more accessible and to encourage transparency in this important research decision, which will enhance future replicability.
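To see how the choice of spending function changes the nominal thresholds at each look, here is a minimal sketch using the rpact package by Wassmer and Pahlke; the design parameters are illustrative assumptions.

```r
# Compare nominal-alpha thresholds at three looks under two
# alpha-spending functions. Parameters are illustrative.
library(rpact)

# O'Brien-Fleming-type spending: very strict early, near-full alpha late
of <- getDesignGroupSequential(kMax = 3, alpha = 0.05, sided = 1,
                               typeOfDesign = "asOF")
# Pocock-type spending: roughly constant thresholds across looks
pk <- getDesignGroupSequential(kMax = 3, alpha = 0.05, sided = 1,
                               typeOfDesign = "asP")

of$stageLevels  # nominal one-sided alpha at each look
pk$stageLevels
```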
Lightning Talks Continued below...
Lightning Talks 4 (continued)
Using nf-core pipelines for tests of robustness in peer review
Jeremy Leipzig
Support is growing for requiring computational reproducibility as a prerequisite for the publication of manuscripts that include in silico analyses. Runnability audits such as CODECHECK, which confirm source code runs and produces comparable results, have been proposed as a component of peer review. Here we extend this concept and introduce "tests of robustness" which can involve swapping out tools, parameters, references, and data subsets to evaluate the underlying validity of a scientific paper. Tests of robustness can be performed by tweaking existing source code, but a total reimplementation of the analysis from high-level methods may prove an even more powerful technique.
In this study, we performed tests of robustness using off-the-shelf pipelines written in the Nextflow framework and distributed by the nf-core project, a vibrant ecosystem for community-developed bioinformatic pipelines. A groundbreaking paper, Dominissini et al "Topology of the human and mouse m6A RNA methylomes revealed by m6A-seq", was selected as a post-publication review candidate based on contrasting citation statements identified by the scite.ai lexical analysis service. Using the publicly available data and methods, we attempted to reproduce key findings in the paper using nf-core pipelines. We found the technique revealed considerable discrepancies that may have proven useful in the context of pre-publication peer review.
Ending epistemological Thatcherism
Austin Mackell
If, as Margaret Thatcher said, there is no such thing as society, then can there be such a thing as public knowledge? Or is knowledge now a fully private, individual affair, purchased and passively consumed by individuals?
Is truth now a matter of taste? What are the alternatives to epistemological individualism?
Classical epistemology defines (individual) knowledge as justified true belief. We can look at public knowledge as a justified true consensus.
We contend that there is currently a deficit of justification, preventing information from graduating, as it were, to fully fledged knowledge.
In the olden days, there was a consensus-forming process of sorts, where institutions served as gatekeepers, setting the boundaries for acceptable debate. Consensus within those boundaries functioned as social consensus, creating a kind of synthetic homogeneity. We have lost this consensus-forming process, as digital technology has democratised the publishing and media creation space, and we have not yet formed a new one.
By creating transparency around the research process behind news deliverables and other knowledge products (using the Stone Transparency System), we can establish a new consensus-forming process based on sourcing and other methodological standards, not centralised or institutional control; a social and civic epistemology.
Open Discussion
MiniNotes: Automated decision support tools for scientists and end users
Various automation tools are now available to support several aspects of the research process, increasing efficiency. In this session, we will learn about automation tools for designing, synthesising and evaluating research, and see how well they apply in practice.
Anna Scott: Designing and visualising efficient literature search strategies with The Search Refiner.
Cyril Labbe: Automated assistance to screen scientific literature for problematic papers.
James Gentile: NLP Pipelines for Knowledge Discovery.
A PhD in computer science and a MS in applied mathematics from the University of Grenoble...
Research Director of Complex and Social Systems at Two Six Technologies. After earning his Ph.D....
Assistant Professor at Bond University, with background in epidemiology (MSc) and health...
Panel Discussion: Correcting Historical Errors
Is the foundational belief that science is self-correcting a myth? Or does the answer depend on the goals of the communities we seek to serve? The focus of this panel session is to explore and discuss research projects -- both scientific and historical -- that have led to the correction of the scientific record, and to reflect on the concept and dynamics of self-correction in science over time. The findings and experiences of the panellists serve as an excellent starting point for a wider discussion on the complexities and contingencies of self-correction in science: What are the role and scope of active error detection? What are the challenges of correcting historical errors? And how does, and how should, the metaresearch community progress this agenda?
Julia Rohrer: The Loss-of-Confidence Project - Mistakes are a routine part of the scientific process, and yet we rarely talk about them openly. We invited psychological researchers to share situations in which they had lost confidence in one of their own published studies due to a mistake for which they are willing to take responsibility. I will talk about our findings, as well as obstacles to scientific self-correction.
Rod Buchanan: I want to reflect on the case of Hans Eysenck and the dubious research on disease-prone personalities he published with Ronald Grossarth-Maticek in the 1980s and 1990s. The belated and still ongoing correction of this research has taken a great deal of effort from many people. This was never a detection or an adjudication problem, more a matter of overcoming institutional inertia and the lingering fealty to a foundational figure. In all, it has been a fraught but instructive process that has produced some welcome if uneven results.
Riana Minocher: Insights from a survey of reproducibility of social learning research
A personality psychologist whose substantive research covers a broad range of topics including...
Roderick D. Buchanan is an Honorary Research Fellow in the History and Philosophy of Science...
PhD student. As part of her dissertation work, she led a project that aimed to quantify the...
Group Discussion: Moving Open Science forward at the institutional/departmental level
Esther Plomp
Open Science practices have seen an increase in uptake in recent years. Nevertheless, some researchers still think of Open Science solely in terms of Open Access to publications. This discussion session aims to address what your institute/department could do to improve awareness of all Open Science practices and support the change towards a more open research culture. To facilitate this discussion we will address questions such as:
- What is needed for your institute/department to improve the uptake of Open Science practices?
- What credit mechanisms need to be in place and how do you establish these?
- What can we do to prevent reinforcing existing inequities?
Bring your recommended practices, (not so) success stories, questions and requirements to the session!
Group Discussion: Whither authorship?
Marton Kovacs, Alex O. Holcombe and Balazs Aczel
Large-scale collaborations have highlighted the need to better address the ‘who did what’ question when publishing research. Current contributorship documentation practices often lack transparency or use taxonomies that only vaguely describe the actual contributions. In this discussion, we will argue that we need to redefine our concepts and practices for a transparent and fair representation of contributorship. Among other questions, we will discuss: How can journals better record authors’ contributorship information? Is CRediT good enough, or how important is it to have an alternative for your field? What is missing from the CRediT categories? How can we get teams to discuss authorship earlier in a project? Are there ways to have the discussion that would reduce dominance by the powerful members of the team? How can we define a threshold for the minimum size of a contribution needed to have one’s name appear under the title of a paper (the traditional authorship position)? Or do we not need to do that?
Workshop: How to ReproHack
Daniela Gawehns
How does research reproducibility work in practice? Try organizing a ReproHack for skilling up on reproducibility.
During a ReproHack, participants try to reproduce published research of their choice from a list of publications with open access data and code. Participants give feedback to the authors on a number of aspects including reproducibility, transparency and reusability. It is a learning experience for the participants, who can apply what they learnt when publishing their own research, and for the authors of the papers who get their work test-driven by other scientists.
In this workshop you will take the first steps towards organizing your own ReproHack (for your group, institute, faculty or university). We will start the session by reproducing a paper in a live-coding session to get a feel for what a ReproHack is all about from a participant's perspective. In the second part of the workshop, we will work our way through several worksheets that will help you with the organizational and technical aspects of a ReproHack. We will leave enough room for discussion and exchanging ideas between participants to make this workshop as interactive as possible.
Workshop: Did Thatcher really say that, though? Parallel research comparison with Stone Transparency
Austin Mackell
In my lightning talk I mentioned a quote commonly associated with Margaret Thatcher (“There is no such thing as society”). But actually I don’t know if that quote is correct, and I have reason to believe it may not be.
How do we find out? How do we know when we have found out? How do we share our critical thinking and research in an age of uncertainty? We use the Stone research transparency platform. Stone is a new tool which allows researchers creating online non-fiction content (fact checkers, reporters, true crime junkies) to track and share their research processes. In this session, users will be tasked with checking the veracity of the quote often attributed to Margaret Thatcher, that “there is no such thing as society”. After a five-minute introduction to the software, participants will have 40 minutes to install the software and track their research as they hunt down the origin of this quote. Staff from Stone will be on hand to provide support. In the second half of the session, we will examine, compare and discuss the different methodologies used by participants, and try to form a consensus on the veracity of the quote and, if possible, propose a formal generalised theory of what counts as a verifiably accurate quote - creating a microcosm of the broader meta-research discussions Stone will provoke across the public and intellectual spheres.
ZOOM LINK
Workshop: Replications & Reversals in Social Sciences
Flavio Azevedo
A (medical) reversal is when an existing treatment is found to be ineffective or harmful. In recent years, scholarship in psychology showed that only 40-65% of classic results could be replicated, and even among those that did replicate, the average effect found was half the originally reported effect. Psychology is not alone: medicine, cancer biology, and economics all have their share of irreplicable results. On this basis, FORRT started a new initiative ("Replications & Reversals in Social Sciences") that collates reversal effects in the social sciences to encourage researchers and educators to incorporate replications of these effects into their students' projects (e.g., third-year projects, theses, coursework). This gives students the opportunity to experience the research process directly, assess their ability to perform and report scientific research, and help evaluate the robustness of the original study, thereby also helping them become good consumers of research. Join this hackathon to take part in the project, help collate reversals, and, eventually, draft a manuscript about the pedagogical consequences and applications of this project.
ZOOM LINK
Lightning Talks 5
Networked marketplaces for responsible sharing of research data
Valentin Danchev
In this talk, I will first outline current practices, incentive structures, and challenges in sharing research data, and individual participant data (IPD) in particular. Then, drawing on multidisciplinary insights from network science, online marketplaces, sociology of science, and metascience, I will sketch out the design of safe and secure data marketplaces and the potential of such socio-technical infrastructures to generate incentives for data sharing, build public trust, and integrate the landscape of research data, thereby fostering responsible data sharing and reuse.
PRVoices: Updates from the Practice Research Intersectional Project
Adam Vials Moore
In this talk we give a rapid overview of the progress of the PRVoices project - a national project with an international scope. It aims to examine immediate interventions in the repository space to support non-text outputs and to build an intersectional community addressing the needs of practice, infrastructure and culture in this area.
You don’t need a PhD to practice good science: the case for studying behaviors of research staff
Rose Franzen
In many large research labs, much of the day-to-day execution of science is done not by PIs but by undergraduate students and employees with associate's or bachelor's degrees, such as research assistants, clinical research coordinators, lab managers, and lab technicians. Despite this, the majority of research on open science practices to date has focused on Principal Investigators and others who hold advanced degrees. While research staff have little if any authority over high-level decision making regarding the research of the lab, they often make dozens of micro-decisions throughout their daily workflow that can have real implications for the final integrity of the data and the replicability of the study. To truly move the needle on the quality of scientific research, the metascience field must ensure that an outsized emphasis is not placed on studying PIs, postdocs, and individuals in the PhD pipeline. By gathering empirical information on the day-to-day operations of research, we can begin to develop interventions and educational resources intended for research staff, just as the community has begun to do for PIs and other, more academically decorated researchers.
What Professors Have Against Reproducibility
Adrienne Mueller
Effecting a culture shift is an uphill battle. But your professoriate is not a legion of amoral megalomaniacs; it is composed of intelligent, overworked specialists - some of whom are jaded, and some of whom are idealists. This is a case study of the resistance one educator faced in her attempt to teach reproducible methods and open science at a Tier 1 research institution. It will unpack specific reservations that faculty have with respect to training in reproducible and open methods, and discuss ways of addressing those reservations. The audience will come away with anecdotal, but likely representative, insights into the perspectives that may be inhibiting broad adoption of open science methods in academia.
Open Discussion
ZOOM LINK
Panel Discussion: Assessing Researchers
Assessment of researchers rarely includes considerations related to trustworthiness, rigor, and transparency. Furthermore, the COVID-19 pandemic has widened the pre-existing gap between men and women in prevailing, publication-based measures of productivity used to determine academic career progression. In this panel discussion, we will hear about several strategies that academic institutions, funders, and journals can take to address these issues.
ZOOM LINK
Senior scientist, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Director of...
A post-doctoral fellow with the Knowledge Translation Program at Unity Health Toronto,...
Digital Science Professor of Research Policy at the University of Sheffield and Director of the...
Lightning Talks 6
Preregistering qualitative research
Tamarinde Haven
Preregistration of quantitative studies was developed to enhance the validity and credibility of quantitative research results. We reviewed whether preregistration could also lend itself to boosting the credibility of qualitative research. Different authors have made suggestions for what to include in a qualitative preregistration form. The goal of our study was to understand which parts of preregistration templates qualitative researchers would find informative. We conducted an online Delphi study with qualitative researchers and methodologists; forty-eight researchers participated (response rate: 16%). The result is an agreement-based form for the preregistration of qualitative studies that consists of 13 items. The form is now available as a registration option on the Open Science Framework (osf.io). We believe it is important to ensure that the strength of qualitative research, which is its flexibility to adapt, adjust and respond, is not lost in preregistration: the preregistration should provide a systematic starting point. Coauthors - Kristian Skrede Gleditsch, Leonie van Grootel, Alan M. Jacobs, Florian G. Kern, Rafael Piñeiro, Fernando Rosenblatt and Lidwine B. Mokkink.
A peer review intervention to improve transparent reporting
Robert Thibault
Preregistration aims to increase the trustworthiness of research, in part by clearly demarcating exploratory and confirmatory design choices and analytic decisions. Clinical trial research uses a comparable procedure that several organizations mandate - prospective registration. In practice, departures from registered study plans often go undisclosed in publications. We conducted a systematic review and meta-analyses finding that 10-68% (95% prediction interval, I² = 86%) of studies have at least one discrepant primary outcome between their registration and associated publication, and 13-95% (95% prediction interval, I² = 90%) have at least one discrepant secondary outcome. We then ran a feasibility study to test the implementation of a peer review intervention we call ‘discrepancy review’. For this intervention, journal editors invited a member of our team to peer review submitted manuscripts specifically for discrepancies. Whereas we found this intervention feasible for clinical trial registrations, typical registrations on the Open Science Framework were too imprecise to effectively implement discrepancy review. If discrepancy review is successful in a follow-up trial, it will present one feasible solution for improving the trustworthiness of published research.
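For readers less familiar with the statistics quoted above: a 95% prediction interval in a random-effects meta-analysis gives the range in which the rate in a new, similar study is expected to fall, and I² is the proportion of observed variability attributable to between-study heterogeneity. A conventional formulation of the prediction interval (the standard textbook formula, not necessarily the exact computation used in this work) is:

\[ \hat{\mu} \pm t^{0.975}_{k-2} \sqrt{\hat{\tau}^2 + \widehat{\mathrm{SE}}(\hat{\mu})^2} \]

where \(\hat{\mu}\) is the pooled estimate, \(\hat{\tau}^2\) the estimated between-study variance, \(k\) the number of studies, and \(t^{0.975}_{k-2}\) the 97.5th percentile of a t-distribution with k - 2 degrees of freedom.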
Stewardship and curation of computationally reproducible research
Limor Peer
With increased reliance on computation in every aspect of science and inquiry, there are more opportunities for close examination of the scientific record. Such examination depends on research practices as well as on the practices of those who are entrusted with keeping, logging, and stewarding the artifacts that comprise the scientific record. Data professionals, including curators, can help ensure the transparency and reproducibility of these artifacts, and shore up the public’s trust in science. In this lightning talk, we will explain why and how curation and stewardship responsibilities can concretely enhance the quality of scientific artifacts. We stipulate that, first, curation includes activities that verify (to the extent possible) that statistical and analytic claims about given data can be reproduced. Second, the object of curation is the entirety of the data collection and analysis process, as well as its component parts (e.g., data, code, documentation). Third, the goal of curation and stewardship is to ensure that the quality of the objects meets community standards for FAIR and archival preservation. To illustrate, we focus on the curation of social science research. Through a series of examples from social science research, we highlight the skills and the actions data professionals bring to bear. We will also discuss the benefits reaped as a result of these actions – for science and society, as well as for the researchers and curators.
How do science journalists evaluate research?
Julia Bottesini
Science journalists play an important role in science communication, choosing what scientific findings to report on. How do journalists make those choices? How do journalists evaluate scientific research, and to what extent do they consider the quality of the research? Media attention is an important reward shaping scientists’ incentive structure. It can affect hiring, promotion, and funding prospects, so it is critical that we understand which research practices are being rewarded. In this work, we examined 181 science journalists' ratings of 22 fictional social and personality psychology study vignettes, written to resemble press releases. By randomly varying 4 characteristics of these vignettes -- the prestige of the university the study was conducted at, the sample size of the study, whether the sample was nationally representative or not, and how extreme the resulting test statistic and p-value are -- we can capture the effects of these characteristics on journalists' evaluations of the trustworthiness and newsworthiness of a given finding. Sample size was the most important factor among the 4 we varied, while our exploratory analyses suggest experimental vs. correlational design and journal prestige may be important predictors of trust for journalists.
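To make the factorial design concrete, here is a minimal sketch of how such vignette characteristics could be randomised. The factor names and levels below are illustrative paraphrases of the four characteristics described above, not the study's actual materials or code.

import random

# Illustrative factors paraphrasing the four vignette characteristics above;
# the labels are hypothetical, not taken from the study materials.
FACTORS = {
    "university_prestige": ["high", "low"],
    "sample_size": ["small", "large"],
    "sample_type": ["nationally representative", "convenience"],
    "result_strength": ["p just under .05", "p well under .05"],
}

def randomise_vignette(rng):
    # Draw one level per factor, independently and uniformly at random.
    return {factor: rng.choice(levels) for factor, levels in FACTORS.items()}

rng = random.Random(2021)  # seeded so the assignment is reproducible
for _ in range(3):
    print(randomise_vignette(rng))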
Lightning Talks Continued below...
ZOOM LINK
Lightning Talks 6 (continued)
The Open Science discourse as a means for the advancement and communication of science
Cristian Constantin
This research aims to complement our understanding of Open Science and its contribution to the communication of scientific research by means of an analysis of relevant Web of Science journal articles published up to the end of 2020.
Relying on both qualitative and quantitative methods, it addresses the content of this communication. On the one hand, the analysis identifies the general research areas and specific disciplines Open Science has been associated with over the years. On the other hand, the conceptual evolution and the different core elements of the Open Science framework led us to a more granular view of which specific Open Science traits were adopted by the aforementioned research areas and disciplines. The worldwide geographic dynamic of this scholarly dialogue is presented from a decade-long perspective, mapping the core clusters of knowledge creation, the main topics of the exchange, and the Open Science practices employed in the process.
Open Discussion
ZOOM LINK
MiniNotes: What does damage to scientific progress look like?
The focus of the session is to explore three case studies of how entrenched and unquestioned practices and interpretations can have costly and far-reaching impacts across the sciences.
Gideon Meyerowitz-Katz: How meta-analyses failed: the tale of ivermectin and COVID-19 - Meta-analyses have unquestionable benefit for the translation of disparate findings into clinically-actionable results. However, they rely intrinsically on trust - all of our current techniques for meta-analysis assume that researchers are honestly representing their findings. This talk will discuss the story of COVID-19 and ivermectin, and how much of this failure of science could have been prevented by casting a skeptical eye on the research included in published meta-analyses.
Kirsten Parris: The toe-clipping wars revisited - Individual marking of frogs using toe clipping has been a common practice in zoology since the 1950s. In a series of papers published in the early 2000s, Michael McCarthy and I demonstrated that toe clipping reduces the return rate of marked animals, either by changing their behaviour or increasing their risk of mortality. Either way, this violates a key assumption of mark-recapture models, leading to erroneous estimates of population size. In this presentation, I will consider the current status of the debate surrounding the continued use of toe clipping, especially in field studies of endangered frog species, and what progress has been made.
Timothy Clark: The importance of research integrity and transparency in combatting the reproducibility crisis - It has been argued that the majority of scientific findings cannot be replicated by independent scientists, the so-called ‘reproducibility crisis’. I will discuss the roles of research integrity and transparency in shaping and combatting the reproducibility crisis, with reference to some of my own work in the field of animal biology.
ZOOM LINK
Australian Research Council Future Fellow and an Associate Professor at Deakin University,...
A Professor of Urban Ecology in the School of Ecosystem and Forest Sciences at The University of...
An epidemiologist working in chronic disease with a particular focus on the social...
Hackathon: Designing the Journal of Metaresearch
Jason Chin
AIMOS is starting a journal! We (the journal committee: Jason Chin, Alex Holcombe, Simine Vazire) would like it to set the standard for rigorous and open practices, and to be diamond open access. But that's easier said than done! Come help us map out this journal's future.
ZOOM LINK
Hackathon: arXiv.wiki Edit-a-thon
Kunal Marwaha
How do you get the main idea of a paper?
Although abstracts are intended to summarize one's work, researchers have incentives to use technical jargon and formal language to impress the experts within their field. We propose an alternative open-source tool, called the arXiv wiki (https://arxiv.wiki/). This is a place to store summaries of, and links related to, arXiv preprints. Anything that could accompany an arXiv paper belongs here.
We propose a hackathon to add to the arXiv wiki. A contribution could be as simple as a two-line research summary of a paper, or a link to a talk about a particular paper. Editors do not need prior experience with wikis. After the hackathon part, we will have a short debrief about ways to improve the arXiv wiki and other ways to accelerate open science communication among researchers.
ZOOM LINK
Group Disc: Studying researchers as learners: What factors impact educational engagement throughout a scientific career?
Rose Hartman & Rose Franzen
"Because open science and reproducibility represent a recent paradigm shift, the associated behaviors differ from what many researchers were taught during their formal training. As a result, the spread of open science practices necessarily relies upon researchers engaging in continued education. Many researchers have already taken it upon themselves to self-educate on these important topics, and as incentives shift slowly to place more value on reproducible methods, more will continue to do so. Despite this, our understanding of researchers as students is decidedly limited. Of the few studies that exist characterizing researchers and scientists in their capacity as learners, most focus exclusively on those still in graduate school or med school. Widespread culture change in the sciences will require researchers at every career level --- not just students --- to dedicate precious time to studying new methods and tools. Understanding the needs, values, constraints, and motivations of this unique body of learners will be critical to designing successful educational interventions for them.
We propose a discussion session around measurement of constructs relevant to researchers as learners. Which factors impact motivation to continue study? Which characteristics can be used to build productive communities of practice, helping to sustain long term commitment? What do we need to know about a researcher to match them to appropriate and effective educational content to move them on their path toward open science?"
ZOOM LINK
Lightning Talks 7
Analysing Published Limitations in Social and Personality Psychology
Beth Clarke
Proper acknowledgement of research limitations is critical for scientific credibility and in informing scientists’ future endeavours. Our study focused on the field of social and personality psychology, exploring how authors' claims about the limitations of their studies have changed throughout psychology’s replication crisis. We analysed the limitations sections of 221 empirical articles published in the journal Social Psychological and Personality Science between 2010 and 2020. To interpret these, we drew on Cook and Campbell’s (1979) four validities framework, coding limitations as threats to: construct, internal, external and/or statistical conclusion validity. We also explored limitations concerning 12 subcategories of interest (e.g., small sample size concerns within statistical conclusion validity). From these 221 articles, we extracted 573 limitations (avg. 2.6 limitations per article). Limitations concerning external validity were most common (70% of articles) and limitations concerning statistical conclusion validity were least common (33% of articles). Authors reported more limitations over time (avg. increase of 0.84 limitations per decade, p = .004). Therefore, the replication crisis may have prompted increased attention to research limitations. However, this increased attention did not translate to increased discussion regarding any of the four validities individually. Our state-of-science review offers an empirical foundation for discussion regarding the pragmatic steps researchers can take to improve the field.
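As an aside for readers who want to see the shape of such a trend analysis, here is a minimal sketch, with made-up numbers, of estimating a per-decade increase from article-level limitation counts; the authors' actual model may well differ.

from scipy.stats import linregress

# Hypothetical article-level data: publication year and number of coded limitations.
years = [2010, 2011, 2013, 2014, 2016, 2017, 2019, 2020]
n_limitations = [2, 2, 3, 2, 3, 3, 4, 3]

fit = linregress(years, n_limitations)
# The slope is per year; multiplying by 10 expresses it per decade.
print(f"increase per decade: {fit.slope * 10:.2f} limitations, p = {fit.pvalue:.3f}")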
Do errors associate? An analysis of two types of statistical reporting errors in psychology.
Peter Gillam
Nuijten et al. (2016) launched Statcheck (an automated method for checking reported p-values) and found that over 50% of published psychology papers report at least one p-value that is inconsistent with its test statistic and degrees of freedom. These results were supported by Zhang, Thompson, Singleton Thorn, and Saw (2021), who found a similar rate of p-value inconsistency in a larger set of published psychology papers. Brown and Heathers (2017) developed and introduced the Granularity-Related Inconsistency of Means (GRIM) test and found that 50% of the published psychology papers they checked contained at least one inconsistency between a reported mean and its sample size. In the present study, the data set from Zhang et al. was subjected to the GRIM test to assess the prevalence of inconsistent means and to determine the nature of the relationship between the two types of inconsistency. Results provide independent support for the findings of Zhang et al. (2021) and Brown and Heathers (2017), but only a very weak association between these two types of inconsistency. A brief review of the major implications will conclude the talk.
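For readers unfamiliar with the two checks, here is a minimal sketch (not the Statcheck or GRIM authors' code) of the underlying logic: recomputing a two-tailed p-value from a reported t statistic and its degrees of freedom, and testing whether a reported mean is granularity-consistent with its sample size.

from scipy import stats

def p_from_t(t_value, df):
    # Two-tailed p-value implied by a t statistic and its degrees of freedom.
    return 2 * stats.t.sf(abs(t_value), df)

def statcheck_consistent(t_value, df, reported_p, tol=0.0005):
    # Flag reported p-values that do not match their test statistic;
    # tol allows for rounding (real Statcheck also handles '<'/'>' reports).
    return abs(p_from_t(t_value, df) - reported_p) <= tol

def grim_consistent(reported_mean, n):
    # GRIM: a mean of n integer responses must equal k/n for some integer k.
    # Check whether any nearby integer total rounds to the reported mean.
    decimals = len(reported_mean.split(".")[1]) if "." in reported_mean else 0
    k0 = round(float(reported_mean) * n)
    return any(f"{k / n:.{decimals}f}" == reported_mean
               for k in (k0 - 1, k0, k0 + 1))

print(statcheck_consistent(2.20, 28, 0.036))  # True: recomputed p is ~0.0364
print(statcheck_consistent(2.20, 28, 0.046))  # False: off by more than rounding
print(grim_consistent("3.48", 25))            # True: 3.48 = 87/25
print(grim_consistent("3.47", 25))            # False: no integer total gives 3.47 with n = 25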
Methodological integrity and the search for epistemic equity
Iza Munir Hamdani
Much of the discourse which makes up the field of academia is subject to inaccuracies, in that it tends to be crafted from a place of unknowing privilege. The natural and social sciences are peppered with deficit narratives which claim to understand the nuances of particular lived experiences when, in actuality, they do not consider the voices of the people whose experiences these actually are.
Cisgender men speak for women and gender-diverse people; White people speak on behalf of Black, Indigenous, and people of colour. There are plenty more examples, but all boil down to the fact that we can only achieve methodological integrity and validity when we strive for epistemic equity. Indigenous people, women of colour, queer folks (in essence, all groups with identity markers which shape their day-to-day lives) must be given the space to pave the way for research - and not for that which the ‘Other’ may feel is the scientific priority, but for what one knows for a fact is the priority of their own community and livelihood.
It is a vital task for scholars to examine their own condition of being, recognising that each experience is entwined with unique social, political and economic challenges which different members of society either face, or will eventually be liable to face. Giving a voice and a platform to all is the only way we can acquire knowledge and validity in our quest for true and open scientific practice.
Open Discussion
ZOOM LINK
Panel Discussion: ‘What counts as success?’ The case of evaluating open science badges
Hilda Bastian, Brian Nosek and Anisa Rowhani-Farid
Metascience seeks not only to critique science but also to advance ideas to improve or reform science. How can initiatives for scientific reform be assessed? How is "success" or "failure" at being an effective intervention decided, and on what criteria? How should these decisions be made, and who should be making them? This panel aims to address these questions through the specific example of Open Science Badges, an initiative of the Center for Open Science, looking at the evidence and arguments for and against their effectiveness.
ZOOM LINK
Co-Founder and Executive Director of the Center for Open Science (http://cos.io/) that operates...
A Postdoctoral Fellow at the Restoring Invisible and Abandoned Trials initiative at the...
Hilda Bastian was a long-time consumer advocate in Australia, whose career turned to studying...
Hackathon: Developing a taxonomy of interventions to improve the quality and reproducibility of research
Paul Glasziou
Over the past decade the problems of research waste and poor reproducibility have been extensively documented. A 2014 series in the Lancet demonstrated that approximately 85% of research goes to waste through the combination of poor design, non-publication, and poor reporting. Several studies in both the psychological and biological sciences have illustrated the poor reproducibility of many studies. Poor research reproducibility is partly traceable to design flaws and to incomplete or poor documentation of research processes. The flow-on effects of wasted research impact research end-users and ultimately have systemic impacts on industries that utilize research to facilitate practice. Many of these problems are avoidable and might be reduced with sustained interventions at the research systems level.
We are planning a review of interventions used at the institutional level (university or research institute) which might improve quality and reproducibility, including training, mentoring, incentives, tools, assistance, infrastructure, or combinations of these. To enable the search for, and analysis of, such studies we need to develop a taxonomy of the types of interventions. This hackathon aims to help build such a taxonomy.
ZOOM LINK
Keynote Address: Tackling the Elephant in the Room: Better Science Needs Better Causal Inference
Causal inference lies at the heart of scientific inference, and yet in certain fields, scientists are apt to avoid any discussion of causality, in particular if no randomized experiment has been conducted. In this talk, I am going to argue that this leads to suboptimal research practices and huge amounts of wasted research efforts. A simple intervention -- demanding that every journal article spells out the theoretical estimand of interest, as well as the assumptions under which the theoretical estimand corresponds to the statistical estimand of the study -- could not only reduce the number of conceptually confused studies in the literature, but also encourage researchers to improve their understanding of scientific inference.
ZOOM LINK
A personality psychologist whose substantive research covers a broad range of topics including...
Workshop: Measuring Ideology: Current Practices, Consequences, and Recommendations
Flavio Azevedo
Political ideologies are foundational to a broad range of social science fields such as political science, and social and political psychology. Ideologies aid individuals in navigating the complex socio-political world by offering an organization of values, justifying social arrangements, and describing power relations. In research practice, scholars use diverse and wide-ranging approaches to measuring an individual's political ideology. We sought to investigate standard practices by conducting an exhaustive literature review of 400 scientific articles, spanning from the 1930s to the 2020s, across social science sub-fields. We identified and coded 207 works, finding more than 150 unique instruments for measuring ideology; that is, >75% of all catalogued measures were unique. We performed a content analysis to measure overlap between the found measures - an indication of content validity across studies - obtaining very poor similarity indices, suggesting that the topics measured vary widely from one instrument to another. The little to no overlap in approaches to the measurement of ideology suggests that scholars may not be justified in generalizing findings across studies. Lastly, we applied five different ideological scales to different established theories in the field and showed empirically that results can indeed change as a function of the instrument used. In this workshop, we will introduce this research in detail in the first 45 minutes and then brainstorm about possible solutions to this problem across other topics and fields.
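As an illustration of what a pairwise similarity index over instrument content might look like (the abstract does not specify which index was used, and the instrument names and topic codings below are hypothetical), one common choice is the Jaccard index:

from itertools import combinations

def jaccard(a, b):
    # Jaccard index: 0 for disjoint topic sets, 1 for identical ones.
    return len(a & b) / len(a | b)

# Hypothetical topic codings for three ideology instruments.
instruments = {
    "scale_A": {"economic policy", "tradition", "authority"},
    "scale_B": {"economic policy", "redistribution"},
    "scale_C": {"national identity", "tradition"},
}
for (name1, s1), (name2, s2) in combinations(instruments.items(), 2):
    print(name1, name2, round(jaccard(s1, s2), 2))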
ZOOM LINK
Workshop: Mini ReproHack
Anna Krystalli
ReproHacks are workshops providing a sandbox environment for practicing reproducible research. Prior to the session, we will invite authors to submit papers with associated code and data for review. During the session, participants will attempt to reproduce submitted papers of their choice and feed back their experiences to authors by completing a review form. Authors get friendly feedback on the reproducibility of their work, while participants get practical experience in reproducibility through working with other people’s materials.
Normally, ReproHacks are full-day events, but in this session we are interested in exploring how successful participants can be when condensing the activity to a speedy 3 hours!
We will be using the ReproHack Hub (reprohack.org) to manage paper and review submissions. We will also spend time sharing and learning from the experiences of participants in the session.
ZOOM LINK
Hackathon: Composing the Flow
Adam Vials Moore
Working on intersectional metadata: we will identify how complex, well-curated narrative descriptions in custom-built repositories are exposed via standards and then mapped to global registries. We will look at the richness of the base metadata and how it is transformed and transferred to distributed schemas and taxonomies - identifying issues and challenges in workflows, standards and registry targets.
ZOOM LINK
Hackathon: Stresstesting ResearchEquals.com
Chris Hartgerink
On February 1st, we will launch ResearchEquals.com (previously known as Hypergraph), a platform for publishing research modules.
Instead of publishing entire projects, you publish the steps of your research, documenting the order of events as you go. Collected some data? Publish! Wrote a script to analyse these data? Publish!
In this hackathon, you get a sneak peek of the platform. We provide a short 5-10 minute introduction to publishing research modules and how to report issues with the platform, after which you try to break as many things as you can.
Your contributions will be openly recognized and credited. Your participation will also introduce you to the way we aim to further develop the platform after launch - with you as researchers.
Required: A GitHub account (free).
ZOOM LINK
Workshop: Walking the non-textual landscape
Adam Vials Moore
In this session we will examine the challenges facing scholarly communities whose main method of communication is not the written article but some other form of output.
How do we ensure that the rich metadata, functionality and narratives built into institutional and other tailored platforms can be exposed and transmitted to the wider academy and global registries - enabling discovery, access and inquiry - without loss of detail, without the work being classified as “OTHER”, and without downstream degradation of the richness of the data first created? What are the challenges to the embedded cultures, and how do we go about reforming and reframing them? We will work together, identifying lived experience and collecting stories. We will identify who the stakeholders in these processes are, and how we might best leverage the collective to initiate change. Where further investigations or developments are needed, these will be recorded and reported following the session.
ZOOM LINK
Lightning Talks 8
Tracking the influence of problematic preclinical research papers cited by literature reviews
Yasunori Park
Objective: Preclinical research papers with wrongly identified nucleotide sequence reagents or other features of concern could describe incorrect and misleading information. To determine whether literature reviews cite problematic preclinical research papers, we analysed papers cited by 12 literature reviews that examined the roles of human genes in cancer.
Method: We analysed 12 reviews published from 2013-2019 that described gene functions in cancer. References in each review were checked for wrongly identified nucleotide sequence reagents and for whether concerns had been raised on PubPeer. Problematic cited papers were examined for their citation context(s) in each review. Papers that had cited each review according to Google Scholar were examined to identify citations based on information within problematic papers.
Results: The 12 reviews cited 1,610 references, including 90 (6%) problematic papers. Between 1 and 13 claims per review (78 claims in total) were supported by problematic papers. The 1,837 citations of the 12 reviews included 3 citations that reflected claims from 3 problematic papers. The 90 problematic papers that were cited by reviews have themselves been cited 15,477 times.
Conclusion: We demonstrate that literature reviews cite problematic papers and that review citations can reflect information described in problematic papers. Future analyses will examine how problematic papers cited by literature reviews have influenced other research efforts.
Animalstudyregistry.org - Speeding up scientific progress with preregistration of animal research
Celine Heinl
Biomedical science shows only poor translation into the clinic. Metaresearch has already identified a variety of underlying causes, including selective reporting and questionable research practices, but the suggested measures that would tackle the problem are still not broadly adopted. This raises ethical concerns, as patients are put at risk in trials built on shaky foundations and animal lives are wasted when conducted experiments are not published.
With the launch of the preregistration platform animalstudyregistry.org, we address these ethical issues and aim to speed up scientific progress at the same time. With our guided preregistration template, specific to animal research, we support scientists all over the world in thoroughly planning their studies. We enable them to claim the intellectual property of their idea without making it immediately public, and to prove their commitment to open science practices to reviewers, funders and academic boards.
Preregistration is a proven tool to increase research quality and reporting in different research fields; in biomedical research, it is still widely unknown. With this talk, we would like to raise awareness of the ethical challenge biomedical research is facing and of the possibility of addressing current problems through preregistration in a specialized online platform, animalstudyregistry.org. This could be interesting for funders, journals and institutions. Animalstudyregistry.org also represents a future research opportunity for meta-scientists to evaluate the effectiveness of preregistration in animal research.
Integrated approach to drive responsible research practices at an institutional level
Delwen Franzen
Academic institutions can foster responsible research practices (RRPs) by providing incentives, education, and infrastructure; to do so, they rely on information about their performance and opportunities for improvement. We developed an integrated approach to evaluate, communicate, and drive RRPs at University Medical Centers (UMCs) in Germany. First, we conducted a policy review to assess whether UMC policies mentioned or incentivized RRPs. Second, we performed a status quo analysis of a broad set of RRPs at UMCs. Third, we developed a dashboard to communicate these baseline assessments. The dashboard shows UMCs their performance relative to international guidelines and can inform the development and evaluation of interventions to drive RRPs. Fourth, we solicited international stakeholder feedback on our dashboard approach and on metrics related to RRPs. Besides Open Access, current institutional policies place little focus on incentivizing RRPs. While some RRPs have increased over time, there is room for improvement. Stakeholders considered the dashboard approach helpful but missed a narrative explaining the choice of RRPs. Based on this feedback, we narrowed our focus to clinical research transparency as an actionable area for improvement with established RRPs. To improve clinical research transparency, we developed trial-specific report cards and guidance, and are collaborating with core facilities to conduct and evaluate an intervention at the level of trialists. In this lightning talk, we will present to the AIMOS community our integrated approach to supporting institutions in fostering RRPs. Co-authors: Maia Salholz-Hillel, Tamarinde Haven, Martin Holst, Benjamin Gregory Carlisle, and Daniel Strech.
Lightning Talks Continued below...
ZOOM LINK
Lightning Talks 8 (continued)
Clinical Trial Registration Changes: Assessing ‘Retroactively Prospective’ Registration
Martin Holst
Prospective registration of clinical trials is required to safeguard the integrity of human research and help prevent certain kinds of bias or fraud. The ClinicalTrials.gov registry allows entries to be edited at any time, creating a trail of historical versions. While changes to a trial protocol are normal, this feature might be used to mask important changes that were made after a study had already started. In this project, we assessed the frequency of ‘retroactively prospective’ trial registration, i.e., trials that are registered retrospectively at study start, but whose start date is subsequently changed so that by 5 years post-registration, the start date lies after the first registration date (prospective registration).
We analyzed all ClinicalTrials.gov entries for Completed or Terminated interventional clinical trials first registered in 2015. Historical trial registry entries were downloaded using a web scraper. We found 249 of 11,910 registered trials to be ‘retroactively prospective’ (2.1%). The majority of these (169, 67.9%) changed their start date to be prospective after the clinical trial completed.
To our knowledge, this is the first investigation into ‘retroactively prospective’ clinical trial registration. Our method provides journal editors and peer-reviewers with an automated tool to easily uncover potentially questionable reporting practices.
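To make the detection rule concrete, here is a minimal sketch of the comparison described above. It assumes the historical registry snapshots have already been downloaded and parsed into dicts with date fields; the field names are illustrative, not ClinicalTrials.gov's actual schema.

from datetime import date

def is_retroactively_prospective(versions):
    # versions: chronologically ordered snapshots of one registry entry,
    # restricted to the first 5 years post-registration; 'registered' is
    # the first registration date, identical across snapshots.
    registered = versions[0]["registered"]
    first_start = versions[0]["start_date"]
    last_start = versions[-1]["start_date"]
    # Retrospective at first registration (start date before registration),
    # later edited so that the trial appears prospectively registered.
    return first_start < registered <= last_start

snapshots = [
    {"registered": date(2015, 3, 1), "start_date": date(2015, 1, 15)},
    {"registered": date(2015, 3, 1), "start_date": date(2015, 3, 10)},
]
print(is_retroactively_prospective(snapshots))  # True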
Open discussion
ZOOM LINK
MiniNotes: Transforming Evidence Synthesis
Researchers, policy makers and other decision makers seeking to make evidence-informed decisions face an information overload given the deluge of research published each year. Evidence synthesis (e.g. systematic review and meta-analysis) is designed to address this overload. In this session, we will hear about advances in evidence synthesis methods being implemented in different disciplines.
Living evidence synthesis: practice, pitfalls and promise
Tari Turner
Beyond mean differences: comparing stability, variability or inter-personal differences between two groups
Shinichi Nakagawa
Creating Reproducible and Reusable Evidence Synthesis Data
Joshua Polanin
ZOOM LINK
Professor of Evolutionary Biology and Synthesis at Evolution & Ecology Research Centre,...
Principal researcher in the Research & Evaluation program at American Institutes for...
Director Evidence and Methods, National COVID-19 Clinical Evidence Taskforce, leading...
Group Discussion: Trust Me, Maybe?: Consequences of Teaching about Meta-research in the Classroom
Cassie Whitt
For educators, especially those in the social sciences, teaching about meta-research and open science means discussing issues surrounding the replication crisis (e.g., lack of reproducibility, the prevalence of questionable research practices, etc.). Given the breadth of coverage the replication crisis has received in both lay and academic circles, it seems epistemically irresponsible to disregard these issues when it comes to our pedagogy. However, much is still unknown about the consequences of broaching such topics with students. One concern is that presenting these unsavory realities could increase uncertainty and distrust in science. In the proposed discussion group, we wish to deliberate the benefits and disadvantages of including metascience work in our curriculum. We hope that the discussion will lead to concrete suggestions for efficient ways to present these topics in the classroom so that we can train our students to become critical consumers of research, without eroding their ability to find merit in empirical endeavors. Thus, our central question of interest is this: How – and should – we be teaching meta-research and open science?
ZOOM LINK
Group Discussion: Modelling as Ways of Knowing
Eden Smith
Scientific models are often positioned as representations of a specific target system. However, within the philosophy of scientific practices, studies have demonstrated that modelling is also an activity that contributes to our ways of generating knowledge. In this context, modelling is understood as an ‘epistemic activity’ - an activity of building and using models as tools that contribute to the production and evaluation of research claims.
Can viewing modelling as an epistemic activity help highlight aspects of modelling practices that need to be considered within metaresearch studies of scientific practices?
The session will include:
- A short intro to the philosophy of modelling as an epistemic activity
- A short example of how an account of modelling as an epistemic activity can be useful in Ecology
- An open discussion on ways that this example highlights aspects of modelling that need closer attention within metaresearch studies of scientific practices
ZOOM LINK
Workshop: The ROB-ME tool: a new approach for assessing risk of reporting biases in evidence syntheses
Matt Page
Background: The credibility of evidence syntheses can be compromised by reporting biases. These include ‘publication bias’, when the probability that a study is published depends on the P value, magnitude or direction of the results, and ‘selective reporting bias’, when these factors influence whether a published study’s results are reported completely. Purpose/audience: Introduce systematic reviewers to ROB-ME (Risk Of Bias due to Missing Evidence), a new tool for assessing the risk of reporting biases in evidence syntheses, and provide participants with the opportunity to apply ROB-ME. Format: The workshop will be split into two parts.
1. Introduction to ROB-ME: I will provide a brief overview of the ROB-ME tool. Its key components include: selecting and defining which syntheses will be assessed for risk of bias due to missing evidence; determining which studies included in the systematic review have missing results; considering the potential for missing studies across the review; and answering signalling questions to inform risk of bias judgements (e.g. How many studies are missing from the synthesis because their results were selectively omitted? Do graphical methods suggest the synthesis is missing results that were systematically different to those observed?).
2. Application of ROB-ME: Participants will apply ROB-ME to an example systematic review, within small groups. In a plenary session, we will discuss the results of each group’s assessment and issues that arose during the assessment process.
ZOOM LINK
Group Discussion: Responsibilities for receiving and using study data in health research
Kylie Hunter, Anna Lene Seidler , Angela Webster
The Open Science movement is gaining momentum as the importance of research transparency, validation and collaboration is increasingly recognised. A main component of Open Science is data sharing, or Open Data, which describes the sharing of de-identified row-by-row data. While there is strong in-principle support for the concept of data sharing, a number of barriers remain, e.g. a lack of clear and consistent standards and regulations around the appropriate re-use of data, and ethical concerns around re-identification and data misuse. In addition, there is limited existing guidance around consistency and integrity checks for secondary datasets.
The focus of this session will be to determine the responsibilities of recipients of secondary datasets. The session will include structured discussion prompted by key questions across the stages of data receipt, processing, cleaning and analysis. For instance, we are looking for answers to the following questions: What are the practices and principles you follow when using secondary data (if any)? Where do you see the main problems around potential for data misuse? How could they be prevented? What data checks should be performed? Attendees will also be invited to bring their own questions. This session will be of interest to anyone who uses secondary data for their research, e.g. systematic reviewers, clinicians, guideline developers. Post-workshop, we are looking to incorporate the findings of this discussion into a manuscript that attendees will have the opportunity (but are not required) to contribute to.
ZOOM LINK
Keynote Address: The next 10 years
Season three of television's 'The Good Place' presents a wicked problem. Outdated metrics govern the afterlife. Under this broken system, no one has entered The Good Place in 521 years. All are sent to The Bad Place. Aghast at the eternal torture of even the most virtuous humans, our protagonist, Michael, seeks help from The Good Place's administrative committee. Michael is relieved when the committee expresses deep concern and promises to take decisive action. But his relief is short-lived: the committee first plans to form an elite investigative team to better understand the problem. It will take, they proudly declare, no more than 400 years. Wait, says Michael. I was thinking that we could do something now-ish. Like, right now. Oh no, the committee replies. We have rules, procedures. We can't just do stuff. Similarly, meta-researchers are deeply concerned with how researchers are evaluated and rewarded. We can easily spend the next 10 years talking about how deeply concerned we are. This presentation will speculate that there are better ways to spend our time.
ZOOM LINK
A postdoctoral researcher in the Data Management & Analysis Team for the repliCATS...