Addressing Reproducibility in Science
In 2012, Amgen researchers published a paper in Nature discussing the scientific community's high rate of failure in translating its understanding of cancer into effective clinical treatments. Despite the tens of thousands of researcher hours and the hundreds of millions of dollars spent on developing them, the success rate of clinical interventions had proven remarkably low.
In an attempt to identify the cause of these challenges, the researchers took a simple but audacious step: they tried to reproduce, for themselves, many of the findings in basic cancer research. After careful consideration, they chose 53 papers - published in some of the most prestigious scientific journals - that were recognized as "foundational" in the field. The team spent countless hours and millions of dollars to conduct the study - and to share its findings with the world - in the hopes that it would further humanity's efforts to combat one of the most pernicious diseases of the 21st century.
And so, when the last of the reproduction attempts had been completed, the data analyzed and debated, and the results finalized, what did they find? Which of the findings had stood up to scrutiny, and which could not be reconfirmed? In the end, out of 53 papers that had guided hundreds of millions of dollars of funding and countless hours of researcher effort, how many were they able to successfully reproduce?
Science Has a Reproducibility Problem
The scientific method - exploration, hypothesis, controlled experimentation, and sharing of results - is the most powerful methodological advance in human history, and it sits at the root of virtually all the technology that dominates modern life. The culture of modern scientific communication, however, can lead to tremendous misallocation of money, talent, and public attention, and can act as a significant impediment to scientific progress: publish-or-perish incentives in academia, a publication system dominated by a few large publishers who decide what gets published and what doesn't, and a general lack of transparency and accountability to the larger scientific community for political or commercial reasons.
Shifts in both cultural norms and information technology are beginning to address these problems; the advent of open-access journals and scientific social networks is a step in the right direction, as are other attempts - whether journals or web sites - to tackle scientific reproducibility directly. However, we believe they have not yet succeeded in addressing these systemic issues at scale, for a few reasons:
- Critical mass. If you were the second person ever to visit Yelp and you saw one review of one restaurant on the other side of the country, how likely would you be to visit the site again (let alone contribute your own review)? Whether a social network like LinkedIn, a review site like Yelp, or even a print journal like Nature, a platform must reach a critical mass of users, readers, and content before it becomes self-sustaining.
- Incentives. There are many scientists who would like these issues to be addressed, but the reality is that folks are busy and have their own problems to worry about. "Will this help me get a postdoc position? How about a job? Will this increase my standing in the scientific community?" Any platform that seeks to be effective must align both self-interested and nobler motives.
- Political realities. In an ideal world science is an objective, selfless pursuit of the truth, but anyone who has ever gone to graduate school or worked in a large company knows that's not always the way it works. How likely is it that a graduate student would publicly criticize her professor's work, even if her point is valid? A failure to recognize the prominent role that these dynamics play in human behavior will limit any solution's effectiveness.
- Balancing new & old. While the proliferation of open-access journals is a tremendous step in the right direction, the vast majority of scientific knowledge still resides in traditional channels. An ideal solution would allow the publishing of new work while building on the existing base of knowledge.
Our Experiment: SciBase
It is with the above in mind that we're trying to start our own platform for addressing these issues. We call it SciBase. SciBase is meant to be a platform for crowdsourced reviews of scientific papers; what Yelp is to restaurants and Amazon reviews are to products, we want to be for scientific work.
The basic premise is pretty simple: scientists provide reviews of papers they themselves have tried to reproduce. In turn, other scientists rate those papers and reviews, and the best-rated papers, reviews, and scientists rise to the top. For researchers contributing content, this makes it easier to get recognized for work that might otherwise get little attention (if published in a less widely read journal) or none at all (most journals won't publish a reproduction attempt by itself). And - as the database gets built - it can help scientists home in on the best papers or find new papers they might have missed.
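To make the "best-rated rise to the top" idea concrete, here is a minimal sketch in Python. The names, scoring parameters, and use of a Bayesian average are all our illustrative assumptions, not a description of SciBase's actual ranking algorithm:

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    """A reproduction review of a paper, rated 1-5 by other scientists."""
    paper_id: str
    reviewer: str
    ratings: list = field(default_factory=list)

def score(ratings, prior_mean=3.0, prior_weight=5):
    """Bayesian average: pulls sparsely rated reviews toward a neutral prior,
    so a single 5-star rating can't outrank many consistent 4-star ratings."""
    return (prior_mean * prior_weight + sum(ratings)) / (prior_weight + len(ratings))

def rank_reviews(reviews):
    """Return reviews ordered best-first by their adjusted score."""
    return sorted(reviews, key=lambda r: score(r.ratings), reverse=True)
```

With this kind of prior, a review with one lone 5-star rating scores below a review with eight 4-star ratings, which matches the intuition that consistent community judgment should outweigh a single enthusiastic vote.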
You could say that we started SciBase as an experiment. Our hypothesis? That if we could create a platform that facilitated transparency and accountability in science by providing better data while also addressing each of the points above, we could succeed in creating a systemic change in the way science is conducted and communicated. As with any experiment, SciBase could fail. This is a tricky problem that we don't believe anyone has gotten right just yet, and - though we believe someone will get it right eventually - it may not be us.
With that said, what steps are we taking to address what we believe caused previous attempts to fail?
Regarding critical mass, we're starting by focusing only on the general areas of chemistry, materials, molecular biology, and biotechnology and devoting all of our resources to getting as much traction as we can in those areas. We'd rather have 50 reviews all in the area of amine synthetic chemistry than have 25 in psychology, 25 in sociology, 25 in economics, and 25 in physics.
Regarding incentives, we're putting a lot of effort into making contributions to the platform worthwhile. Contribute high-quality content and we'll make sure that your work ranks highly on Google and other search engines, that other scientists are aware of it, and that you're first in line for job openings that employers post on our site. Heck, we'll even try to help get your work published in traditional journals - and we're still brainstorming a few other ideas. If you're an employer or hiring manager, we'll help do your recruiting for you, with much higher quality and often at lower rates than industry norms.
Regarding political realities, we're working hard to balance privacy and accountability. For example, users can post anonymously or under an alias while the quality of their contributions stays tied to their public-facing profile, giving other users enough information to decide how much weight to give someone's review. We also have ways to identify and discourage users who are indiscriminately critical (say, of a 'rival') or praiseworthy (say, of a friend).
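One simple way to spot indiscriminate raters - sketched here under the assumption that each user's ratings are compared against the community consensus on the same papers; the function names and threshold are hypothetical, not SciBase's actual mechanism - is to flag users whose ratings deviate heavily in one direction:

```python
from statistics import mean

def bias_flags(user_ratings, community_means, threshold=1.5):
    """Flag users whose ratings systematically diverge from the community
    consensus on the same items, in one direction.

    user_ratings:    {user: {item: rating}}
    community_means: {item: mean rating across all users}
    Returns {user: average signed deviation} for flagged users.
    """
    flagged = {}
    for user, ratings in user_ratings.items():
        deviations = [r - community_means[item] for item, r in ratings.items()]
        if deviations and abs(mean(deviations)) > threshold:
            flagged[user] = round(mean(deviations), 2)
    return flagged
```

A user who consistently rates a rival's well-regarded papers far below the community mean (or a friend's far above it) produces a large one-sided deviation and gets flagged, while a user whose ratings scatter around the consensus does not.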
Finally, regarding balancing new and old, while we'll soon provide the ability for scientists to publish completely new work, we're actually first focused on allowing scientists to find and review previously published work, since that's still where the vast majority of scientific knowledge resides.
How You Can Help
If you read everything above, you probably have one of a few reactions. Maybe you're skeptical. Maybe you're excited. The only thing we know for sure is that - if you've read this far - you're at least interested. And we thank you for that, because we need as many people as possible interested in this topic. But we need more than people interested; we need them engaged. As mentioned earlier, the key to making this work is quickly getting to a critical mass of users and reviews. If we do, this will work. If not, it won't.
To do this, we 100% absolutely need your help. So we're asking: please help us if you can. Here's how:
If you're hiring PhDs - entry-level through executives - contact us. We'll source, screen, interview, and recommend candidates for the position...and for any openings posted before May 31, we'll do any entry-level positions for free.
If you're looking for a job as a PhD, we'll help you find one. Complete a profile and contribute a few reviews of papers you've previously tried to reproduce - your papers and reviews will be rated by our community of scientists and the top-rated folks will be first in line for job opportunities.
If you believe in encouraging transparency and accountability in science, we'd ask you to do the following:
Share this page with your friends in chemistry, materials, mol bio, biotech, and related fields.
If you have any interest in keeping up to date on our experiment, subscribe to our blog at the bottom of this page. We have no interest in bothering you and will make it very easy for you to unsubscribe should you decide to do so in the future.
Take a few minutes to register on the site and contribute a review or two. Yes, this will require work and effort on your part and many of you probably won't contribute. We don't fault you for that. However, if you are someone who strongly believes in what we're doing, the single best way you can show it is by contributing.
Provide feedback. Any feedback: praise, criticism, concerns, features, ideas for incentivizing people to contribute, we want it all. You are exactly the folks we're looking to help with this platform, so any feedback you provide will be taken very seriously. (See the blue tab on the left side of the browser window? Use that.)
We feel strongly that improving the transparency and accountability of science can have significant benefits for both science and society. SciBase is our experiment in search of a solution.
We don't know if it's going to work, but we think it's worth trying.
Please help if you can.
The SciBase Team