What Academia can learn from Open Source

I'm an academic. I love doing research and writing papers. What I don't love is playing the publishing game and wasting my time micro-managing all the bureaucratic aspects of academia. I also love open-source software, and while the FOSS community is far from perfect, there are some ideas I think Academia could borrow that would make it more inclusive for everyone and more useful for society.



⚠️ This is a rant about some things I think are wrong in Academia and some ideas about how to improve the situation. I mostly focus on Artificial Intelligence because that's my field, but I think most of these ideas apply everywhere. I'm not trying to discredit or criticize any individual or organization, but rather to raise some questions that I think all of us scientists, as a community, should attend to. I declare myself guilty of all the sins I describe.



🗞️ The setup

If you've ever tried Academia, you have surely been in this situation. You come up with a good idea, do some experiments, write a paper about it and... that's when the real work starts. Whether you send that paper to a conference or a journal, you'll get 2 to 5 reviewers who critique your paper, ask you for improvements, and decide if your work is good enough for publication. If it's a conference, you'll usually either get accepted or rejected, but if it's a journal, you might get a second chance to improve and resubmit. This process is called peer review, and it's one of the fundamental pillars of Science.


Don't get me wrong, peer review is extremely important. You see, Science is a social process. Yes, you can follow the scientific method and come up with a Frankenstein monster all by yourself on a private island, and you would be doing science (without the capital "s"). It is only when those results are scrutinized, retested, and confirmed by other researchers that they become part of the continuous and incremental body of accumulated knowledge that we call Science.


Peer review is a fundamental part of this process because it ensures that you are not deluding yourself into believing what you want to believe. It also guarantees we all follow the same high standards of openness, honesty, and goodwill. However, problems arise when the means become an end in themselves.


Since peer review is such an important concept in Science, we have built all our social scientific processes around it. We set deadlines, ratings, whole systems to formalize and organize what peer review means. We have double-blind and single-blind peer review to guarantee that authors and reviewers don't take revenge on each other. We have evaluation forms and protocols, and we have workshops upon workshops about peer review.


And yet, time and time again, experiments have shown that reviews are significantly inconsistent. If you randomly redistribute the papers at a top AI conference, a large part of the accepted papers get rejected, and vice versa. However, I do not take this as evidence that scientists are lousy reviewers. Not even close. Scientists are pretty good at being objectively critical of others' work and our own; we do that every single day! I think the problem lies in the system and the incentives built around it, mostly for the benefit of the big players in the Academic world, the publishers.


🤕 The symptoms

Every time you take a metric and turn it into an objective, it ceases to be a useful metric. This has happened in Science with the concept of publishing a paper. Publishing a paper is the main mechanism for socializing research. A research paper usually describes some scientific hypotheses in terms as clear as possible, a protocol to test (i.e., falsify) those hypotheses, and an honest and critical discussion of the results and their implications. By reading a paper, fellow scientists can come up with additional hypotheses or ideas, and build on top of previous work. And every time you use someone else's ideas as part of your own, you are supposed to include a citation. This is what Newton was referring to when he said he had "stood on the shoulders of giants".


Over time, the most significant scientific discoveries should accumulate a large number of citations, because everyone building on top of your ideas would cite you. Hence, a large number of citations is seen as a sign of scientific achievement, and that is often taken as the One Metric of Academic Success. See the problem here?


Once citations become a mark of distinction, everyone tries to maximize them. A lot of strategies begin to arise, like publishing lots of low-effort papers instead of fewer and better ones, and working only on the most fashionable topics. Since to get cited you have to get published first, publishers become the gatekeepers. A feedback loop starts to build in which publishers try to be as exclusive as possible to attract better papers, since more citations imply more readers, which imply more subscriptions; and authors aim for the most exclusive publishers since, otherwise, they won't get enough citations.


In this dynamic, two very harmful things start to happen.


🅰️ First, scientists spend a lot of effort and money, very often public money, on research that never gets published because of the massive competition. Ironically, once that research made with public money is published, it is often put behind a paywall, which most universities and institutions pay subscriptions for. So taxpayers end up paying for research twice, once when it is done by Alice and again when Bob wants to read Alice's paper. Isn't that crazy enough?


🅱️ The second issue is more subtle but far more harmful. In this process of out-competing each other for citations, we forgot what's important about Science. It's a social process designed to improve human life by solving humanity's most pressing problems. But this competition, far from what free-market ideologues might believe, only serves to undermine the very purpose of Science:



  • 👎 The most fashionable topics get the most attention, and those are often not correlated with the needs of the many.

  • 👎 Also, scientists are not born, they are educated. If competition is so fierce that junior researchers don't get a break, we end up losing the best minds before they get a chance to shine.

  • 👎 And finally, this constant competition for citations discourages any kind of self-critical research, any analysis of negative results, and any replication study, because no one will cite you for saying "yeah, I retested this, and it does seem to work as they originally said...".


This discussion started with peer review, and how the whole academic publishing industry is built around this concept. Now is the time to criticize it. Since scientists are forced to compete for attention, we have turned peer review from the supportive and self-healing process it should be into the most unpleasant part of doing research. To be fair, not all reviewers are nasty, and when we are, I'd argue it's more often than not because the system forces us to be.


⭐ The new paradigm

I believe the root of the problem in this picture should be clear by now.


⚠️ The incentives for scientists are not aligned with the purpose of Science.


So, how do we realign the incentives of scientists with the original purpose of Science, and make it better for everyone? Honestly, I don't know. But I think we can take some ideas from the FOSS community to at least foster some good practices which I believe might put us on the right track.


The idea starts with embracing Openness in the whole process of scientific discovery and innovation. This is not my original idea, of course; there are some commonly shared principles of "open science" in the academic community. This is one possible way to express them:



  • Open Methodology: Document the application of methods and the entire process behind them as far as practicable and relevant.

  • Open Source: Use open source technology (software and hardware) and open your own technologies.

  • Open Data: Make the data freely and easily available.

  • Open Access: Publish openly and make publications usable and accessible to everyone.

  • Open Peer Review: Provide peer review in an open and public forum.

  • Open Educational Resources: Use free and open materials for education and in university teaching.


In this form, these principles are quite abstract, and there are many ways in which they could be implemented. There are plenty of degrees of "open science" like publishing in open access journals managed by non-profit organizations, publishing pre-prints before submitting to "traditional" journals, and all the good practices around making data and protocols publicly available.


I want to focus on some key ideas I think could be fruitful to try, without implying that this is the absolute solution to this problem, but rather a small part of a much larger paradigm shift that Science has to undertake.


⚙️ The practices

These are my proposals. Most of them relate specifically to the peer-review process because, as I said before, it is a pillar of Science, but also because I think this is the one place where we as a community can innovate the most, without requiring government grants or changing the way bureaucratic institutions work. Peer review sits at the base of the entire scientific enterprise, and any major change in its functioning could have a massive impact up the chain.


1️⃣ Public reviews


Let's start by acknowledging that single- and double-blind reviews are more harmful than helpful. These measures are supposed to shield reviewers and authors from future retaliation and disallow any form of favouritism, which should make the review process more just and honest. In practice, they shield reviewers from criticism and make the whole review process less transparent.


I propose to turn this concept around and make all reviews completely public. We have to trust that we are all reasonable individuals and professional scientists, able to provide objective judgment without favouritism. And if some of us aren't, then the reviews themselves are public, and subject to review and criticism.


👉 This is very easy to implement with any workflow that allows posting comments on a public forum. Note that I don't necessarily mean that anyone can review (this is discussed further down), but even if only specific "official" reviewers are assigned to a paper, their comments, their identity, and their credentials should be public.
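
👉 To make this a bit more concrete, here is a minimal sketch, in Python, of what a public review record could look like. Everything here (the class, the fields, the example values) is an illustrative assumption, not an existing API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewComment:
    """A single public review comment on a paper.

    Unlike blind review, the reviewer's identity and credentials are
    part of the record, so the review itself can be reviewed.
    """
    paper_id: str        # identifier of the paper under review
    reviewer_name: str   # public identity, not an anonymous handle
    reviewer_orcid: str  # public credentials, e.g. an ORCID iD
    body: str            # the review text itself
    posted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    replies: list["ReviewComment"] = field(default_factory=list)

# An "official" reviewer posts a comment anyone can read and respond to.
review = ReviewComment(
    paper_id="2024.0042",  # hypothetical paper identifier
    reviewer_name="Alice Example",
    reviewer_orcid="0000-0000-0000-0000",
    body="The ablation in Section 4 does not control for model size.",
)
```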


2️⃣ Continuous peer-review


This idea ties in with the previous point. Currently, almost all peer review (that I'm aware of) happens in the context of some specific conference or journal. What I'm proposing here is to detach the peer-review process from any journal or conference and make it instead integral to the paper. Every paper would carry all its reviews around the Internet with it, and if it was rejected at some previous point, a future conference or journal editor would have access to the full history of reviews and changes to reconsider the paper for "mainstream" publication (we'll talk more about what this means later).


👉 I can see this happening similarly to how issues are handled on Github. You publish a paper, and potential reviewers open "issues" against it, one for each important thing to address. Issues would be discussed and worked on in public, and there would be a history of every change introduced into the paper, with links to the issues being fixed.
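
👉 As a rough sketch of the data model, assuming a hypothetical GitHub-like platform for papers (none of these class or method names come from a real system):

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    """A public objection against a paper, one per thing to address."""
    title: str
    opened_by: str       # public reviewer identity
    discussion: list[str] = field(default_factory=list)
    closed: bool = False # closed once a revision addresses it

@dataclass
class Paper:
    """A paper that carries its full review history around with it."""
    title: str
    authors: list[str]
    revisions: list[str] = field(default_factory=list)  # change history
    issues: list[Issue] = field(default_factory=list)

    def open_issue(self, title: str, reviewer: str) -> Issue:
        issue = Issue(title=title, opened_by=reviewer)
        self.issues.append(issue)
        return issue

    def revise(self, revision_id: str, fixes: list[Issue]) -> None:
        """Record a new revision and close the issues it addresses."""
        self.revisions.append(revision_id)
        for issue in fixes:
            issue.closed = True

# A rejected paper keeps its history; a future editor sees everything.
paper = Paper("On Public Peer Review", ["Alice", "Bob"])
issue = paper.open_issue("Baseline comparison is missing", "Carol")
paper.revise("rev-2", fixes=[issue])
```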


Since no paper is perfect, conference and journal editors should not aim to publish issue-free papers, but rather papers that demonstrate usefulness in their current state. A healthy list of open and closed issues would be an indication of a solid paper, the same way it is for software.


3️⃣ Encouraging reviews


The next problem I want to tackle is how to kick-start the reviewing process. Once we detach reviews from specific conferences or journals, how can we guarantee everyone has access to good reviews? Sure, rockstar scientists will get thousands of reviews, but what about junior researchers who are just starting?


One idea is to see reviewing as an integral part of the scientific career. Researchers should also be evaluated in terms of how much value they put back into the community, and one way senior scientists can contribute is by reviewing junior scientists' work. We should be proud to put in our CVs how many reviews we have given. And good reviews, which are in turn evaluated positively by the author and other reviewers, should count towards one's scientific output.


👉 Scientists would get a "badge" with the number of reviews they have given, and display it on their homepage, their LinkedIn or ResearchGate profile, etc. This badge would link to some online list that links back to all their reviews. This could maybe be hosted on our ORCID profiles or by any similar non-profit initiative.
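
👉 The badge itself is trivial to produce. As a sketch: shields.io, for example, can already render static badges from a URL; everything else here (where the review list lives, how the count is gathered) is an assumption:

```python
def review_badge_url(review_count: int) -> str:
    """Build a static badge image URL showing a public review count.

    Uses shields.io's static badge format; how the count is gathered
    (e.g., from an ORCID-hosted list of reviews) is left open.
    """
    return f"https://img.shields.io/badge/public_reviews-{review_count}-blue"

# Embed this image on a homepage, linking back to the full review list.
print(review_badge_url(42))
# https://img.shields.io/badge/public_reviews-42-blue
```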


Also, senior researchers are part of a community, and are often connected with like-minded individuals in other institutions and countries. It should be part of their work to look out for each other's students and junior researchers. And yes, someone will say "but then you can give a good review to my students if I give a good review to yours". Again, this is why all review is, first and foremost, public in nature.


4️⃣ Qualitative evaluations


Now let's move on to specific review formats. Too often I see very long lists of checkboxes, 1-5 ratings, etc. I believe there is value in having a structured evaluation template, to make sure we more or less agree on what the core issues are that we should care about. But going to the extreme of having 10 different ratings for a paper is insane! What is the difference between a 6 and a 7?


When we put reviewers under the pressure of giving numerical scores, we are asking them to unconsciously introduce all the biases they hold about that particular problem, field, approach, or author. There is simply no objective way to numerically compare two different papers.


A good research paper needs to have a solid methodology (correctly applying the principles of science as is common practice in that field), provide relevant results and conclusions (either positive or negative), be feasible to reproduce by independent researchers, and have a clear presentation. Either the paper is good enough to be considered publishable, if all these aspects are covered, or it isn't. That's it.


👉 I prefer a simple evaluation form that asks "Is this aspect of the paper up to the scientific standard?" for each core aspect, plus a piece of free-form text for you to explain what is lacking where it isn't:

Methodology      [x] 👍  [ ] 👎
Results          [x] 👍  [ ] 👎
Reproducibility  [ ] 👍  [x] 👎
Presentation     [ ] 👍  [x] 👎


A specific conference or journal might want to evaluate the potential impact or significance of a paper before accepting it for publication. But impact and significance are not what Science is about. There are, however, legitimate cases where they matter.


If you have to allocate a restricted pool of resources (e.g., grant money), of course you want to evaluate impact. Yet I argue this is not part of the peer-review process, but a subsequent analysis that each institution or publisher should do based on their specific criteria. Peer review should be a process by which the scientific community as a whole certifies that some research is sound science, irrespective of idiosyncrasies.


5️⃣ Self-publishing


Now that the peer-review process is completely detached from the "mainstream" publishing industry, who decides when a paper is ready to be published? Well, of course, the authors! It is up to the authors to determine that, given all the feedback received, they consider their work production-ready.


👉 All papers would be published first in draft mode, perhaps even before being completely written. During the draft phase, you collect all the feedback you can from peer reviews and work on the issues you consider most relevant. When you feel it is good enough (possibly because most of the recent reviews are favourable), you hit that Publish button and create a release. If some errors appear later on, you fix them and publish another release.


👉 What becomes citable then? Easy: each release of each paper gets a unique DOI that will forever point to that exact version, together with all its metadata and reviews. If I cite something of yours and criticize it, and you later fix it, that's OK. My criticism is still valid because it points to a previous version that is still indexable, and the fact that you fixed it only speaks better of you!
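
👉 This is close to how Zenodo already mints a distinct DOI for every release of a GitHub repository. A minimal sketch of the idea (the DOI values and class names are made up for illustration):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Release:
    """An immutable, citable snapshot of a paper.

    Each release gets its own DOI, so a citation (or a criticism)
    always points at the exact version it was written against.
    """
    version: str   # e.g. "v1.0", "v1.1"
    doi: str       # permanent identifier for this exact version
    changelog: str

@dataclass
class PaperRecord:
    title: str
    releases: list[Release] = field(default_factory=list)

    def publish(self, version: str, doi: str, changelog: str) -> Release:
        release = Release(version, doi, changelog)
        self.releases.append(release)
        return release

paper = PaperRecord("What Academia can learn from Open Source")
v1 = paper.publish("v1.0", "10.5555/example.v1.0", "Initial release.")
# A later fix does not invalidate citations of v1; it gets a new DOI.
v2 = paper.publish("v1.1", "10.5555/example.v1.1", "Fixed Table 2.")
```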


But wait, won't authors publish a lot of low-effort papers to pad their CVs? Well, maybe someone will, but who cares? We started by saying that, intuitively, citations should be a good measure of scientific quality. This is still true in this format. If I release a bunch of crappy papers, no one will cite them.


And besides, who thinks CVs are useful? Anyone trying to evaluate me as a researcher would not look at some list of titles and numbers I pasted into a Word document. They would go to my researcher profile and see my most significant work, the reviews it has received, and how my whole research process works!


This doesn't mean that I get to decide my work is relevant, however. This just means I get to decide my work is ready to be consumed by the scientific community. The community will still judge my work's relevance by citing it, criticizing it, and in two more ways I left for the end.


6️⃣ Conferences for networking


Now that all papers are being published by their authors, what's the purpose of scientific conferences? We can now recover their original purpose. Conferences were created as a medium to get like-minded scientists together to share their experiences and discuss the most relevant problems of their field. But as conferences became more and more a mainstream path to getting published, their organization became more and more about managing the peer-review process.


Now that peer review is detached from conferences, their organizers are free to focus on scouting the most interesting topics and the most significant results in those topics, and inviting the speakers they believe will bring the biggest value. Which papers get to be presented? I think we could deal with that in a few ways:


👉 As the organizer of a thematic conference, I would spend half of the year looking around for interesting papers and inviting their authors. They would still pay for their accommodations (or their institutions would), and they would come to enjoy what's best in every conference, the networking.


👉 I could also open a call for papers, as usual, but authors would submit papers that are already reviewed and released. My role would be to decide, based primarily on thematic fit, what I think is most relevant for my community.


👉 There is no need even to attach participation to a published paper. Authors could simply submit "talks", possibly backed by one or more papers that support their submission, as is already common in some conferences.


This would completely reshape what conferences are for (at least in my field). There is no reason why we should wait for the top conferences of the year to be able to read papers. We would go to conferences for the chance to talk with the researchers we admire about their work.


And someone will ask, but how would conferences compete if they have no publication rights? Well, I argue this would be very good. Conference organizers would have to compete on the grounds of providing a better environment for networking, interesting events, and nice amenities, but no one would have a monopoly on the knowledge itself. There is no reason why the same talk cannot be presented at more than one conference if enough people are willing to listen.


7️⃣ Journals for socializing


And finally we come round to the original culprit, the infamous research journal. Now that papers are published openly, what are journals good for? Well, what they were originally designed for: socializing research! Journals were created as a means for academic societies to collect the most relevant research in a given community and publish it for a larger audience.


Then commercial publishers arrived and turned science into a business, and journals became paywalled gatekeepers of knowledge that take original research, often paid for with public money, and resell it again for public money. The largest academic publishers often state they have costs to cover, but there is plenty of evidence that they make a significant profit. And that's OK, but if I, as a journal editor, want to make a profit, I'm gonna have to innovate.


👉 Like conferences, I could scout around and feature the most interesting papers in thematic issues, maybe ask the authors to add some new comments on them, prepare explainer videos, add links, and put some effort into turning those "raw" papers into beautifully typeset pages.


👉 State-of-the-art (SOTA) reviews would be a nice fit for journals as well. These are not original research papers, but they often provide a lot of value by analyzing a bunch of papers, highlighting common trends, and pointing out interesting lines for future research.


👉 I would also commission editorial articles, written specifically for an issue, that summarize a particular subject in layman's terms to introduce it to a larger audience. I would even pay scientists who are good communicators for this work.


Journals would have to compete on the grounds of being good at selecting topics and papers to socialize, and of providing some additional editorial value. In any case, original research papers would be owned only by their authors and would always be published under some public license (e.g., Creative Commons). This would ensure that Science belongs to the ones who ultimately pay for it, that is, society.


💻 The system

Putting all these ideas together in a functioning system will require a lot of work. From the infrastructure point of view, I envision something akin to Github: a repository of open-access papers with built-in commenting, reviewing, and social features. Ideally, it would also have a web UI for editing, similar to Overleaf, and, of course, be fully integrated with Git. I understand this might not be the best solution for academic communities that are not very closely related to software, e.g., the social sciences, mostly because it could pose a significant learning curve for their members and become more of a hindrance than a help.
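
👉 The Git integration could be as thin as tagging releases. A sketch of what the author-side workflow might look like, driving plain git commands from Python (the repository layout and the platform picking up the tag are assumptions):

```python
import subprocess

def publish_release(version: str, message: str) -> None:
    """Tag the current state of the paper's repository as a release.

    The hypothetical platform would pick up the tag, freeze that
    version, and mint a DOI for it (see the release sketch above).
    """
    subprocess.run(["git", "tag", "-a", version, "-m", message], check=True)
    subprocess.run(["git", "push", "origin", version], check=True)

# The paper's LaTeX sources, data, and reviews all live in one repo.
publish_release("v1.0", "First release after addressing open issues.")
```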


From the social point of view, kick-starting such a system would require a massive community effort, and not because of the infrastructure cost; that's the minor issue. I think the largest obstacle to this kind of paradigm shift is that a large part of the community would have to move away from journals and conferences as the main publication channels simultaneously. Otherwise, the few that start the effort will be completely disconnected from the rest of the community.


I can see this happening as an effort from, say, the AI community, or any other technically savvy collective. Tomorrow morning, all the senior scientists who publish at ICLR, ICML, NeurIPS, and ACL could suddenly decide they want to go fully open. It would require the conference organizers to support the initiative as well. Instead of opening a call for papers, the conferences could open a call for submissions that have already been peer-reviewed and published in this new format.


Some non-profit organization could be formed from within the community to provide the infrastructure. Since there will be some operating costs, this platform would require some payment, but it would be very small compared to the publishing fees of most major open-access journals. Also, some of the big players in the industry could support this initiative by providing hosting and infrastructure for free, which would be a big PR boost for these companies.


📊 The metrics

We started this discussion by saying that an intuitively good metric for estimating scientific impact, i.e., citations, had become an objective and thus lost its entire meaning. But citations are not an inherently bad metric; it's just that when we use citations as the one quantitative metric to compare individual researchers, we miss the entire point.


Likewise, the fact that we self-publish all our papers doesn't mean that being featured in a major conference or journal is worthless. On the contrary, when everything we publish is open, being featured in a mainstream publication becomes an even better measure of impact, because it is no longer tied to my financial capacity or any other unfair advantage I might enjoy in the community. This would be very good for Third World researchers, who produce valuable science but are often cut off from mainstream publication for reasons completely unrelated to the quality of their research.


If we can restructure the incentives and processes of Science such that they are aligned with the purpose of Science as primarily a means to improve human life, everything else will fall into place. Once researchers are free from predatory publishing practices, meaningless numerical statistics, and unhealthy competition, I believe we will all focus on what we love most, doing sound research for the good of mankind.


Then, all those metrics that are used today will regain their meaning. Being invited to a top-tier conference would mean that your community wants to hear from you. Being featured in a top-tier journal would mean that some editors consider your work high-quality. And being cited often would mean that your research is producing real impact, that you are becoming a giant on whose shoulders others can stand.


🗨️ This is a topic I'm very passionate about, and I want to hear your feedback.

