For years, academic hiring and promotion (in computer science, at least) have focused on precisely one thing: the number of first-author papers in top-tier venues.
Focusing on the number of papers encourages people to publish the so-called minimum publishable unit: the smallest thing that stands a chance of being accepted. This discourages large research projects, where it may take several years to reach something worthy of a top-tier venue. It also discourages high-risk projects (or, as they are more commonly called, research projects), because there’s a chance of never reaching a publishable unit at all.
Focusing on first-author publications discourages collaboration. If two people work on a project that leads to a paper, only one of them gets the first-author credit. If a large project needs ten people, it must produce ten publications per year to offer each person the same return on investment as working alone on small incremental projects.
The net result is that even top-tier venues are saturated with small incremental projects. Academics bemoan this fact, but continue to provide the same incentives.
In the #CHERIoT project, we have tried, in a small way, to push back on this. We list authors in alphabetical order and add an asterisk for people who deserve what would conventionally be ‘first author’ credit. Alphabetical order makes it clear that the author list is not sorted by size of contribution (and no such total ordering exists when multiple authors have made indispensable contributions in wildly different areas).
I was incredibly disappointed that the PC Chairs of the #ACM conference that recently accepted one of our submissions decided to push back on this (in particular, on our including exactly the same wording that we used in our MICRO paper). ACM should be actively trying to address these problems, not enabling editorial policies that perpetuate them. If I had understood that the venue objected so strongly to crediting key contributors, I would not have submitted a paper there, nor agreed to give a keynote at one of the associated workshops.
I am in the fortunate position that paper credit no longer matters for my career. But that doesn’t mean that I am happy to perpetuate structural problems, and it is very sad to see so little thought given to them by the organisations with the power to effect change.
The one exception that I have seen, which deserves some recognition, is the Research Excellence Framework (REF), used to rank UK universities and departments. It requires a small number of ‘outputs’ (not necessarily papers) from each department, scaled by the number of research staff but not directly coupled to individuals. These outputs are assessed against a number of criteria, of which publication venue is only one. It is not perfect (you will hear UK academics complaining about the REF with increasing frequency over the next couple of years, as we approach the submission deadline for the next round), but at least it’s trying.
Simply not actively trying to make the problem worse is a low bar that I would previously have expected any conference to clear.