Saturday, December 27, 2014

Three observations about anonymity in peer review

I made a vow to myself to not blog about peer review ever again. Oh well. Anyway, I have been thinking about a few things related to anonymity in the review process that I don’t think I’ve heard discussed elsewhere:
  1. Everyone I talk to who has published in eLife has raved about it. Like, literally everyone–in fact, they have all said it was one of their best publication experiences, with a swift, fair, and responsive review process. I was wondering what it was in particular that made the review process so much less painful. Then somebody told me something that made a ton of sense (I forget who, but thanks, Dr. Insight, wherever you are!). The referees confer to reach a joint verdict on the paper. In theory, this is to build a scientific consensus and harmonize the feedback. In practice, Dr. Insight pointed out that the main benefit is that it’s a lot harder to give those crazy jackass reviews we all get when you know you’ll be discussing the paper with your fellow reviewers, who are presumably peers in some way or another. You don’t want to look like a complete tool or someone with an axe to grind in front of your peers. And so I think this process yields many of the benefits of non-anonymous peer review while still being anonymous (to the author). Well played, eLife!
  2. One reimagining of the publishing system that I definitely favor is one in which every paper gets published in a journal that publishes based only on technical soundness, like PLOS ONE. Then the function of the “selective journal” is just to publish a “Best of…” list of the papers they like the best. I think a lot of people like this idea, which decouples assessment of whether the paper is technically correct from assessment of “impact”. In theory, it sounds good. One issue, though, is that it ignores the hierarchy on the reviewer side of the fence. Editors definitely do not select reviewers randomly, nor just based on field-specific knowledge. And not every journal gets the same pool of reviewers–you better believe that people who are too busy to review for Annals of the Romanian Plant Society B will somehow magically find time in their schedule to review for Science. Perhaps what might happen is that this new version of “Editor” (i.e., literature curator) would commission further post-publication reviews from a trusted critic before putting a paper on their list. Anyway, it’s something to work out.
  3. I recently started signing all my reviews (not sure if they ever made it to the authors, but I can at least say I tried). I think this makes sense for a number of reasons, most of which have been covered elsewhere. As I had noted here, though, there is “Another important factor that gets discussed less often, which is that in the current system, editors have more information than you as an author do. Sometimes you’ll get 2/3 good reviews and it’s fine. Sometimes not. Whether the editor is willing to override the reviewer can often depend on relative stature more than the content of the review–after all, the editor is playing the game as well, and probably doesn’t want to override Prof. PowerPlayer who gave the negative review. This definitely happens. The editor can have an agenda behind who they send reviews to and who they listen to. So no matter how much blinding is possible (even double blind doesn’t really seem plausible), as long as we have editors choosing reviewers and deciding who to listen to, there will be information asymmetry. Far better, in my mind, to have reviewer identities open–puts a bit of the spotlight on editors, also.” Another interesting point: as you work your way down the ladder, if you get a signed negative review, you will know who to exclude next time around. Not sure of all the implications of that.
Anyway, that’s it–hopefully will never blog about peer review again until we are all downloading PDFs from BioRxiv directly to our Google self-driving cars.

Friday, December 26, 2014

Posting comments on papers

For many years, people have wondered why most online forums rack up hundreds of comments, while even the most exciting scientific results are met with the sound of crickets chirping. There are lots of theories as to why: fear of scientific reprisal, fear of saying something stupid, lack of anonymity.

Perhaps. But I wonder if part of it is just that it feels… incongruous to post comments on scientific papers. To date, I have posted exactly two comments on papers. My first owed its genesis (I think) to the fact that I had just read something about how nobody comments on papers, and so I was determined to post a comment on something. And it was a nice paper on something I found interesting and so I wanted to say something. I just now wrote my second comment. It was on this AWESOME paper (hat tip to Sri Kosuri) comparing efficiency of document preparation using Word vs. LaTeX (verdict: LaTeX loses, little surprise to me). Definitely something I found interesting, and so I somehow felt the urge to comment.

And then, as I started writing my comment, something just felt… wrong. Firstly, the process was annoying. I had to log in to my PLOS account, which I of course forgot all the details of. Then, as I was leaving my comment, I noticed a radio button at the bottom to say whether I had a competing interest. The whole process was starting to feel a whole lot more official than I had anticipated. Suddenly, the relatively breezy and light-hearted nature of my comment felt very out of place. It’s just very hard to escape the feeling that any commentary on a scientific paper must be couched in the stultifying language and framework of the typical peer review, which is just so different from the far more informal commentary that you get on, for instance, blog posts. And heaven forbid if you actually posted a joke or something like that.

I feel like part of the reason nobody comments is that publishing a paper seems like a Very Serious Business™, and so any writing or commentary associated with it seems like it should be just as serious. Well, I agree that publishing a paper is a very tedious business, but I think making scientific discourse a bit more lighthearted would be a good thing overall. And who knows, one side-effect could be that maybe someone might actually read the paper for a change!

Tuesday, December 23, 2014

Fortune cookies and peer review

Ever play that game where you take the fortune from a fortune cookie and then add “in bed” to the end of it for a funny reinterpretation? I’ve found it works pretty well if you just replace “in bed” with “in peer review”. Behold (from some recent fortune cookies I got):

Look for the dream that keeps coming back. It is your destiny in peer review.

Wiseness makes for oneself an island which no flood can overwhelm in peer review.

Ignorance never settles a question in peer review.

In the near future, you will discover how fortunate you are in peer review.

Every adversity carries with it the seed of an equal or greater benefit in peer review.

You will find luck when you go home in peer review.

Also reminds me of the weirdest fortune I ever got: “Alas! The onion you are eating is someone else’s water lily.” Not sure exactly what that means, in peer review or otherwise…

Saturday, December 20, 2014

Time-saving tip–make a FAQ for almost anything

One of the fundamental tenets of programming is DRY: Don’t Repeat Yourself. If you find yourself writing the same thing multiple times, you’re creating a problem: you’ve wasted effort writing it twice, and if you ever make a change, you now have to keep every copy consistent.
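In code, the idea looks something like this (a toy Python sketch with made-up example strings, just to illustrate the analogy to a FAQ):

```python
# Repeating yourself: the same answer pasted into two different replies.
# If the protocol ever changes, every copy has to be found and updated.
def reply_to_student(name):
    return f"Hi {name}, the hybridization step runs overnight at 37C."

def reply_to_collaborator(name):
    return f"Hi {name}, the hybridization step runs overnight at 37C."

# DRY version: write the answer once and refer to it everywhere --
# the code equivalent of pointing everyone at a single public FAQ.
PROTOCOL_ANSWER = "the hybridization step runs overnight at 37C."

def reply(name):
    return f"Hi {name}, {PROTOCOL_ANSWER}"
```

Update `PROTOCOL_ANSWER` in one place and every reply stays consistent, which is exactly the appeal of a single public FAQ page.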

In thinking about what I have to do in my daily life, a lot of it also involves repetitive tasks. The most onerous of these are requests for information that require somewhat lengthy e-mails or what have you. Yet, many times, I end up answering the same questions over and over. Which brings up a solution: refer people to a publicly available FAQ.

I first did this for RNA FISH because I was always getting similar questions about protocols and equipment, etc. So I made this website, which I think has been useful both for researchers looking for answers and for me in terms of saving me time writing out these answers for every person I meet.

I also recently saw a nice example (can’t find the link, darn!) where someone had put together a letter of recommendation FAQ. As in, if you want a letter of recommendation from this person, here’s a list of details to provide and a list of criteria to determine whether they would be able to write a good one for you.

Another senior professor I met recently said that she got sick of getting papers from her trainees that were filled with various errors. So she set up a list of criteria and told everyone that she wouldn’t look at anything that didn’t pass that bar. Strikingly, she said that the trainees actually loved it–it made a nice checklist for them and they knew exactly what was expected of them.

I think all of these are great, and I think I might make up such documents myself. I’m also thinking of instituting an internal FAQ for our data management in the lab. Any other ideas?

Sunday, December 14, 2014

Origin and impact of stories in life sciences research: is it all Cell’s fault?

I found this article by Solomon Snyder to be informative:

http://www.pnas.org/content/110/7/2428.full

Quick summary: Benjamin Lewin realized in the 1970s that the tools of molecular biology had matured to the point where one could answer a question “soup to nuts”. So his goal was to start a journal that would publish such “stories” that aimed to provide a definitive resolution to a particular problem. That journal was Cell, and, well, the rest is history–Cell is the premier journal in the field of molecular and cellular biology, and is home to many seminal studies. Snyder then says that Nature and Science and the other journals quickly picked up on this same ideal, with the result that we now have a pervasive desire to “tell a story” in biomedical research papers.

I was talking with Olivia about this, and we agreed that this is pretty bad for science. Many issues, the most obvious of which is that it encourages selective omission of data and places undue emphasis on “packaging” of results. Here are some thoughts from before that I had on storytelling.

I also wonder if the era of the scientific story is drawing to a close in molecular biology. The 80s were dominated by the “gene jock”: phenotype, clone, biochemistry, story, Cell paper. I feel like we are now coming up on the scientific limitations of that approach. Molecular biology has in many ways matured in the sense that we understand many of the basic mechanisms underlying cellular function, like how DNA gets replicated and repaired, how cells move their chromosomes, and elements of transcription, but we still have a very limited understanding of how all this fits together for overall cellular function. Maybe these problems are too big for a single Cell paper to contain the “story”–in fact, maybe it’s too big to be just a single story. Maybe we’re in the era of the molecular biology book.

As an example, take cancer biology. It seems like big papers often run from characterizing a gene to curing mice to looking for evidence for the putative mechanism in patient samples. Yet, I think it is fair to say that we have not made much progress overall in using molecular biology to cure cancer in humans. What then is the point of those epic papers crammed full of an incredible range of experiments? Perhaps it would be better to have smaller, more exploratory papers that nibble away at some much larger problems in the field.

In physics, it seems like theorists play a role in defining the big questions that then many people go about trying to answer. I wonder if an approach like this might have some place in modern molecular biology. What if we had people define a few big problems and really think about them, and then we all tried to attack different parts of it experimentally based on that hard thinking? Maybe we’re not quite there yet, but I wouldn’t be surprised if this happened in the next 10-20 years.

(Note: this is most certainly not an endorsement for ENCODE-style “big science”. Those are essentially large-scale stamp collecting expeditions whose value is wholly different. I’m talking about developing a theory like quantum mechanics and then trying to prove it, which is a very different thing–and something largely missing from molecular biology today. Of course, whether such theories even exist in molecular biology is a valid question…)

Saturday, December 13, 2014

The Shockley model of academic performance

I just came across a very interesting post from Brian McGill about William Shockley’s model for why graduate student performance varies so much. Basically, the point is that being successful (in this case, publishing papers) requires clearing several distinct hurdles, and thus requires the following skills:
  1. ability to think of a good problem
  2. ability to work on it
  3. ability to recognize a worthwhile result
  4. ability to make a decision as to when to stop and write up the results
  5. ability to write adequately
  6. ability to profit constructively from criticism
  7. determination to submit the paper to a journal
  8. persistence in making changes (if necessary as a result of journal action).
Now, as Brian points out, if you were 50% better at all of these (not way beyond the norm, but just a little bit better), then your probability of succeeding in your assigned task (which is the product of the individual probabilities) would be roughly 25 times higher (1.5^8 ≈ 25.6). This is huge! And it’s also, to me, a reason for great hope. The reason is that if, alternatively, being 25 times better required being 25 times better at any one particular thing, then it seems to me that it would require at least some degree of unusually strong innate ability in that one area. Like, if it was all about writing fast, then someone who was a supernaturally fast writer would just dominate, and there’s nothing you could really do to improve yourself to that extent. But 50%? I feel like I could get 50% better at a lot of things! And so can you. Here are some thoughts I had about creativity, writing with speed, execution and rejection, and there are tons of other ways to get better at these things. Note that by this model, by far the most important quality in a person is the ability to reflect on their strengths and weaknesses and improve themselves in all of these categories.
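The multiplicative arithmetic is easy to check for yourself (a quick sketch; the eight 1.5s stand for being 50% better at each of the eight skills on the list):

```python
# Shockley's model: publishing requires clearing all eight hurdles,
# so the overall success rate is the *product* of the per-skill rates.
def overall_rate(per_skill_rates):
    total = 1.0
    for rate in per_skill_rates:
        total *= rate
    return total

baseline = overall_rate([1.0] * 8)   # normalized baseline researcher
improved = overall_rate([1.5] * 8)   # 50% better at every one of the 8 skills
print(improved / baseline)           # 1.5**8, roughly 25.6
```

Being just 50% better across the board compounds to a ~25-fold difference in output; getting the same boost from a single skill would mean being 25 times better at that one thing, a far less plausible profile.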

I think this multiplicative model becomes even more interesting when you talk about working together with people in a lab. One point is that establishing a lab culture in which everyone pushes each other in all regards is critical and will have huge payoffs. Practically, this means having everyone buy in to what we collectively think of as a worthwhile idea, how we approach execution, how to write, what our standards of rigor are, and sharing stories of rejection and subsequent success through perseverance. This also helps explain the disastrous consequences of having a toxic person in lab: even if their effect on any one of these qualities in each other person in the lab is small, multiplied across the whole list, the aggregate effect can be huge.

The other point is delegation strategy. It’s clear that in this model, one must avoid bottlenecks at all costs. This means that if you are unable to do something for reasons of time or otherwise and the person you are working with is also unable to do that task, things are going to get ugly. The most obvious case is that most PIs have only a limited capacity (if any) to actually work on a project. So if a trainee is unable to work on the project, nothing will happen. Next most obvious case is inability to write. If the trainee is unable to write and you as a PI have no time or desire to write, papers will not get written, period. Deciding how much time to invest in developing a trainee’s skills to shore up particular weaknesses is a related but somewhat different matter, and one that I think depends on the context.

This model also maybe provides some basis for the importance of “grit” or resilience or motor or drive or whatever it is you want to call it. These underlie those items on the list that are the hardest to change through mentorship. If someone just doesn’t have an ability to work on a project, then there’s not a whole lot you can do about it. If someone does not have the determination to do all the little things required to finish a project or to stick to it in the face of rejection, it will be hard to make progress, and there’s not much that you can do to alleviate these deficiencies as a mentor. I think many PIs have made this realization, and I have often gotten the advice that the most important thing they look for in a person is enthusiasm and drive. I would add to this being open to reflection and self-improvement. Everything else is just gravy.