Saturday, April 26, 2014

How to re-review a paper

Just wrote a post on how to review a paper, but I realized that there's another step to reviewing: dealing with revisions. So here's a small follow-up on how to re-review a paper, i.e., what to do when the authors revise the manuscript and resubmit it for you to look at and approve.

1. If you now find something problematic from the original submission that made it into the revision, well, too bad. In my opinion, you are not allowed two bites at the apple. I realize that this is sort of "bad" in the sense that the manuscript may be published with a flaw, and reasonable people could disagree on this point. But this is my policy, because it's just absolutely deflating to deal with this as an author. I should say that it is pretty rare for something really critical to only surface at the second round. When I've gotten these second round criticisms, they are typically just for inconsequential stuff that the reviewer is trying to trump up because they are a jerk.

2. Keep it short.  Like, two lines or so.

3. Don't get mad if they didn't do everything you asked. We once had a reviewer who clearly didn't like our work ask us repeatedly for more experiments, and by the end they basically just said something to the effect of "I just feel like they should have done an experiment." Not cool.

At this stage, the editors have probably already signed off on it (though not always; we all know horror stories), so I don't think it's worth getting too involved.

Another, somewhat related issue: how to review a paper that you've already reviewed for another journal. I've only been in that situation a few times myself, and it's always weird. I guess you can just recycle your previous review, unless the paper has changed somewhat. Or just decline to review the paper (seen that as well). The best option is to try to get the paper accepted at the first journal so that you don't ever have to see it again.

Friday, April 25, 2014

“Why don’t you just use [some old boring tech] instead?”

One common criticism/comment about measurement techniques that I hear scientists hurl at each other is “Why don’t you just use [some old boring tech] to measure that instead?” Like: “Couldn’t you just use RT-qPCR instead of RNA FISH?” Or “Couldn’t you get the same results with a western blot?” Or “Do you really need single cell analysis to get that result?” These points are fair ones, and often it is completely true that you could use something less precise, cheaper, or more standard to measure the same thing. I sometimes say (or at least think) these sorts of things myself.

But I think this ignores the fact that once a new, higher resolution and more precise technique becomes standard, it can free your mind to think in new ways. Take an example from numerical methods (I forget who told me this example–I want to say it was my office mate at Courant, the absolutely brilliant Yoichiro Mori). When I type sin(1.4787) into MATLAB, it just gives me the answer. It gives me the answer out to far more decimal places than I probably need, but whatever, it just works. The most important part is that *if* what I’m doing requires more decimal places, I don’t need to think “oh, I should probably use this other algorithm for that, maybe that’s why this thing isn’t working”–no, it just works all the way, every time, whether I need it or not. Not having to worry about it frees your mind from the details and lets you ask things you couldn’t or wouldn’t ask otherwise.
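To make that concrete, here's a toy MATLAB sketch (mine, not from any real analysis) comparing a hand-rolled three-term Taylor series for sine against the built-in sin:

    % Toy comparison: truncated Taylor series vs. built-in sin.
    % The three-term series is fine near 0 but quietly loses accuracy
    % further out; the builtin is accurate everywhere, no thought required.
    x = 1.4787;
    taylorSin = x - x^3/6 + x^5/120;   % hand-rolled approximation
    builtinSin = sin(x);               % "just works" to full double precision
    fprintf('truncated Taylor: %.15f\n', taylorSin);
    fprintf('built-in sin:     %.15f\n', builtinSin);
    fprintf('difference:       %.2e\n', abs(taylorSin - builtinSin));

At this particular x, the truncated series is already off in the third decimal place, and you'd have to know enough numerical analysis to anticipate that. With the builtin, you never even have to ask the question.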

For a biological example, let’s say I need to measure a fold change in transcript abundance. In our lab, we’d probably do RNA FISH, which gives us absolute numbers and single cell counts, etc. Now, someone could say, well, do you need counts in single cells? Do you need absolute numbers? Why not just do RT-qPCR? Sure, fair enough. But to the extent that we believe RNA FISH is more accurate, then why not? Then I just don’t have to worry about all the controls, etc. And if I want to ask a question of my data that does require more accurate numbers, well, then I have it. The point is that I have the answer and I can move on to the next thing.

(Cost is of course an important consideration as well, and one that I'm not considering here. Costs change, though, and scientists often don't factor in the time required: RT-qPCR is relatively cheap per reaction, but validating primers, building standard curves, and so on all take time that rarely gets counted.)

I think this “arbitrary precision” principle is important for building assays that we can build upon. It is hard to build new assays out of parts that are finicky and require a lot of careful consideration. Take Sanger sequencing: when we sequence, we don’t usually have to worry too much about how we loaded the sample, etc. It just works, and we take it as such. Same with Illumina sequencing, for the most part. Or buying an off-the-shelf microscope. Yes, these methods and tools have important tricky points that we have to watch out for, but the point is that you don’t have to think about them most of the time. That makes it just that much easier to innovate further upon them. It's much harder to build something on top of a coin flip.

Anyway, just something to think about when someone introduces a fancy new method to measure something. I'm going to try to stay more open-minded myself.

Sitting in the very front of the plane is no fun

I've never sat at the front of the plane before–my few experiences with plane boundary conditions have been at the other end of things. But yesterday I got on a really small plane and there was a seat in front open (no, definitely not first class) and they directed me there. It was awful! You can't do anything bad like play with your laptop at the wrong time, etc. because they are hovering over you like a hawk. Wish I could have gone back to sit in seat 3F again...

Sunday, April 20, 2014

Talking with my mom about GMOs

(I normally steer well clear of anything remotely political on the blog for obvious reasons. But this is not really about politics. Sort of. Whatever.)

I just had a huge argument with my mom about genetically modified organisms (GMOs). My mom is staunchly anti-GMO, and will not change her position no matter what I say. Despite the fact that the scientific consensus is that genetically modified organisms are not intrinsically a bad thing (indeed, essentially all scientists I have met who are even tangentially qualified to speak on the topic agree), my mom simply will not budge. My mom belongs somewhere in a Venn diagram of people who, at the most extreme intersection, are simultaneously reasonably (often formidably) well-educated, believe in global warming, are anti-vaccination, and are anti-GMO. They are also probably very likely to eat gluten-free diets and kale chips. (Note that my mom is neither anti-vaccine nor gluten-free. I have not asked her about kale chips. For the record, my own personal political views are that I am neutral on kale chips.)

What's interesting here is that if you push these folks on climate change, they will probably tell you that the scientific consensus is in very strong agreement that anthropogenic climate change is real. Why would the same argument from scientific consensus not apply to genetically modified organisms or vaccines? I think that reveals a more fundamental truth: nominally, you might expect folks like this, who are well-educated and most likely politically labeled as liberal, to be intrinsically pro-science, and perhaps that is true on some level. But I think a more accurate characterization would be "pro-nature" or "pro-environment" or maybe "anti-man-made". If this coincides with science, then science is right. But if not, well, science is wrong. The mentality is not much different from that of those on the other end of the political spectrum, just with a different set of beliefs.

(Again, for the record, I am both pro-Nature and pro-Science.  I would happily publish papers at either one.) (Haha, bad joke!) (You know you love it.)

I think we scientists had better keep this reality in mind. Just remember that virtually nobody outside of science really knows what the hell we are talking about. Some people may want to support us based on some alignment of their belief system and priorities with some aspects of our beliefs and priorities. Fine. But there are very few people out there who support us based on an actual, real understanding of what we do. I'm not saying this is good or bad, rather that it's a reality and we should brace ourselves accordingly for when those belief systems shift. Overall, perhaps we're lucky to be just a rounding error in the federal budget. I think this reflects the fact that we are a rounding error in most people's minds.

We can of course rightly argue that we provide society with incredible (and outsized) benefits, in that scientific progress has led to enormous gains in virtually every aspect of human life. So perhaps the fact that there are people out there whose interests align with ours, however imperfectly the motivations may match up, is good. But I think that we have to be very careful about relying on a system that is set up in such a way. Here's a Feynman quote that I serendipitously happened across just now that is particularly apropos:
I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I am not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being. We’ll leave those problems up to you and your rabbi. I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you are maybe wrong, that you ought to have when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.
For example, I was a little surprised when I was talking to a friend who was going to go on the radio. He does work on cosmology and astronomy, and he wondered how he would explain what the applications of this work were. “Well,” I said, “there aren’t any.” He said, “Yes, but then we won’t get support for more research of this kind.” I think that’s kind of dishonest. If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing–and if they don’t want to support you under those circumstances, then that’s their decision.
(I actually came across this quote in this blog post about the recent, umm, "back and forth" between Lior Pachter and Manolis Kellis. Scientific celebrity death match: Fight!)

I guess I think that it's a sign of a highly civilized society that we can have people sit around and spend precious resources thinking about stuff that quite often just doesn't matter. Would that be enough justification for the rest of society? Should I be okay with my mom supporting science funding despite her views? Is it possible or even desirable to live a life that is completely free of hypocrisy? To the latter question, I think the answer is no. People would be so boring otherwise.

Saturday, April 19, 2014

How to review a paper

Or, I should say, how I review a paper.

Peer review is a mess. We all know it, and we've written about it endlessly, myself included. And we’ve also railed against a system in which we do all the work for the benefit of the publishers. But wait: if we are doing all the work, then we should be able to bend this system to our collective will, right? When we complain about bad reviews, just remember that we ourselves are the ones giving these terrible reviews. So our goal should be to give good reviews!

How do you do that? Here are some principles I try to think about and follow:

1. Don’t review papers you think will be bad. You can usually tell if a paper is low quality (and thus unlikely to make the cut) just by looking at the title and abstract. If you think it’s a waste of your energy, it probably is, so don’t waste your time reviewing it. Somebody else will do it. Or not. It’s not your obligation to do so. Beware also the ego issue: it's flattering to be asked, but that's not a reason to say yes.

2. Give the authors the benefit of the doubt. The authors have been thinking about the problem for years; you have been thinking about it for a couple of hours. So if you find a particular point confusing, either try to figure it out and really understand it, or just give the authors the benefit of the doubt that they did the right thing. They may not have, but so what? The alternative is worse, in my opinion. I hate getting the “The authors must check for XYZ if they want to make that claim” comment when in fact we already did exactly that.

3. Stick to the evidence and the claims. I stay away from any of that crap about “impact”. To me, that is an editorial responsibility. I try my best to just evaluate the evidence for the claims that the authors are making. If the authors claim something that their evidence can’t support, then I just say that the authors should alter their claim. And I try to say what they can claim, rather than just what they can’t claim. I generally do NOT propose new experiments to support their claim.

4. Do not EVER expand the scope of the paper. It is not your paper, and so it is neither your responsibility nor mandate to dictate the authors’ research priorities. The worst reviews are the ones in which the reviewers ask the authors to do a whole new PhD’s worth of studies. Often, these types of comments include vague prescriptions about novelty and mechanism. To wit, check out this little snippet from another reviewer for a paper I reviewed recently: "However, the major pitfall of current study is not to provide any novel information/mechanism behind the heterogeneity.” What is the author supposed to do with that?

5. Worth repeating–do not propose new experiments. Those reviews with a laundry list of experiments are very disheartening, and usually don’t add much to the paper. Remember that experiments are expensive, and sometimes the main author has left the lab and it’s very hard to even do the suggested experiments. Again, far better to just have them alter their claims in line with what their available data shows. I will sometimes explicitly say “The authors do not need to do an experiment” just to make that clear to the authors and the editor.

6. Be specific. If something is unclear, say exactly what it is that was confusing. If some claim is too broad, then give a specific alternative that the authors should consider (I’ll say “the authors could consider and discuss alternative hypothesis XYZ.”). This helps authors know exactly what they should do to make the paper go through. Also, clearly delineate which points are critical and which are minor.

7. Be positive and nice. Every paper represents a lot of someone’s blood, sweat and tears–usually a young scientist’s–and scathing reviews can be personally devastating. If you have to be negative (which is difficult, following the above rubric), then try not to phrase your review in terms like “I don’t know why anybody would ever do this”. Here’s an example of the opening lines from a review we got a while ago: “My opinion is that this manuscript is not very well thought through and of rather low quality. The authors' misconceptions are most obvious in their gross misstatement… [some relatively minor and inconsequential misstatement]”. Ouch. What’s the point of that? That review ended with “If [after doing a bunch of proposed experiments that don’t make sense] they find that [bunch of stuff that is irrelevant] they would begin to address their question.” It’s very belittling to basically imply that after all this work, we are still at square one. Not true, not positive and not nice.

8. (related) Write as though you were not anonymous. That will help with all the above issues.

One other note: I realize that for editors, even academic editors, the issues of novelty and impact are difficult to gauge and that they feel the need to lean on the reviewers for this information. Fine, I get that. But I will not provide it. Sorry.

Anyway, my main point is that WE run this system. It is within our power to change it for the better. Train your people and yourself to be better reviewers, and maybe we will all even be better people for it.

Wednesday, April 16, 2014

Machine learning, take 2

As mentioned earlier, one of my favorite Gautham quotes is "Would Newton have discovered gravitation by machine learning?" I think the point is solid, that a bunch of data + statistics is not science.

At least not yet. Technically, Newton's brain was a machine, and it came up with gravitation. So it is formally possible to have a machine come up with a theory. And I don't think this argument is just based on a technicality. I was chatting with Gautham yesterday about what a theory is, and doesn't it start with observing a pattern of some kind? Newton had access to centuries (millennia?) of star charts–people had misinterpreted them into epicycles, but the data were there for him. In response to my previous post on statistics, Shankar Mukherji mentioned the work of Hod Lipson, in which they are able to deduce physical laws from data. Very cool. It seems that progress towards this goal is already underway. My guess is that as we make more progress on machine learning (my completely uninformed bet is on neural network approaches), computers will start to make more seemingly incredible inferences about the world. My other guess is that this will happen a lot sooner than we think.

In the meantime, though, I still think we are pretty far from having Newton in silico, and I think that Gautham's point about real learning vs. (the current state of) machine learning is still a valid one. Until this future of intelligent machines arrives, I think most fields of science will still require a lot more thinking to make sense of the data, and simple classifiers may not yield what we consider scientific insight.

Monday, April 14, 2014

Papers are a lot of work, and some of it is even worth the effort

I often say that the current model for publishing is a complete waste of time, and I still think that's true for so many parts of the publishing process, like dealing with reviews, etc.  It's hard for young faculty and even harder for trainees, for whom so much rides on the seemingly arbitrary whims of reviewers and editors.  Wouldn't it just be better to post on a blog, I often wonder?

I think deep down I know the answer is no.  Not that publishing in a particular journal is really important.  But there is something to putting together a well-constructed, high-quality paper that makes it a worthy use of time. Often it feels like finishing a paper is just a bunch of i-dotting and t-crossing. Yet I've often found that it's in those final stages that we make the most crucial insights. Hedia's lincRNA paper is a good example: it was only towards the end, when we were writing it up, that we figured out what was really going on with the siRNA vs. ASO oligonucleotide treatment.  The details aren't so important, but the point is that this was in some ways the most important finding of the paper, and it was lurking within our data almost until the very end.

I've found the last few weeks before submission to be a stressful period, when you really want to get the paper out the door and at the same time you feel like you're putting a lot on the line that you want to get right.  It's exciting but scary to put something out there. And it's especially scary to look at your data again, here at the end of the road, and wonder what it all means after years of hard work. But I feel like this mental incubation period is a necessary part of doing good science, and where many new ideas are born.

Thursday, April 10, 2014

Why is everyone piling on that poor STAP stem cell woman?

I just read a little news feature in Nature today that made me very sad. For those of you who don't know, it's about the researcher from Japan who came up with the STAP method (stimulus-triggered acquisition of pluripotency), in which squeezing cells and putting them in acid can make them into pluripotent stem cells. This is a huge discovery, because it means you can make stem cells without having to perform the usual manipulations (such as genetic ones) to convert cells into stem cells.

Nature published these studies to huge fanfare a little while ago, but then, almost within a month or so, many people started to publicly question whether the results were true, including even one of the coauthors (one of those "victory has a thousand fathers, defeat is an orphan" situations). People started saying that nobody could replicate the findings, and also found some errors in the manuscript, including some plagiarized materials and methods, an old image of a teratoma and some gel-lane mixups. Her institute started an investigation, and she's had to hire a lawyer and defend herself to the press and (from this little Nature article) appears to be in the hospital.

This whole situation is completely ridiculous and strikes me as something that has gotten completely out of hand. Seriously, people, it's just a paper. First, to the method itself: it seems weird to me that people are criticizing this method already so soon after publication. Honestly, if I had a nickel for every time someone couldn't do RNA FISH and said our method doesn't work, I'd have, well, a lot of nickels. And that's something so easy to do that undergrads routinely do it on their first day in lab. Something tells me that this method must be fairly tricky, otherwise someone would have probably already figured it out by now. So let's give her the benefit of the doubt, at least for a couple years.

All the investigations into the little errors and discrepancies in her paper strike me as silly and vindictive. Would all of your papers survive such deep scrutiny? Yes, her paper is very important, significantly more so than anything I've ever done, but remember that she's still just a scientist working in a lab like you and me. Any paper is such a huge mess of data and figures that little errors will creep in from time to time. To discount her work because of them is utterly ridiculous. And plagiarism of materials and methods? Come on! How many ways can you describe how you culture cells?

And if her work doesn't end up panning out? SO WHAT! Again, it's just a paper! If I had a nickel for every Nature paper that ended up being wrong, well, you know what I'm saying. I personally know of several examples of big Cell, Science, Nature papers that are wrong that got people fancy jobs at top institutions, grants, tenure, etc. Some of these are cases in which people have grossly overstated the effect of something through some sort of tricky analysis. Some of these are cases in which the authors greatly overinterpreted the data, leading them to the wrong conclusion, often because of some sloppy science. Some of these are in the fraud gray zone, where they cover up particular discrepant results that either confuse or refute the main conclusions, or do experiments over and over again until they get the "right" outcome. Those people have jobs and everyone's happy–they're certainly not being investigated by their own institutions. Why is this woman being taken down so hard? Is it because what she's doing is so important? In that case, the lesson is clear: don't do anything important. Is that the message we want to be sending?

Wednesday, April 9, 2014

Terminator 1 and 2 were the first great comic book movies

Just watched Terminator 1 again–how awesome! Not quite as good as Terminator 2, which is probably one of the top action movies of all time, but still great, maybe top 10-20. As I was watching it, I was thinking that a lot of what made the movie so appealing is the character of an unstoppable super man (or in this case, robot). Much better as a bad guy than as a good guy, because the unstoppable good guy is boring (see: Superman). Isn't this the prototype for all the modern day comic book movies? One of the things that makes comic book movies exciting is the epic battles between the comic book characters, both doing incredible things, and waiting to see who breaks first. Terminator 2 is still amongst the best (if not the best) in this regard. Another cool thing is that the Terminator movies did this with much worse special effects than we have today, especially Terminator 1, which looks prehistoric. Practically expected claymation sometimes. But it's still awesome. Compelling movie action is more about engendering fear, suspense and relief than just special effects. Still, Terminator 2 would just not have been as awesome without the (for its time) unprecedented special effects, which have aged remarkably well.

NB: Yes, I realize that the original Superman movies came out before T1. But they just weren't as good. And that's a fact. You know it, too.

Sunday, April 6, 2014

The principle of WriteItAllOut

After Gautham's thoughts about code and clarity and lots of paper writing and grant writing these days, a couple of conclusions. First, grant writing is boring. Second, when in doubt, write it all out. For computer code, this means having long variable names. If you have the option of writing a variable name of "mntx" or "meanTranscriptionSiteIntensityInHeterokaryon", go for the latter (there's a little sketch of this after this paragraph). Yes, it takes a little more effort, but not much, and it's a MUCH better idea in the long run. I wish we could do this in math and physics also. The same holds for papers and grants, both in figures and in text. In figures, if you can give an informative axis label, do it. "Mean (CRL)" is much less informative than "Mean transcript abundance per gene in human foreskin fibroblasts". It's longer, but with some creativity you can make it work. In main text, AVOID ALL ACRONYMS! People less often read papers straight through from beginning to end these days, and if someone looks at a paragraph halfway through the text and sees something like:
Similarly, we find that 9.3% of autosomally expressed accessible novel TARs show ASE, we expect this number to be lower than genes as novel TARs correspond to exons of genes.
then they will be lost. And I don't think the space taken by expanding out these acronyms is a legitimate excuse. For the record, though, I do use DNA, RNA, SNP and FISH. Actually, I'd probably be well served to expand out the latter two, although they are fairly standard.
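Here's what I mean for the code case, as a toy MATLAB sketch (the data and names are made up):

    % Made-up numbers, just to illustrate the naming principle.
    siteIntensities = [102.3 98.7 110.4 95.2 101.8];

    mntx = mean(siteIntensities);  % cryptic: what is mntx, six months from now?
    meanTranscriptionSiteIntensityInHeterokaryon = mean(siteIntensities);  % self-documenting

    % Same idea for figure labels: spell the axis out.
    plot(siteIntensities, 'o-');
    xlabel('Heterokaryon number');
    ylabel('Mean transcription site intensity (a.u.)');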

Remember, the main point of a paper is not to make little puzzles for your readers to decipher, but to convey information, both accurately and as efficiently as possible. For grants, well, after getting some... strange reviews, I'm honestly not sure what the goal is. Except to get money.

Figures for talks and figures for papers

We've been working on writing up Olivia's paper, and I've also been giving some talks about the work, which has given me a chance to compare those two modes of communication. There are of course many differences, but one of the most striking is that the figures you use for papers seldom work right for a talk. Paper figures tend to be WAY too information dense for a talk. I noticed this recently when I gave a talk on this material and lazily just incorporated one of our nicely constructed paper figures, only to realize when I was up there talking about it that it would probably take me a good 5 minutes to explain everything in that one picture. Note that this is not just about conveying too much data; in this case, it was just a diagram illustrating the comparison between two hypotheses. There is a fundamental conflict: in a talk, you really can only present one concept at a time and need to make sure people are coming along for the ride. In a paper, you can (and often must, for space reasons) present multiple conceptual layers on top of each other. Hence the high cognitive density of those figures.

Anyway, I reconfigured the talk with some rather different figures, and it went much better (or at least I thought so). Maybe something to keep in mind when preparing a talk.

Saturday, April 5, 2014

Publishing survives partly because of our egos

The internet abounds with discussions about how the scientific publishing system as it currently stands is completely ridiculous: somehow, we scientists do all the work–the blood, sweat and tears of creating the content, then reviewing the content, not to mention writing the review articles, perspectives and news and views, and the little protocol pieces… and we typically pay for the “privilege”, often directly with page charges on top of institutional subscription fees. It's a tax, and it happens up and down the food chain. Yes, the system is pretty messed up. But a lot of people have already written about that, so I won’t bother writing any more on that point.

Instead, I wanted to point out some social aspects of how the system maintains itself. Why do we scientists do all this work for free? Yes, partly because of the desire to maintain the scientific enterprise. But I think another big part of it is because of appeals to our ego. And that gets exploited throughout the publishing ecosystem. Who hasn’t had that warm feeling the first time you get asked to review a paper? After that wears off, then the first time you get asked to review a paper at Nature or Science? Or to write a news and views? Or a review article? Or guest edit a paper? Or asked to be on the editorial board? Or to assemble a collection of reviews or protocols? At which point, you probably go out and ask some young investigators to write little pieces for you, and they will probably be honored that you asked them. Note that at none of these stages do I think any of the scientists involved are purposefully trying to take advantage of anyone–at least I hope not. Nor, probably, are most of the publishers who manage the content, especially the bigger players. But I’m pretty sure at least some of those publishers are. Consider those little reviews that people are always asking you to write, like a chapter in a review book or encyclopedia or whatever. Typically some (probably well-meaning) senior professor in the field will ask you to write it, and you spend time on it and NOBODY reads it. For the author, it's basically just a chance to add a single citation count to your papers. The only solace is that nobody is wasting their time reading them, at least. So who gains? Certainly science gains very little from this enterprise, I can tell you that. Said senior professor gets to say that they edit this review journal as a line item on their CV, so there’s that. But my guess is the winner is the publisher, who gets to say that they have all this content when negotiating with the universities. There's a whole content industry out there based on scientific fluff, built off of our hard work, and enabled by people appealing to our sense of self-importance within our scientific social hierarchy.

So what to do? I can only speak from the perspective of a junior faculty member, but I'm trying to be more judicious about what I choose to do with my time. Of course, I’ve done plenty of time-wasting content generation in the past, and will probably continue to do some, sometimes against my better judgement. And I’m guessing I’ll be presented with tantalizing-sounding opportunities in the future. I just hope that if I do decide to pursue those opportunities, I do so for the right reasons. As a community, when we are faced with such a choice, we should remember that we're highly skilled scientists being asked to do free work for someone. And that someone is probably not working for free. Many companies would pay dearly for access to your knowledge. We shouldn't sell ourselves short, even when people try and make us feel tall.

Tuesday, April 1, 2014

Hypotheses and breadth vs. depth first searching

Given the avalanche of data out there, there is a notion that one can do "hypothesis-free" research, in which scientific findings arise out of sifting through large amounts of data for little nuggets. Indeed, on the face of it, this seems like a very efficient way to do science, because you don't have to collect new data.

Then I was reminded of an optimization problem I had to do once. It involved trying to fit parameters using maximum likelihood–basically, you have a function of a few variables and you try to find a minimum of the function. Now, superficially, you might expect that the best way to solve this problem, especially if you have multiple data sets, would be to pre-compute the function over a big range of values, saving you the time of recomputing over and over again. However, even for just a few parameters, it turns out that the optimization approach is more efficient: the number of grid points you would have to precompute grows exponentially with the number of parameters, whereas solving the optimization problem requires comparatively little work because it converges quickly to the right answer.
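Here's a toy MATLAB version of the tradeoff (the model and numbers are made up): fitting the two parameters of an exponential decay by least squares, which is the maximum likelihood problem under Gaussian noise.

    % Synthetic data from a two-parameter model: y = A*exp(-k*t) + noise.
    t = linspace(0, 10, 50);
    data = 3*exp(-0.7*t) + 0.05*randn(size(t));
    objective = @(p) sum((data - p(1)*exp(-p(2)*t)).^2);

    % Precompute everywhere: a 1000 x 1000 grid is 10^6 evaluations, most
    % of them wasted, and the minimum still falls between grid points.
    [A, K] = meshgrid(linspace(0, 5, 1000), linspace(0, 2, 1000));
    gridValues = arrayfun(@(a, k) objective([a, k]), A, K);
    [~, iMin] = min(gridValues(:));
    pGrid = [A(iMin), K(iMin)];

    % Optimize instead: walk downhill from a guess; fminsearch typically
    % converges here in a few hundred evaluations.
    pFit = fminsearch(objective, [1, 0.5]);

And with a third parameter, the grid balloons to 10^9 points while the optimizer barely notices.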

In some ways, having a scientific question and then trying to answer it is like doing an optimization problem through the space of experiments, whereas the hypothesis-free approach is like precomputation (breadth-first search). The problem with precomputation is that it is inefficient because you precompute many things you don't need (analogy: you make measurements you don't need) and then are probably missing the refined data you do need to converge to the exact answer (analogy: you are missing measurements you do need because nobody thought they were specifically necessary). Indeed, I've often found that there is some data set out there that sort of gives us what we need, but when it comes down to it, we'd have to do it ourselves because we have something very specific in mind, and it's critical to get exactly that.

Then again, local optimization means that you don't learn what the entire function looks like–continuing the analogy, that's like not getting the big picture. And I'm certainly not saying hypothesis driven research is the best way or even a better way, mostly for the simple fact that most hypotheses in biology are wrong. Honestly, I'm not even 100% sure what hypothesis driven research really means. But when it comes to the most efficient way to learn something in science, I'm not sure the answer is as clear cut as it may seem...

Art

I don't know much about art.  But I'm wondering if good art is finding the balance between proportion and disproportion–making something just right enough to bring you in and just wrong enough to drag you along for the ride.

You know, like Michelangelo's David.  Or Terminator 2.