But not in the ways you might think. Let me tell you a story.
A few months ago, I was using a program someone else wrote to do some data analysis for a star I’d observed. I understood how the program was supposed to work, and what kind of output it should have. But for weeks, even when feeding this program “fake data” in such a way that I knew exactly what result I should get, it insisted on giving me garbage.
It felt like I had tried everything, and nothing worked. I discussed the problem with my advisor and more than one colleague. At one point, we were poring over the specifications for how to format a file so the program understands the information you’re giving it. One column of data the user must provide is the “logarithm of the wavelength.” As it turned out, I had interpreted this as log-10, while the program was expecting natural log (ln). I made the change, and voilà, the output was beautiful and science marched forward.
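For the curious, here’s a minimal sketch of why this bug is so sneaky (the wavelength value is made up for illustration): the two logarithms differ only by a constant factor of ln(10) ≈ 2.3026, so the bad input still looks like a perfectly plausible column of numbers.

```python
import math

wavelength = 5500.0  # a hypothetical wavelength, in angstroms

log10_value = math.log10(wavelength)  # what I was providing
ln_value = math.log(wavelength)       # what the program expected

# The two differ by a constant factor of ln(10) ~ 2.3026, so the
# mistaken column is wrong by that factor everywhere -- large enough
# to produce garbage, but not so absurd that it jumps out at you.
print(log10_value, ln_value, ln_value / log10_value)
```

Nothing in the file format itself flags the error; both columns are smooth, monotonic, and roughly the right order of magnitude.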
This week, a very similar thing happened. I had spent months using an even more complicated program written by somebody else. It is supposed to take many observations of a star and use a Markov chain Monte Carlo technique to find the set of physical values that best fits those observations. But it, too, was consistently giving me garbage. I would tell the program, “this is very close to the right answer, please just tweak things to make it a bit better!” Instead it would jump off a proverbial cliff in parameter space.
My colleagues were beginning to think the program was useless, or that I didn’t know how to use it at all. Then one day, after I had tried everything I could think of and ten more things for good measure, I finally emailed the individual who wrote the program. This person took a look at what I had been feeding the program, and replied simply, “You need to switch Star 1 and Star 2 in one of the input files. They’re backwards.” I did so, and suddenly, finally, the output made sense.
I write plenty of my own programs too, mostly for data analysis or making graphs. In an attempt to avoid problems like these down the line, I enthusiastically write huge numbers of notes to myself. My programs are littered with comments. I keep a virtual “research notebook” with Evernote. I share some of my more versatile programs on GitHub so people solving similar problems won’t have to reinvent the wheel. But I still find it amazingly easy to misinterpret or overlook something small, even when revisiting my own work.
Yes, you have to learn lots of math, physics, programming, and many other related things in order to tackle new and interesting research questions in astronomy. The same is true for many fields. But at the end of the day? We are all banging our heads against walls over a minus sign, or a factor of 2, or mixing up log-10 with natural log, or losing track of which star is which. This kind of “stupid mistake” hurdle is what really makes research hard.
I have experienced this cycle many, many times: learn new technique, apply technique, get clearly incorrect result, try twenty new things (repeat this step several times), confer with colleagues about persistent incorrect result (repeat this one too), and finally discover what stupid mistake I have made. This has happened so much that I am legitimately euphoric when I figure out just how I am being a moron. This moment is easily the highlight of my week. It’s no wonder so many scientists experience impostor syndrome!
And, of course, once I finally identify and correct a mistake, it’s not like the research project is done and the paper is written. Instead, I can confront the next set of problems to solve that have been looming over me while I was busy hunting down that darn mistake. All the while writing, and teaching, and taking care of whatever other commitments I have. It’s exhausting, and can feel never-ending.
That, my friends, is why research is hard.