Andy Brunning runs a chemistry-oriented website, but last year he published a broader-interest post, the “Rough Guide to Spotting Bad Science,” which floated around Facebook for a while last summer.
When I teach 100-level classes here at Michigan, and sometimes when I teach higher-level courses, I often include a day introducing the basics of good versus bad science. As the post linked above says, most people get their science from other people or from media reports. My reading of this guide is that it focuses mainly on the conduct of the science and its write-up in the obscure journal (like JGR Space Physics). It includes just a few points (the first two?) on the equally important issue of “bad science” at the later steps in the process: the media presentation of the research and the individual’s interpretation and use of that media report. That topic deserves its own post. Here, we’ll stick to bad science at the researcher level.
What are some key elements for our field from this list? I’ll highlight a few. Correlation and causation (#4) is sometimes an issue for space science, as we search for relationships between quantities that may not have a plausible physical connection. Sample size too small (#6) is another that can easily get us, because we are often limited by the available observations or by the computing cost of running a big code. I hope we are not plagued by “Cherry-picked” results (#10), but the temptation is always there to exclude results that do not support a preferred hypothesis.

Probably the biggest for us is Unreplicable results (#11). First, we rely on nature to conduct the experiments, and we get whatever data we can from whatever observatories happen to be available at the time. Second, and more to the point here, is the issue of “closed” data sets, data processing techniques, and computer code. For others to independently verify a finding and build consensus around a hypothesis, everything that went into that result should be available for others to use.
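To make the small-sample pitfall (#6) concrete, here is a minimal sketch (my own illustration, not from the guide) of how easily two completely independent random series can appear strongly correlated when only a handful of samples is available, say, a study built on five storm events. The variable names and the event counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def spurious_rate(n_samples, n_trials=2000, threshold=0.8):
    """Fraction of trials in which two *independent* random series
    show a linear correlation coefficient beyond +/- threshold."""
    count = 0
    for _ in range(n_trials):
        x = rng.standard_normal(n_samples)  # e.g., some solar wind driver
        y = rng.standard_normal(n_samples)  # e.g., some unrelated response
        if abs(np.corrcoef(x, y)[0, 1]) > threshold:
            count += 1
    return count / n_trials

small = spurious_rate(5)    # a handful of events
large = spurious_rate(100)  # a larger event list
```

With only five points per trial, roughly one in ten pairs of unrelated series exceeds |r| = 0.8 purely by chance; with a hundred points, essentially none do. A “strong correlation” from a tiny event list is weak evidence on its own.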