After reading Andreas Zeller's book "Why Programs Fail", I've been using the scientific debugging method while tracing defects in programs.
As I understand it, if you can't fix a bug intuitively in 10 minutes, apply scientific debugging:
- write everything down, keep all generated test files (or, more to the point, the exact commands used to create them)
- follow a cycle of making observations, formulating a hypothesis, and devising an experiment that should either confirm or reject the hypothesis. If it's rejected, you have to formulate a new hypothesis. If it's confirmed, it either leads to a diagnosis and fix, or the hypothesis must be refined for a new cycle of tests.
My first attempt was while tracing the causes of timing errors in a buggy FPU, documenting the (many!) steps of the above cycle until I found the cause of the bad floating-point results that were making a test suite fail: if the result of one FPU instruction happened to be used by a following instruction exactly N clock cycles later, and the intervening instructions had the right FPU-result-pipeline delays, then the result was picked up as garbage.
I'm pretty sure I'd never have got as far as I did without scientific debugging.
What I'd like is to integrate the scientific debugging method into current bug trackers. Where the method is appropriate (variances from desired behaviour with unclear causes), should we integrate this finer-grained documentation of a bug's resolution into the issue tracker itself? Instead of each issue carrying a linear sequence of comments, each would have any number of hypotheses, experiments and observations, with a confirmed hypothesis leading to one or more refined versions of it, and a rejected one having no children.
Two examples of that kind of bug, from sndfile-spectrogram on GitHub: "Strange florets in sndfile-spectrogram output" and "Horizontal comb of shadows in what should be smooth spectrogram output".
A few months ago, in a private two-person project without a public tracker, we created a folder in the source tree called "issues", containing text files named "1", "2", "3" and so on. Each file is added to the git tree as an issue-creation commit, together with any associated test files. The file is then modified as investigation and resolution proceed, and removed from the source tree by the commit that resolves the issue. The scheme lacks some things, such as a way to see all resolved issues, though I guess a shell script should do it.
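That "shell script" could look something like this; here it's sketched in Python for consistency with the other examples. The idea follows directly from the scheme above: resolving an issue deletes its file from `issues/`, so asking git for deletion commits under that path lists the resolved issues. The path name and output format are assumptions of this sketch.

```python
import subprocess

def resolved_issues(repo="."):
    """List resolved issues: each commit that deleted a file under
    issues/ is shown as '<short-hash> <subject>' followed by the
    deleted file name(s)."""
    out = subprocess.run(
        ["git", "log", "--diff-filter=D", "--name-only",
         "--pretty=format:%h %s", "--", "issues/"],
        cwd=repo, capture_output=True, text=True, check=True)
    return out.stdout
```

The symmetric query with `--diff-filter=A` would list issue-creation commits, i.e. when each issue was opened.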
But, hang on a minute, a source code tree's issues *should* be part of the source tree.
If you take a copy of a source tree, you should get the list of known bugs in it too.
Web-based bug trackers like GitHub's divorce the code from its issue tracker and keep the list of issues on their servers. I mean, I think web-based issue trackers are great! But can we make something similar that travels along with each copy of the code it speaks of?
And can the operations of that in-source-tree mechanism embody scientific debugging steps in their logic, dealing in Observations, Hypotheses, Predictions, Experiments, Rejections or Confirmations, and Diagnoses-Fixes? Is the SciDebug workflow diagram comprehensive enough to make it a mandatory procedure in a bug tracker? If so, it would give speed and power to our collective debugging by forcing everyone working on the project into a "scientific" way of reasoning in their work, as well as documenting the reasoning that led them to their conclusions.