Something I learned long ago was that performance testing starts neat but gets messy quick. Generally you spec out what you plan to test, round up your hardware, write your tests, and get ready to “collect the data”. But inevitably you find bugs, max out machines, change your mind about the best possible configuration, change your mind about what you’re doing in the first place, smoke a hard drive, or whatever.
When you’re in the thick of it, you know exactly what you’re doing and why. It’s amazing how quickly that context is lost, however. I discovered years ago that keeping a log while you test is astonishingly helpful. I take notes in many contexts and usually they are write-only: I take them purely because writing helps me remember, and in the rare case that I need to double-check something or someone asks, I can go back to them. But I have leaned heavily on every performance log I’ve ever kept. Inevitably, two weeks later you’ll be analyzing the data and find you can’t remember whether you set the -wobble flag in run 53 or not. Thank goodness you’ve got a log!
I find two kinds of logs useful: a configuration log and a journal. The configuration log just tracks the configuration (and results) of every test run. The poor man’s configuration log is a pad of paper or a spreadsheet. If you feel like shaving some yaks, it’s fun to automate the configuration log. Just build it into your test startup process to swipe all your critical configuration information (JVM startup parameters, critical config files, OS info, whatever can change). Drop it all in a timestamped directory and you’re good to go. Bonus points if you integrate the collection of results.
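A minimal sketch of that automation might look like the script below. The file names (`jvm_opts.txt`, `app.conf`, `os_info.txt`) and the `JAVA_OPTS` environment variable are hypothetical stand-ins; swap in whatever actually varies between your runs.

```shell
#!/bin/sh
# Snapshot everything that can change into a timestamped directory
# before each test run. Run this from your test startup script.
RUN_DIR="perf-logs/$(date +%Y%m%d-%H%M%S)"
mkdir -p "$RUN_DIR"

# JVM startup parameters (assumed to live in JAVA_OPTS for this sketch)
echo "${JAVA_OPTS:-<none>}" > "$RUN_DIR/jvm_opts.txt"

# Copy critical config files, if present (app.conf is a placeholder name)
[ -f app.conf ] && cp app.conf "$RUN_DIR/"

# Capture OS and kernel info
uname -a > "$RUN_DIR/os_info.txt"

echo "Configuration snapshot saved to $RUN_DIR"
```

Wire this into the front of your test harness and every run self-documents; for bonus points, have the harness copy its results files into the same `$RUN_DIR` when the run finishes.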
It’s also helpful to keep some kind of journal. When doing any exploratory or investigative performance testing, this is essential. But even when you’re following a pre-set performance testing plan, it’s useful to have somewhere to drop interesting things you noticed, plus whatever context might come in handy later to recall what happened.