STP – Call for Authors and Speakers – Contribute Now
I received an email for contributions to SoftwareTestPro.com. See their editorial guidelines for more information.
One of my team members was talking about test heuristics after attending the CAST 2011 conference. I went looking for Elisabeth Hendrickson’s excellent 2-page PDF entitled "Test Heuristics Cheat Sheet" and discovered the link I had no longer worked. A quick search revealed the current location is http://testobsessed.com/wp-content/uploads/2011/04/testheuristicscheatsheetv1.pdf.
The sheet is divided into several concise sections.
I used to keep this pinned to the wall above my desk. I should do it again. Recommended.
I recently listened to an older podcast by Pradeep Soundararajan entitled "Touring Learning & Testing". In it, Pradeep develops the analogy of how testing is very much like being a tourist. I like this analogy.
It is a quick listen. Recommended.
I have long been a proponent of having someone less familiar with a feature take the lead on testing it immediately before release. My general reasoning was that a fresh set of eyes will notice the issues that everyone else has come to accept.
Jono Bacon, in his book The Art of Community, has a great description of this phenomenon in a sidebar on page 123 entitled “The Risks of Autopilot”:
A common problem that can occur when observing how people use software is when the user knows of a particular quirk in a product and works to naturally avoid triggering the quirk. This is common with software developers, and before release, the software typically is not used in the same manner as it is by normal users after release.
Thanks Jono for such a great description of this issue.
I have written before about lessons software development professionals can learn from Atul Gawande.
Cem Kaner read an article about how doctors can use checklists to radically improve patient care and reflected on his own background in both law and testing. The result is the presentation The Value of Checklists and the Danger of Scripts: What Legal Training Suggests for Testers. I found the second half of the presentation most fascinating with the many examples of how lawyers use checklists and how this can apply to testers.
Kaner’s point about learning stands out: scripted testing does not make the person running the test a better tester, but checklists encourage the tester to think. I couldn’t agree more.
Nagios is a well-known tool among operations teams. It is used to monitor all kinds of operational parameters, from simple machine up/down monitoring to detailed data collection. However, I have rarely seen this useful tool used in test environments. Here are three ways Nagios can benefit a test team.
First, just using Nagios to monitor whether test systems are up and running can provide useful information and possible time savings to a test team. Knowing that a database server has gone down might save the entire test team time and frustration from tracking down "bugs" which are just the result of a machine outage.
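For a sense of what this takes, here is a minimal sketch of the Nagios object configuration for a single test server. It assumes the stock linux-server and generic-service templates that ship with the sample configuration; the host name and address are made up for illustration.

    define host {
        use         linux-server         ; inherit defaults from the sample template
        host_name   test-db-01           ; hypothetical test database server
        alias       Test Database Server
        address     192.168.10.50
    }

    define service {
        use                  generic-service
        host_name            test-db-01
        service_description  PING
        check_command        check_ping!100.0,20%!500.0,60%   ; warning/critical thresholds
    }

With just that in place, Nagios will alert the team when the database server stops answering, instead of each tester discovering it independently.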
Second, consider basic CPU and memory utilization monitoring of all test systems. This data can be collected and graphed with a variety of tools. I have had success using RRDTool and nagiosgraph. This toolset allows the team to see the variation of CPU utilization, memory utilization and whatever else you decide to measure over time. This view may allow the team to spot potential performance or scaling issues long before formal performance testing begins.
Finally, consider writing your own plugins for measurements unique to the system under test. Once you start doing this, you will discover all kinds of ways Nagios can help not only with monitoring the test environment but with actually testing the application. For example, I once wrote a plugin that ran a database query to verify the number of records processed in the last 15 minutes, and I set appropriate thresholds. Weeks later, I received an alert email telling me that no records had been processed in the last 15 minutes. Even though I was not testing that part of the system, I immediately knew we had a major issue with the latest build.
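To make the plugin idea concrete, here is a minimal sketch in Python. It is not the original plugin; the table name, query, path, and thresholds are hypothetical, and SQLite stands in for whatever database the system under test uses. The only real contract is the Nagios one: print a single status line and exit with 0, 1, 2, or 3.

    #!/usr/bin/env python
    # Sketch of a Nagios plugin that checks how many records were processed
    # in the last 15 minutes. Table, column, and path names are hypothetical.
    import sys
    import sqlite3  # stand-in for the real database driver

    OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3  # Nagios plugin exit codes
    WARN_THRESHOLD = 10   # warn if fewer than 10 records were processed
    CRIT_THRESHOLD = 1    # critical if no records were processed

    def main():
        try:
            conn = sqlite3.connect("/var/lib/app/records.db")
            count = conn.execute(
                "SELECT COUNT(*) FROM processed_records "
                "WHERE processed_at >= datetime('now', '-15 minutes')"
            ).fetchone()[0]
        except Exception as exc:
            # A broken check is reported as UNKNOWN rather than CRITICAL.
            print("RECORDS UNKNOWN - %s" % exc)
            return UNKNOWN

        if count < CRIT_THRESHOLD:
            print("RECORDS CRITICAL - %d records processed in last 15 minutes" % count)
            return CRITICAL
        if count < WARN_THRESHOLD:
            print("RECORDS WARNING - %d records processed in last 15 minutes" % count)
            return WARNING
        print("RECORDS OK - %d records processed in last 15 minutes" % count)
        return OK

    if __name__ == "__main__":
        sys.exit(main())

Once a check like this is registered as a command and attached to a service, a zero-record window raises an alert even when nobody happens to be testing that part of the system.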
I have written about accidental correctness before. I recently released a fix for an item that was "accidentally correct" for nearly a year.
Early last year, we released a new feature which used Lucene to index the entries. As part of the UI, the entries were displayed – newest entries first. Things worked great. Until the new year.
It turns out we were sorting not on the date but on the date string stored in Lucene, which was in MM/DD/YYYY format. Entries starting with 12 sorted ahead of those starting with 11, and so on, so everything appeared correct: the newest entries were at the top. Then January came. Suddenly, the newest entries were at the bottom of the list. Sigh.
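Here is a small sketch of the failure mode, independent of Lucene, using made-up dates. Sorting the raw MM/DD/YYYY strings only looks correct until the data crosses a year boundary; sorting on parsed dates stays correct.

    from datetime import datetime

    entries = ["11/15/2010", "12/20/2010", "01/05/2011"]

    # Descending sort on the raw strings: "12/..." sorts ahead of "01/...",
    # so the newest entry drops to the bottom as soon as January arrives.
    print(sorted(entries, reverse=True))
    # ['12/20/2010', '11/15/2010', '01/05/2011']

    # Descending sort on the parsed dates keeps the newest entry on top
    # across the year boundary.
    print(sorted(entries, key=lambda s: datetime.strptime(s, "%m/%d/%Y"), reverse=True))
    # ['01/05/2011', '12/20/2010', '11/15/2010']

Storing the date in a sortable form (for example YYYY-MM-DD, or a numeric timestamp) avoids the problem entirely.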
Lesson re-learned: make sure your test dates span years.
A friend sent me a link to this graphic. Not sure who to attribute it to. The image is hosted on http://media.oldben.com.ar.
In a prior post, I described an example of a side effect of a feature that our users came to rely on as a feature in its own right.
While reading Raymond Chen’s The Old New Thing: Practical Development Throughout the Evolution of Windows, I came across what I consider the mother of such side effects. In an essay entitled “The hunt for a faster syscall trap”, Raymond describes how an Intel representative was perplexed when a Windows engineer asked for the fault raised by executing an invalid instruction to be made faster. The request would seem to imply that the Windows code was buggy in some way. The story goes on to relate that executing the invalid instruction was intentional: it was being used as a way to get from user mode into kernel mode.
A version of the essay can be found online here.
In show 146 of Hanselminutes, Scott interviews agile coach Scott Bellware.
It is an interesting conversation about whether test driven development is a misnomer, the misuse of the word testability, test smells, etc. A valuable conversation for testers to consider when discussing ways to improve the quality of a product. (Especially if someone asserts that additional testing is not required since "we use test driven development".)