Recently I have given a lot of thought to the concepts of testing vs. checking and scripted vs. exploratory testing. I felt that I had grasped each of them reasonably well (though not fully) on its own, but I did not quite manage to relate them to each other. What, for instance, is the difference between pure scripted testing and pure checking? Can checking be part of pure exploratory testing? After Twitter discussions with James Bach and Michael Bolton, and after reading and commenting on their blogs, I am now starting to put the puzzle together. Some of the material below I have made up myself, for my own understanding; if something in it is missing or incorrect, blame me, not them. I will gladly receive any feedback that can improve the models and my own understanding in this area. Anyway, since I am a visual learner I thrive on models and charts; they make my understanding of new concepts so much easier. So here are the models I have made of the different scenarios involving these concepts.
I am starting off with a picture modeling a tester testing a system without any additional tool support (what some old-schoolers would call manual testing).
In the bottom left corner we have a script, which is basically something controlling the test from the outside, i.e. something that has been predefined before the test activity started. Everything within the elliptical figure describes what happens “inside” the tester performing the tests. The idea is that the tester takes some input from the script and some input from test ideas generated during testing; combined, these generate the actions performed on the system. In the process of generating actions, expectations will consciously or subconsciously affect the observation filter. That is, when performing a certain action, the tester will focus more on some specific output and less on other output. Some observations are filtered in for human checking: an algorithmic decision rule is applied to them, and a resulting pass/fail is generated. Other observations go directly to the fuzzy evaluation cloud, where the brain does its magic, processing all the observations along with the results of the checks. (To keep the picture from being completely cluttered, I have intentionally left out the oracles, which serve as inputs to the evaluation and decision rule clouds.) After the evaluation, some kind of result is stored in the tester's memory bank, and new test ideas arise, contributing to the testing feedback loop.
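To make the checking path of the model concrete, here is a minimal sketch in Python. It is only a toy: the field names, values, and function names are my own inventions for illustration, not part of the model itself. It shows the part of the flow where a filtered-in observation has an algorithmic decision rule applied to it, producing a pass/fail.

```python
def observation_filter(observations, focus):
    # Expectations shape the filter: only observations the tester is
    # currently focused on are passed along for checking.
    return [o for o in observations if o["field"] in focus]

def decision_rule(observation, expected):
    # An algorithmic decision rule: compare the observed value to the
    # expected value and emit a binary pass/fail verdict.
    ok = observation["value"] == expected[observation["field"]]
    return "pass" if ok else "fail"

# Hypothetical observations made while performing an action on the system.
observations = [
    {"field": "status_code", "value": 200},
    {"field": "body_color", "value": "red"},  # out of focus, not checked
]
expected = {"status_code": 200}

checked = observation_filter(observations, focus={"status_code"})
results = [decision_rule(o, expected) for o in checked]
print(results)  # prints ['pass']
```

Note that everything the filter drops (here, `body_color`) never reaches the decision rule at all; in the human model it would instead flow to the fuzzy evaluation cloud.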
So what would the picture look like if we imagined pure exploratory testing, without any predefined test instructions or test ideas? In my mind, it would look like the picture below.
How will the pictures look when the tester is focused on human checking only? Answer: they will look the same. All the paths will still be there, but the amount of information flowing through them may differ. When trying to perform human checking, the tester will modify the filter to try to filter in the specific observations to which he or she can apply an algorithmic decision rule. Hence there will be less focus on the other observations. But since the tester is a human being, the filter will not be perfect, and some observations not belonging to the check will slip through. Neither the application of the decision rule nor the evaluation procedure will be perfect either, so results can be miscalculated, forgotten, or misinterpreted. On the flip side, there is still a chance of discovering unexpected issues, thanks to this very imperfection of the human mind.
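The imperfect human filter can be sketched the same way. Again this is only a toy under my own assumptions: the leak probability and the field names are invented, and randomness stands in for human fallibility. The focused observation always reaches the check, while out-of-focus observations sometimes slip through to the fuzzy evaluation, which is exactly where unexpected issues can get noticed.

```python
import random

def human_filter(observations, focus, leak_probability=0.3):
    """Split observations into those routed to the check and those that
    slip through to fuzzy evaluation despite being out of focus."""
    to_check, slipped_through = [], []
    for field, value in observations.items():
        if field in focus:
            to_check.append((field, value))
        elif random.random() < leak_probability:
            # The filter is imperfect: an unrelated observation leaks
            # through and may be noticed by the fuzzy evaluation.
            slipped_through.append((field, value))
    return to_check, slipped_through

observations = {
    "status_code": 200,        # the field the check is focused on
    "response_time_ms": 950,   # might slip through and be noticed
    "layout": "misaligned",    # might slip through and be noticed
}

to_check, slipped_through = human_filter(observations, focus={"status_code"})
print(to_check)         # the focused observation always reaches the check
print(slipped_through)  # varies from run to run: the filter is not perfect
```

A perfect machine filter would correspond to `leak_probability=0`, which is one way to picture the difference between human checking and the automated checking I will get to in the next post.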
By the same reasoning that human checking can never be the same as checking performed by a computer, there can be no pure scripted testing performed by a human either. The feedback loop in the human brain, generating new test ideas and reacting to observations, cannot be completely removed. There will always be deviations in how the tester interprets the instructions in the script and in how those instructions are transformed into actions.
So that was my take on scripted/exploratory testing and checking without tool support. In the next blog post I will try to find out what the models would look like with tool-aided testing and pure automated checking. Stay tuned!