In my previous post on models of testing and checking, I tried to model a scenario where a tester interacted with the system under test without the help of any additional tools. In this blog post I will try to model the scenario with tools included. Using tools in testing is often called automation, although some only use that term when there is no human involvement at all, such as daily build tests running independently and generating results in some kind of reporting tool. Those people would probably refer to the case of a tester using a tool as semi-automation. Anyway, the terms differ, but they all strive to express different levels of tool involvement. Eventually there will be human involvement at some point, since someone needs to look at the results generated. In my model I only consider what is happening within a test session, i.e. while the testing is ongoing.

Let's start off with the case of “full automation”, i.e. no tester involved.


As in the last post, there is a script controlling the test instructions. The script could be written in any programming language (e.g. Python, Java, etc.) or consist of keywords, depending on how the test tool operates. The test tool (probably called a framework in this context) will translate the instructions from the script and inflict actions on the system. After that, specific observations will be gathered and evaluated into a result stored for future evaluation. Specific observations is a term defined in the Test vs Checking blog post and refers to observations that can be represented by a string of bits. Note that no additional observations are made. This is a big difference between human checking and machine checking: the machine will observe only what we tell it to observe. This example is where scripted testing and checking meet; a scenario that is 100% scripted will also consist of 100% checks. There can be no testing and no exploration since no human is involved.
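As a rough sketch of this scenario, the loop below is a minimal "machine checking" framework, assuming a hypothetical system under test represented here as a plain Python function. The script names each action and the exact specific observation to compare against; anything the system does beyond that goes unobserved.

```python
def system_under_test(a, b):
    # Stand-in for the real system; everything it does besides
    # returning this value goes unobserved by the checks.
    return a + b

def run_checks(checks):
    """Inflict each scripted action, gather the specific observation,
    and evaluate it into a stored pass/fail result."""
    results = []
    for name, action, expected in checks:
        observed = action()                       # the specific observation
        results.append((name, observed == expected))
    return results

# The "script": instructions plus the expected observations.
checks = [
    ("adds positives", lambda: system_under_test(2, 3), 5),
    ("adds negatives", lambda: system_under_test(-2, -3), -5),
]

for name, passed in run_checks(checks):
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```

Note that the comparison `observed == expected` is the entire evaluation; no human judgment is applied during the session, only afterwards when someone reads the stored results.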

So what about the case where a tester uses a tool to interact with the system? The tool could be any tool helping the tester to make observations or inflict actions on the system. The model below illustrates the (theoretical) case of pure exploratory tool-supported testing.


The scenario above is not scripted at all, since we don't have a script initiating any instructions. The tester is in full control and may use observations or results from the tool, or their own observations of the system under test. Alternatively, the tester will send instructions to the tool, which in turn will inflict actions on the system. All combinations of these different inputs and outputs are plausible, and the model can be adapted by removing arrows not applicable in the current context. This scenario contains human checking, machine checking (when using observations/results from the tool) and testing (the feedback loop in the tester's mind reacting to observations and generating new test ideas).
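This interaction can be sketched too. In the illustrative snippet below (with an invented `probe` helper and a stand-in system), the tool only executes an instruction and surfaces the observation; deciding what to try next stays entirely with the tester.

```python
def system_under_test(text):
    # Stand-in for the real system under test.
    return text.strip().lower()

def probe(instruction):
    """The tool: inflict an action on the system and surface
    the specific observation back to the tester."""
    observation = system_under_test(instruction)
    print(f"input={instruction!r} -> observed={observation!r}")
    return observation

# The tester drives the session; each observation may spark the next instruction.
probe("  Hello ")
probe("HELLO\t")
probe("")  # a new idea prompted by the previous observations: what about empty input?
```

There is no expected value anywhere in the tool; the evaluation happens in the tester's head, which is exactly what distinguishes this model from the fully automated one.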

A third scenario is partly scripted testing. In this case the model looks like the previous one, but with a script added to initiate instructions on what to test.



This scenario should be quite common, since tools are frequently used when testing and there are always ideas on what to test before the test session starts (i.e. the testing is, in some sense, partly scripted).
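A minimal sketch of the partly scripted case, under the same assumptions as before (an invented stand-in system, with the tester's feedback loop simulated by a `follow_ups` function): the script supplies the initial instructions, but observations made along the way may add new instructions to the session.

```python
def system_under_test(n):
    # Stand-in for the real system under test.
    return n * n

def follow_ups(instruction, observation):
    """Simulated tester feedback loop: an observation may spark a new test idea."""
    if observation == 0:
        return [instruction - 1]  # e.g. probe just past an interesting boundary
    return []

scripted = [3, 1, 0]          # the script: instructions on what to test
queue = list(scripted)
log = []                      # stored results for later evaluation
while queue:
    instr = queue.pop(0)
    obs = system_under_test(instr)
    log.append((instr, obs))
    queue.extend(follow_ups(instr, obs))

print(log)
```

Here the session starts 100% scripted but does not stay that way: the fourth instruction exists only because of an observation made during the session, which is the exploratory part of the model.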

So that was my attempt to model testing with tools, covering the terms testing, checking, scripted testing and exploratory testing. These models, and the models in the previous post, will probably be refined the more I learn, but they work as a snapshot of my current understanding of the matter. Any feedback is welcome!