In my last blog post I tried to summarize the SWET8 conference as a whole. One thing I omitted on purpose was all the great pointers to books and other sources of knowledge that I got during the weekend. This blog post gathers them all in one place. The variety of topics really speaks to how diverse the discussions were throughout (and after) the conference. And remember, these are only the pointers I encountered myself; I expect I missed many more in the discussions I was not part of.
Disclaimer: I have compiled the items from my vague and sometimes ambiguous notes, so there is no guarantee that I always picked the right thing.
So that's it for now. If something that slipped my memory pops up, I will add it later.
Meanwhile, credit to all participants who contributed to this list: Anna Elmsjö, Joel Rydén, David Högberg, Sigge Birgisson, Fredrik Almén, Göran Bakken, Martin Jansson, Erik Brickarp, Carita Jansson Tsiantes, Mattias Gustavsson and myself.
TL;DR: Peer conferences are really great; go to one if you have the chance.
After having participated in various larger conferences such as Eurostar, Let's Test and SAST, it was now time for something different: a peer conference named SWET. This was the 8th incarnation of the conference and hopefully there will be many more. This time, eleven testers from all over the country (well, if you count Örebro as the north) met in Gränna to discuss testing over an intense weekend. Or wait, the people were not really testers, and we were not really talking about testing, or were we? The theme for the conference was “Testing that is not testing” and derived from the fact that the organisers had moved on to roles where they were no longer testing, but rather working with coaching, managing or facilitating testing. So the idea was to discuss testing-related things that are not actual test execution. The format of the SWET conference is very much based on facilitated in-depth discussions, and each discussion is kicked off with a presentation from one of the participants. Another core concept of the conference is that it is heavily experience-based, and that was reflected in the presentations, which all had clear connections to something the presenter had worked with and experienced first hand.
The first presentation was “How to understand other people's understanding” by David Högberg. David talked about how, as a new tester at a bank, he used whiteboard drawings and well-targeted questions to spark discussions that revealed over-simplifications and misunderstandings regarding the functionality of the project. He also talked about how, at another workplace, he visualised his test progress in real time on a mind map displayed at his desk. This turned out to be a powerful tool for the team's common understanding of test progress and the state of the product. The common theme of the presentation was the power of visualisation and using “ignorance as a superpower” by enabling open and fundamental questions. The discussions that followed covered, among much more, topics such as:
How to enable experienced people to ask about basic stuff they think they are already supposed to know?
When and how to ask questions about what is being, or has been, developed. Throwing risks and “have you thought about this?” at people might be received very differently depending on context and timing.
How to combine experienced and inexperienced testers in a good way to enable growth and get multiple perspectives. The same question was also applied to technical and non-technical testers.
What do we actually mean when we talk about non-technical testers?
How to visualise complex realities with mind maps, and also how to collaborate around them and keep them “alive”.
Why mind maps aren't always the best option, and why it is important to take a step back and think about the best way to visualise the model you have in mind.
The second presentation was “How I got the whole team to test” by Anna Elmsjö. Anna talked about her experience joining a couple of established teams that previously had no testers. She described the challenge of establishing the concept of testing in those teams and how she, step by step, gained their trust and made them try out new things. Some of the discussions/questions after the presentation revolved around:
Testing environments and testing in production.
Important things when scoping out and facilitating test sessions, e.g. how to keep people focused.
Will quality suffer when testing is handled by non-testers?
How automation was utilised in that context.
How to visualise and represent testing on scrum boards etc.
The importance of finding allies when promoting change.
Disaster recovery in production.
The third and last presentation was held on the second day of the conference. It was named “Flow visualisation and LEAN” and was held by Joel Rydén. Joel talked about his experience in change management as part of establishing a new cloud solution. He described how a common language was established early by identifying and defining values, principles, methods and tools. Another important part was looking at the whole chain from idea to delivery and trying to identify waste (stuff not adding value). Some interesting discussions and takeaways sparked by that presentation:
How quality might not be the best word to use as a value; it is often too broad and relative a concept. It might be a better idea to focus on a specific quality characteristic instead.
Focusing on removing waiting time is important to increase speed. One common mistake is to measure the WIP limit over just one column instead of over all columns.
Once again, how powerful it is to visualise things. In his context, Joel had used Jira to display a real-time picture of where in the process different tasks were. This enabled the team to monitor and tweak as time went on.
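The WIP-measurement mistake mentioned above can be illustrated with a tiny sketch; the board columns and counts here are invented for illustration, not from Joel's actual board:

```python
# Invented board columns and counts, just to illustrate the measurement trap.
# Counting only one column hides the work (and waiting time) parked elsewhere.
board = {
    "To do": 4,
    "In progress": 2,         # what a one-column WIP count sees
    "Waiting for review": 5,  # parked, but still in progress from a flow view
    "Testing": 3,
}
naive_wip = board["In progress"]
true_wip = sum(count for column, count in board.items() if column != "To do")
# naive_wip is 2, true_wip is 10: the waiting time hides in the other columns.
```

Limiting only the naive count lets queues build up in review and testing, which is exactly the waiting time Joel wanted to remove.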
In addition to the main presentations there was also a round of lightning talks, each followed by a very brief, time-boxed discussion:
Martin Jansson talked about the quality coach role and the risks that he perceived might come with it when testers take a step back from testing.
Sigge Birgisson talked about “Safety instructions for developers”: a story about how he made the testing expectations on developers explicit by providing them with heuristics on what to test.
Erik Brickarp followed up on Sigge's talk with his own experiences around the same topic.
David Högberg explored why failing is good and whether you can learn anything from success.
Carita Jansson Tsiantes raised the question of the difference between testing and quality assurance, and what the words actually mean. A topic that has been widely discussed before, but one that got some really good and nuanced opinions in this discussion.
So that was a short (relative to the actual length) summary of the presentations and discussions held at SWET8, and this was just the scheduled conference part. There were many more interesting (and more philosophical) discussions and testing anecdotes in the evenings, and for some people, late into the night. That is all a great part of the fun of peer conferencing. I want to give a special shoutout to the organisers Erik Brickarp, Sigge Birgisson and Göran Bakken for the excellent job of setting this up. The topic could not have been more appropriate for me personally either, since I have recently found myself doing more work around testing than actual testing. I got many takeaways and things to follow up that I will take with me and nourish in my future work. I plan to create a second blog post with all the studying tips I got in the form of books, presentations and various other stuff during the weekend, so stay tuned.
For the future, I can only hope to get the opportunity to participate in more of these kinds of conferences, because they rock.
The first Swedish edition of SIGIST (Specialist Group in Software Testing) was held in Stockholm on the 23rd of November. The organization has existed since 1989 and has previously organized conferences around the world, e.g. in the UK and in Israel. Have a look at their website if you are interested in learning more.
The conference was run over a day and had five different speakers in the line-up. In between the speaker talks, there were short organized discussions called “corner talks”. Preselected topics were distributed to the four corners of the room. All conference participants walked to the corner with the topic they wanted to discuss and had a ten-minute discussion of the topic. The takeaways from each corner discussion were then presented to the other groups. I applaud this effort to involve all participants in the discussion, but I found it hard to get much value from the discussions due to the time constraints. I would suggest that next time the subjects be more specific. Maybe all corners could discuss the same specific question/problem, and the takeaways from each group could then be compared.
Now on to the speakers. First out was Jörgen Damberg. Jörgen stated upfront that his talk might be confusing to some since it did not really have a red thread; rather, it was a collection of things Jörgen had learned during his career. The topics varied from what quality is, to good (not best) practices when working with automation or integration, to working efficiently with other roles. Some takeaways:
What is quality?
-Quality is happiness
-Quality is subjective
-Quality is context sensitive (e.g. feelings for the product could differ between morning and afternoon of the same day, depending on the mood).
The pool-ball vs floorball metaphor: the requirements for a product are like a floorball, an empty shell with holes in it. Our mission as testers is to make sure that the delivered product is more like a pool ball, i.e. a solid object without holes.
The “tres amigos” approach can be useful, i.e. bringing testers, developers and requirements people together to get a collective grip on quality.
What to automate: test data preparation, environment configuration and test execution. Other testing activities need a human brain.
When testing complex systems, try to de-couple and test things separately, then integrate at the end.
The next speaker in line was Rikard Edgren. His talk had the title “Rikard's super-good testing practices” and covered things he has found extra useful in his testing career. Rikard is a great speaker with lots of useful knowledge, and while I have read a lot from him in the past, I still had some great takeaways from his talk. Some of the things he brought up:
Earning respect by finding valuable information (a.k.a. finding cool bugs).
To find useful information: ask the customer what quality aspects/functional areas he/she finds important. Use the customer's words when talking about and reporting on them later. Even if something is not reportable because it is too vague (e.g. a “charisma bug”), you can still talk about it at the coffee machine.
Information objectives as testing drivers
Find out how often to report, how to report and what to report. Testing is no better than the quality of the reporting (loosely quoted).
The SFDIPOT mnemonic is great for generating test ideas and reducing the risk of overlooking some important area/aspect.
Recycle and add to others' models. Share your own for common understanding.
When describing your testing, the essence often suffices, e.g.:
-One-liner test ideas
-One-liner test reports
A good tester is often lucky (serendipity). But it's not purely luck, rather:
-Good observation skills
-Rich test data
-Variation in running tests
After a good lunch it was time for the late replacement, Samantha Peuriere. She summarized what she had learned from working as a quality coach (my own wording) in an agile environment. The talk contained a lot of great pointers, especially interesting for me since I am transitioning to an agile environment myself in the near future. Key points:
-Invite yourself to meetings if you are not invited (e.g. early design meetings etc)
-Open up test environment, e.g. for customers if possible
-Test with real users in sprint.
-More releases reduce risk due to fewer changes in each release. Since the next release is not far away, you can also monitor in production and fix issues in the next release.
-Remove waste (don´t repeat tests)
-Moving things out to production without passing QA is a way to remove the quality gatekeeper. This trick can be used for low-risk changes if you need to establish a culture of “quality is owned by the whole team”.
-Put strategies etc on the wall
-When in doubt, whiteboard it to resolve ambiguities.
Quality is owned by the whole team
-Review test plans as a team
-Follow up on bugs found late (learn from mistakes)
-Celebrate both mistakes and successes (removes the blame culture)
-Make quality fun (mob testing sessions, provide food :-))
I'm here to make you look good
-Rename “defects” or “bugs” to “feedback” or “rework”. Less negative wording has a big impact on conversations and makes people less defensive.
Don't get blocked
-Get help from the community
-There is always a way forward
The next speaker in line was Andy Redwood who talked about his experiences as a test manager for a large bank. Some takeaways:
Some things are quick fixes, but some things take many years and iterations.
Flexibility is key; everything does not need to go through the same test phases. Some things can even be picked up in production if they fail.
As a tester, attend some training in facilitation. Learn to gather a group of people at a whiteboard, extract information from them, and then package it in a presentable format.
One cool thing they had implemented was an automated process that routed problems found in production to the correct test manager, enabling continuous learning and improvement.
The last speaker of the day was James Bach. James was as always entertaining, with a strong and well thought through message. Even though I had heard or read most of the content in some way before, James always provides some twists or interesting side stories to learn from. Some of the things brought up:
Testing is not test cases. Talk about performing test activities instead of test cases.
There are many layers to testing that affect the performance, e.g. tester temperament, current role in the project, test design etc.
Tacit vs explicit knowledge is important to understand. The testing activity cannot be encoded, i.e. described fully in written language.
Putting words on what you do is important, both to make others understand and respect your work and to understand it yourself.
After James' talk there was a panel discussion about the future of testing. The panel, consisting of Rikard Edgren, James Bach and Andy Redwood, agreed that testing 10 years from now will be roughly the same as today. James' vision was that testing will have a higher status and that it will be a profession people enter later in their careers, for example after having worked as a developer. An interesting local counterpoint to that idea is the specialized testing education programmes we have in Sweden, which will likely draw more young people into the profession. The increasing amount of testing services offered was also discussed. The panel concluded that we will see more of that, but that there are challenges in having parts of your testing outsourced, far from the development teams. The topic of artificial intelligence was also covered; the belief here was that it will not have a big impact in the near future, but that we will see AI-assisted testing in some areas (e.g. test design suggestions).
To summarize, I really enjoyed this first edition of SIGIST Sweden and I would gladly attend the conference again in the future. The low price combined with the one-day format provides high value at a low cost. Hats off to the organizers for all the work they put in!
So I recently attended a brilliant workshop on the Cynefin framework, held by Duncan Nisbet at Let's Test. That workshop got me thinking about the control theory I studied at university, which until this day I have never really used. But when I started to think about software development as a control theory problem, I could connect the dots between an unstable control system and the Cynefin domain of chaos. Control theory is a collection of general theories about controlling systems: it could be a thermostat controlling the indoor temperature, a Segway keeping its balance, adjusting the flow of packets in a computer network, etc. The main principle is shown in the picture below.
The reference signal would, in the software development world, be the requirements leading to some kind of change to the system. The controller would be the developers making changes to the system based on the requirements. The system is the system under development, and finally, the sensor is our beloved testers observing the system and sending feedback to development. The feedback received is compared to the reference signal (the difference between actual behavior and desired behavior) and adjustments are made (bugs are fixed). You would probably like your development process to make the system get closer and closer to the desired behavior over time. But sometimes the behavior oscillates (when bugs are fixed, just as many new ones are introduced) or worse (fixes and changes steadily decrease the quality of the system). Getting closer and closer to the reference signal is referred to as stability in the control theory world. There are different kinds of stability; one of them is: “A linear system is called bounded-input bounded-output (BIBO) stable if its output will stay bounded for any bounded input.” The inverted pendulum in this video is a good example of an unstable system:
A limited poke at the system makes it flip out of its normal boundaries. In the Cynefin model, the act of “flipping out” would correspond to the domain of chaos. This is an unordered state with a lack of control, a state you don't want to stay in too long and one that might be hard to get out of (the inverted pendulum had to be reset manually). In software development, this would correspond to some requirement initiating a change in the system which introduces one or several severe problems that put the organization into chaos. So how can this be avoided? Control theory tells us that systems/feedback loops with these properties are hard to control and tend to be unstable:
Long time delays
Lots of noise in the feedback signal
Low observability of the system to be controlled
So if we in our organization can achieve short time delays within our feedback loop (test often, test early), little noise (direct and clear communication) and high observability (good testability), the chance of keeping the process stable, and thus avoiding a drift into chaos, should be significantly better. No revolutionary ideas here; this is what is generally preached in software development and testing. But it is rather nice to see that our claims have support from the field of control theory.
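The delay point can be made concrete with a toy simulation. This is a minimal sketch of my own (the gain, delay and step counts are arbitrary illustrative choices, not from any real project): a simple proportional controller chasing a fixed reference, where the controller only sees measurements that are a few steps old.

```python
# Toy discrete feedback loop: a controller nudges the system toward a
# reference value, but only sees measurements that are `delay` steps old.
# All parameter values are arbitrary, chosen only to illustrate the effect.

def simulate(delay, gain=0.9, steps=30, reference=1.0):
    state = 0.0
    history = [state] * (delay + 1)  # past measurements ("test reports")
    for _ in range(steps):
        measured = history[-(delay + 1)]  # stale feedback from the sensor
        error = reference - measured      # desired minus observed behavior
        state += gain * error             # the "fix" applied to the system
        history.append(state)
    return history

stable = simulate(delay=0)    # settles close to the reference
unstable = simulate(delay=3)  # same gain, long delay: oscillates and diverges
```

With no delay, the error shrinks at every step and the state settles near the reference; with a three-step delay, the very same controller keeps "fixing" problems it has already fixed, overshoots, and oscillates with growing amplitude, which is the software equivalent of fixes introducing as many new bugs as they remove.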
After two really intense days of Let's Test it was time to get out of bed and muster up energy for the last day. The first session of the day was Baldvin Gislason Bern talking about his struggle to “Find purpose in automation”. Baldvin is a former colleague of mine so I might be a bit biased, but I thought he delivered a great presentation. The theme revolved around how he got assigned to different automation tasks in his organization and his struggle to understand why that automation was important, i.e. what purpose it served. I could not agree more about the importance of finding out why to automate. All too often, automation efforts seem to start with “we need to automate” instead of “how can we test better?”. Baldvin also managed to provoke a little by bashing the test automation pyramid and the Agile movement, and by telling us how he told his automation team to throw away 10% of their checks. These deliberate provocations sparked a great discussion after the presentation, and it felt like everybody got something to think about from this session.
What made me reevaluate my thoughts the most was the critique of the automation pyramid. I have always found it a useful model for trying to “push tests down the stack”. I like the idea of having a few high-level checks to find problems with how the different parts of the system interact, while testing the bigger set of variations on a lower level to reduce execution time, shorten the feedback loop and ease debugging. But I can agree with Baldvin's point that it cannot be applied in all contexts and that, in some contexts, comparing unit checks with higher-level checks is apples and pears (my rephrasing). I guess it comes down to what checks you have control over in your context. If you work in a context where the development team is separate from the test team and the checks on different levels are divided between development and test, it is much harder to keep an overall strategy for how to distribute the checks. So I feel that the pyramid is valuable for illustrating an important principle, but that I and many others might have been throwing it around too easily, without providing the right context for it.
I have often felt that it is really important to understand the roles of the people around you in your organization. Having only worked as a tester, I have not had the luxury of gaining that knowledge through first-hand experience. That is why Scott Barber's session “Experiencing product owners” appealed to me a lot. Scott, known as the “performance tester guy”, had recently worked as a product owner and wanted to share his experience from that. I was surprised by how much the product owner is involved in besides prioritizing the backlog for the development team and communicating with customers. It seemed like he needed to be everywhere, discussing revenue, packaging, support issues, marketing, etc. If the normal product owner role has only half the responsibilities that Scott's role had, it is still baffling that they have any time for the development team at all.
The main purpose of the workshop was, however, to understand the perspective of the product owner so that we as testers can learn how to provide useful information to her/him. This was achieved through a couple of simulations where we got to play different roles on a fictive team at different stages of a project (just started, close to release, etc.). We made stuff up about our progress and Scott told us how he would have replied. As an extra dimension, Scott's answers at the beginning of the workshop simulated how he acted when he had just started as a product owner, while at the end of the workshop he acted as he did after gaining more experience. I liked this workshop a lot since it did not only bring insight into the role of the product owner; it also provided good tips on how to handle status meetings, distributed teams etc., all in Scott's very entertaining way. Favourite quote of the session:
“Done doesn't mean done, and I'm not talking about the whole 'done' versus 'donedone' thing” – Scott Barber, on when things are really done for a product owner
Now the end was near; only the final keynote remained before it was time to split up and go back to reality. The keynote by Tumo Untinen told the story of the finding and publishing of the famous Heartbleed bug. It was interesting to hear how it was exposed and how the release to the general public was handled. An interesting subject, but I feel the keynote would have been better if some technical details had been left out.
As always (I get to say that now, since this was my second Let's Test) it was sad that the conference was over. Unfortunately I had to leave immediately, so no time for hugging and goodbyes this time. Thank you, all you awesome peers who enriched my conference experience, hope to see you next year if not sooner!
The second day of Let's Test kicked off with an hour of “Utilising automation tools in a CDT environment” with Guy Mason.
He talked about how automation is much more than regression checks and how it can be used to assist testers in different ways. Some examples were: automation of workflows, data creation and performance testing. It is indeed an important aspect of automation that is often forgotten in the pursuit of the ultimate regression test suite. I had thought a lot about this topic myself recently, and Guy's presentation helped strengthen my beliefs on it.
After the short morning session it was time for a full-day workshop with Michael Bolton (@michaelbolton) and Laurent Bossavit (@Morendil) named “Defense against the dark arts”. Having quite recently read Laurent's book “The Leprechauns of Software Engineering”, I was looking forward to digging deeper into questioning the various claims that get thrown around a lot in our business.
The session started with some background info followed by a short exercise where we were instructed to put numbers on our gut feelings about different claims, such as “Spinach is a good iron source”, “Some developers are 10 times more effective than others” or “Hurricanes with female names are deadlier” (these are not the exact phrasings of the claims; I am citing from the top of my head). We also noted what made us react in certain ways to the claims. Maybe a number used was suspiciously precise, or it sounded too much like a sweeping generalization to be true for all contexts. After a discussion around our notes we started digging into one claim chosen by the group at each table. Our group chose a claim stating something along the lines of: “Three quarters of the DoD's budget in 1995 was wasted on failed waterfall projects” (yet again not an exact citation). It turned out to be really hard for us to find the original claim, despite all the collected googling skill in the group. Certain claims are easily twisted into new forms and meanings, and it can be really interesting to follow a claim through its citation history to see how it has been transformed. After the exercise, the groups tried to formulate a thought process for how to investigate and evaluate claims. All groups produced quite similar results, with the magical word “context” appearing everywhere. This was the result from our group:
We continued by getting some pointers and tricks from Laurent and Michael on how to find potential problems with articles. For instance, searching for exact sentences that are probably quite unique might reveal other articles that are suspiciously related to the article under investigation. Also, if finding out that one of the main references is an unpublished master's thesis doesn't set off your smoke detector, I don't know what will. Finally we got to choose a last exercise, and together with a peer tester I decided to play around with the data supposedly supporting the claim that “hurricanes with female names are deadlier”. The basic idea was that people are less scared of hurricanes with female names and would therefore be less cautious. It was fun to investigate a data set (we used Excel) in different ways to spot potential problems. When we colored the cells for hurricanes with female names, we found that there was a period from the early 50s to the late 70s when all hurricane names were female (during other time periods it was roughly 50/50). The number of people dying from hurricanes in that period was much higher than the total average, and also much higher than the average for the “female hurricanes” of later years. This suggested that the death rate could be highly affected by the time era rather than the actual name. Investigating data sets like these is definitely something I would like to be more skilled at. Book pointers, anyone?
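The era check we did in Excel could be sketched like this in Python. To be clear, the records below are invented placeholders (year, female name or not, deaths), not the study's real data; only the shape of the check matches what we did:

```python
# Sketch of the confound check: compare average deaths for female-named
# hurricanes before and after the all-female naming era ended (late 70s).
# The records are invented placeholders, not the real hurricane data set.

def mean_deaths(records):
    return sum(deaths for _, _, deaths in records) / len(records)

def era_check(records, cutoff=1979):
    """Average deaths for female-named hurricanes before/after a cutoff year."""
    early = [r for r in records if r[1] and r[0] <= cutoff]
    late = [r for r in records if r[1] and r[0] > cutoff]
    return mean_deaths(early), mean_deaths(late)

sample = [
    (1955, True, 200), (1961, True, 150), (1972, True, 120),  # all-female era
    (1985, True, 30), (1999, True, 25),                       # mixed-name era
    (1983, False, 40), (2004, False, 35),
]
early_avg, late_avg = era_check(sample)
# A large gap between early_avg and late_avg points at era, not name.
```

If the early average dwarfs the late one, the time era is a plausible confounder for the "female names are deadlier" conclusion, which is exactly what we saw in the real data.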
The main takeaway from the workshop was that the research field of software engineering is in bad shape and that we as professionals working in the field have a responsibility to try to make things better (or at least not make them worse by throwing around folklore). It also helped me strike a balance between being a critical thinker and still being somewhat open-minded to new input.
For the evening session I joined Duncan Nisbet (@DuncNisbet) in “Cynefin sensemaking surgery”. I had recently tried to understand the Cynefin framework but felt that I lacked examples of how it could be applied. Duncan had done some work of his own to, as he put it, “shoehorn software testing into Cynefin” (very successfully, in my opinion).
After a short introduction to Cynefin, we got to write down different problems we had encountered on stickies and then place the stickies in the appropriate domain of the framework. This was followed by group discussions where we told stories around our stickies, and the group helped out on whether the stickies were put in the right domain and how they could eventually be moved clockwise to a more ordered domain. These discussions were exactly what I needed to get a better grasp of Cynefin (which can feel very abstract when you first encounter it). I recommend that you check it out if you haven't already; it is very useful for making sense of your complex and sometimes chaotic surroundings in a software project, or in life in general.
The evening was rounded off in the test lab (a place I missed last year and had on my to-do list for this year). I paired up with another tester and we did some exploration of a new planning tool for complex projects. It was fun to do some hands-on testing, and we managed to observe some important problems during our short session.
The product feels far from ready in its current state, but it did show some promise for the future. It was a great thing, though, to test an actual product for someone who appreciates our services. Bonus points awarded for having the developers of the product available on instant messaging.
So it's that time of the year when Europe hosts its big song contest and also its great test conferences: Let's Test and Nordic Testing Days. I currently have the pleasure of attending the former for the second year in a row. I also decided to try to blog about it every day like I did last year, partly due to demand and partly because I like to summarize and write stuff down before I forget it. There is so much input during conferences like these, so it will help me remember more of what I've learned and experienced. The extra spelling and grammar errors that come with this way of working late in the evening will have to be considered part of the authentic experience.
The first day kicked off with the traditional AC/DC intro followed by a brief introduction and then a keynote by Ben Simo (@QualityFrog).
Ben talked about his experience investigating problems with the controversial healthcare.gov webpage and the attention it got both on social media and in traditional media. Although the start was a bit slow, the presentation really took off when he started discussing the problems he had encountered and how the feedback was received. The whole keynote was a great reminder that we as testers can make a difference, even if all problems were not fixed in this case. Ben also listed a bunch of heuristics and mnemonics he used during his testing, including both the OWASP top security threats and ethical considerations. It really displayed the range of dimensions we as testers need to keep in mind while doing what we do.
After the keynote it was time for the first workshop. I chose to attend the “Automation in testing” workshop hosted by Richard Bradshaw (@FriendlyTester). I've been working with automation quite a lot lately and was hoping to get some good pointers on automation in general, as well as a tool for communicating the challenges of automation to non-technical people. I was not disappointed. The idea of the workshop was to assemble Duplo pieces according to given requirements (a picture of the finished assembly). Initially we were split up in pairs where one person was the tester and the other one was the automation. The tester had to provide instructions to the automation, and the automation was obliged to follow these instructions literally, without any sapient reasoning of its own. I got to play the automation at first, and it turned out to be quite a difficult task to turn the brain off and blindly follow instructions. The first assembly went well though, thanks to a combination of luck (there were different shades of green, but the first piece I picked happened to be the right kind) and semi-sapient automation (I somehow knew at which tables the right pieces were). When we switched roles and the tester tried to follow my written-down instructions, things did not really work out:
I think my testing partner did a better job of simulating a computer since he was able to follow the instructions correctly and get the result above. It was also interesting to see the failures and successes at the other tables; the approaches and results were quite mixed. As the session went on we got new constructions to make and were able to introduce abstractions into our automation implementation, making it possible to assemble different products by only switching a list of colors instead of changing all the code. In between the different exercises we had brief discussions on the topics of what automation is, how we know that it does what it is supposed to, and how we can make it more reusable. Dividing the automation framework into different components was a key point of the workshop, not only to make the solution more maintainable, but also to be able to reuse different parts to help the testers test. After all, automation is so much more than automated regression checks. This was illustrated by the last constructions we did, where we were allowed to combine our human brains with automation assistance and were able to assemble constructions like these:
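The abstraction idea from the session can be sketched in code. This is a hypothetical illustration of mine, not the actual workshop material: the assembly instructions become pure data (a list of colors), while the “automation” just executes them literally, without any reasoning of its own.

```python
# Hypothetical sketch: the "automation" follows its instructions blindly,
# and the product is defined entirely by the data it is given.

def assemble(colors):
    """Stack one brick per color, in order, with no sapient reasoning."""
    stack = []
    for color in colors:
        stack.append(f"{color} brick")  # pick a brick of this color and place it
    return stack

# Swapping the list of colors yields a different product
# without touching the assembly code itself.
tower_a = assemble(["green", "red", "yellow"])
tower_b = assemble(["blue", "blue", "white", "red"])
```

The point is the separation: once the executor and the instructions are distinct components, reusing the automation for a new product means editing a list, not rewriting code.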
All in all, a great workshop. If you are attending Nordic Testing Days, you should definitely check it out.
After lunch I attended the “Bad idea, bad idea…..good idea!” workshop with Paul Holland (@PaulHolland_TWN). The workshop´s theme was brainstorming: how can we make it effective? The whole workshop was one more or less controlled experiment where the parameters group size, constraints and external stimuli were varied between the different sessions and the results were evaluated. I had great fun during this workshop. Paul is a funny guy to start with, and the different topics like “Super villain names” and “Things that have holes” made sure to keep the mood high. That was also one of the takeaways from the workshop: humour is needed to keep the energy level up, especially when you have four brainstorming sessions in a row like we had. Some other results from the workshop were:
Bad ideas can trigger good ideas if they are focused on the goal
Keep the group size small, since large groups are difficult to manage
The scribe was not able to be creative, so the most creative person in the group should probably not take notes all the time
We also discussed the impact of different personalities, and the conclusion was that it was probably quite high, although it is not an easily controlled parameter in a session like this. The idea of mixing time to think on your own with sharing ideas with each other resonated well with me, though. Also, a beer or two might be appropriate at some points 🙂
After dinner it was time for yet another session; no time is wasted at Let´s Test indeed. I attended “A tester´s walk in the park” with Ilari Henrik Aegerter (@ilarihenrik). I did not know what to expect, but I thought that at least I would get some fresh air. Inspired by the old Greek philosophers´ peripatetic school of thinking, the group took a walk in the park discussing different topics collected from the old “Tree of questions” (which turned out to be more of a bush, really).
The format was really enjoyable, and walking around discussing definitely has its advantages: a natural flow between different conversations and groups, more comfortable silences, and it is easier to listen to what a person is actually saying since you don´t have to look them in the eyes all the time (the last one is especially true for introverts, I would think). I really enjoyed this session due to its relaxed format. It felt like a natural part of the Let´s Test experience, with some additional setup that sparked interesting conversations. Bonus points for Ilari´s outfit and the beverage that was provided at the end for thirsty peripatetics.
The evening ended with some interesting discussions on schools of testing and exploratory testing. More of that tomorrow, please.
Let´s say you are recruiting for a position as a software tester on your team. You have two final candidates who are very similar in competence. They both have 8 years of experience within software testing; the only difference is how it is distributed:
Person A has worked 8 years within the same company, a company in the same business domain as your company.
Person B has worked for 4 different companies, approximately 2 years at each. None of the companies were in the same business domain as your company.
From a pure experience point of view, who would you choose?
For me, it would come down to what type of tester I needed for my team. If the team was already full of ideas and initiatives on how we could change our ways of working, but lacking domain knowledge, person A would be my pick. In the opposite situation, where fresh thoughts and ideas were needed, person B would get the nod.
Sounds simple enough, but it gets more complex than that. Person A has lived through the same organization for a long time and has probably experienced a lot of re-organizations, improvement initiatives and so on. This person has probably learned the hard way what worked and what did not. Person B might have been involved in the startup of many new initiatives and improvements, but might not have stayed long enough to actually find out which ideas worked and which did not. On the other hand, just because some initiatives did not work in person A´s workplace, they might work just fine in another environment, something person A might not acknowledge.
So you can argue back and forth like this forever, trying to find the optimal hire for your team. But basically, what I´m trying to say with this post is:
The number of years of experience only tells us a small part of the story. My 10 years is not the same as your 10 years.
What the person has experienced, and how the person handled it, is much more important than any quantitative measure of experience.
Context matters, as usual. What does your team need right now?
The State of Testing survey is back. I liked the idea of the first survey when it hit my Twitter feed a year ago: it offered a chance to provide some kind of report on the current state of the testing business. This year, I look forward to the results even more since there is something to compare with (i.e. last year´s survey). It is always hard to judge the statistical significance of these kinds of surveys since the samples are not chosen at random. However, hopefully participation will increase the more years this keeps going, and we will be able to draw more certain conclusions from it. Also, it will be interesting to monitor the trends over time as our business changes. So, why not take part and make the survey a little bit better than it would have been without your participation?
Disclaimer: This is not a suggestion of a best practice. It´s a heuristic that may be useful in the right context.
Striking the balance between different levels of checks can be tricky when working with automated regression checks. At the lowest level we have the unit checks, which check small portions of the code; at the higher levels we have the system checks and system integration checks, which check big pieces of the whole system at once. It might be tempting to always go for the higher-level checks since they “check that the whole system works”. However, these kinds of checks are often brittle and require a lot of maintenance. Unit checks, on the other hand, are small, lightweight and fairly easy to maintain. They usually don´t require that complex a test environment either. So how can I strike a good balance between different levels of checks? By letting those small and cheap unit checks exercise most of the variations, and creating a few high-level checks that exercise the main flows of the system. In this way you won´t get overwhelmed maintaining all your precious checks. There is an existing model called the test automation pyramid that suggests exactly this. It comes in different flavours with different names for the layers, but the general idea is the same. Read more about it here.
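To make the balance concrete, here is a small sketch of my own (the function and its rules are invented for illustration, not taken from any real system): cheap unit checks cover all the input variations, while a single high-level check would be reserved for the main flow.

```python
# Hypothetical example: a toy discount calculation under test.

def discount(price, customer_type):
    """Return the price after the customer type´s discount (assumed rules)."""
    rates = {"regular": 0.0, "member": 0.10, "vip": 0.20}
    return round(price * (1 - rates[customer_type]), 2)

# Many small, fast unit checks exercise the variations:
assert discount(100, "regular") == 100.0
assert discount(100, "member") == 90.0
assert discount(100, "vip") == 80.0
assert discount(0, "vip") == 0.0

# ...while a single high-level check would instead drive the main flow
# end to end, e.g. "place an order as a member and verify the invoiced
# amount", going through the real UI or API rather than this function.
```

The unit checks are cheap enough that adding another variation costs almost nothing; the end-to-end flow stays as one check because each such check is expensive to run and maintain.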
This model tells us something about the relative amounts of checks at the different levels, but not how often they are run. Since high-level checks often take longer to run and often require a more complex test environment, it is probably not a good idea to try to run them as often as the unit checks. Trying to do so will probably either make you run your unit checks too seldom or your high-level checks too often. How can a check be run too often, then? Well, if the checks require a complex test environment you might have to invest in parallel environments, which could be very expensive and not worth it. If you have some kind of criteria connected to the checks so that nothing can be committed until they pass, you might have created a monster of a slow development process which will frustrate everybody. Also, it takes time to investigate the results of the checks, so running them too often might leave you with a big pile of results that is hard to keep up with.
So, it might be a good idea to run the unit checks more often and the high-level checks less often, for instance during a nightly build or even over the weekend. In short, we have a heuristic that tells us that “the higher the level of a check, the less often it will be run”. If we want to illustrate this heuristic we can imagine stretching out the test automation pyramid along a time axis, creating a prism. Then imagine cutting holes in the prism, each hole illustrating a time period where a check is not run. Given the aforementioned heuristic, we will have the biggest holes at the top and no holes at the bottom. So what do we end up with? Something looking like the (delicious) chocolate bar Toblerone.
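The scheduling side of the heuristic can be sketched as a simple mapping. The level and trigger names below are my own placeholders, not prescriptions; the only point is that each level is tied to a trigger, and the higher the level, the rarer the trigger.

```python
# Hypothetical sketch of the Toblerone heuristic: higher-level checks
# are bound to rarer triggers.
SCHEDULE = {
    "unit":               "every commit",
    "system":             "nightly build",
    "system integration": "weekend run",
}

def suites_for(trigger):
    """Return which check levels run for a given trigger (assumed names)."""
    return [level for level, when in SCHEDULE.items() if when == trigger]

suites_for("every commit")  # only the cheap unit checks block a commit
suites_for("weekend run")   # the slowest, most environment-hungry checks
```

A commit only waits for the unit checks, while the expensive system integration checks get a whole weekend to run and be investigated.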
I find the Toblerone model useful when thinking about balanced automated regression checking, and I hope someone else does too. The biggest drawback is the craving for chocolate it always brings.
A place to put my testing thoughts. I'm Hannes Lindblom, a software tester based in Sweden. Twitter handle: @hanneslindblom.