Let's Test 2014 – day 3

The third day of the conference was also the last. The day did not start with a keynote speech, since there would be a closing keynote at the end of the day. Instead, I jumped into a session with Fiona Charles (@FionaCCharles) named “We can't know everything – Promoting healthy uncertainty on software projects”.


The session dealt with, as the title implied, uncertainty. How can we relate to it? Should we embrace it? How can we communicate it? In an attempt to answer these interesting and important questions, we were divided into small groups and given exercises to think about and discuss. Some highlights from the discussions, off the top of my head:

  • Estimates were something that troubled a lot of people, due to the uncertain nature of software projects. Some in the group had recently worked with a no-estimates approach, where you just prioritize and do things one at a time. The team is trusted not to overwork or release with bad quality. This approach appealed to me a lot and I would love to not have to spend my time providing uncertain estimates, although I realize (and so did the group) that it is not feasible in all contexts.
  • Good ways to deal with uncertainty were (among others): communicate with others to ventilate and/or get help to mitigate; draw a picture with all your dependencies (servers, deliveries from others, etc.) to highlight how testing is affected; accept that humans are not perfect and embrace that uncertainty; learn how your brain messes with you (analogy: when watching a horror movie, the scariest part is when you have not yet seen the monster. Once the monster is revealed, it transforms from a scary uncertainty into a known risk).
  • The model of the five orders of ignorance, presented by Eddy Bruin (@eddybruin). It helped me concretize thoughts I've been having, like “I don't know what I don't know yet”.

All in all, a good workshop, and as so often before, most of the content was created by the participants during the session.

After lunch I chose to attend “Quality leader – the changing role of the software tester” with Anna Royzman (@QA_nna).


The background of this session was that Anna's company went “full agile”; she therefore lost her position as QA manager and had to redefine her own role in the company. She presented the skill set needed by this new role, much broader than that needed from a classic software tester (if I may use that term). This is because, in an agile context, more of the testing is performed by the developers, which creates a need for someone who can provide a quality perspective to guide and facilitate the people performing the tests in every team. So this new role needs both to be able to test himself/herself and to inspire, coach and teach others to do it. Quite a challenging task!

Anna also provided an interesting comparison between the agile manifesto and the principles of context-driven testing, and found a difference between the agile “working software” and the CDT “the product is a solution”. This difference explains why a tester with the CDT perspective is needed in an agile team: to strive for a satisfied customer, not only working software. This vision of the new roles of software testing reminded me of the article by James Bach about test jumpers, with the obvious difference that a test jumper jumps between teams, whereas Anna's quality lead would work within one team. Anna pointed out another difference: the test jumper is still more test-focused, while the quality lead would focus more on quality overall. I found this session interesting, but I think it is hard to tell what the tester of today will transform into. It is certainly interesting to think about different scenarios, though, and I think Anna made a lot of good points. My feeling is still that we have not yet found the right approach for the testing and quality skill set in the agile context.

The closing keynote was held by Jon Bach on a rare visit abroad (I feel lucky to have caught him while in Europe). He talked about best practices, something that is more or less a swear word in the context-driven community, since it is always possible to think of a context where a best practice is not the best practice (this is a fun exercise, by the way: think of a way of working that you like, then find a context in which it would be a bad idea).


Jon had an interesting approach: his keynote was built on a workshop he had held earlier in the week, where the participants made a list of common best practices and chose their top ten favourites. Jon then showed us numerous examples where these best practices would not be good practice at all. He also told us that he had tried hard to find a general best practice that could not be refuted, but he had not succeeded yet. In other words: there are no best practices that are valid in all contexts. After the presentation there was an open discussion where:

  • the word “practices” was defined
  • it was concluded that the term “best practices” does not exist in every language, but the phenomenon does
  • it was noted that CDT is not based on best practices but on personal values such as integrity
  • not even in tic-tac-toe is there a best practice to play a certain way (sometimes you want to lose)

Jon wrapped up the keynote by identifying the tricky part of delivering an artifact to a community which refutes best practices: that artifact may be misinterpreted as a best practice in itself.

The last thing that happened was that we got to write a letter to ourselves, reminding us what we had learned during the conference and what we promised ourselves to change when we got back to work (the letters will be delivered three months from now). I loved this idea, since it is way too common to go to a conference, get inspired, and then get sucked back into reality at work, forgetting about all the things you wanted to change.

That was the summary of the last day of Let's Test 2014. A great conference which I hope to come back to next year. My intention is to write a retrospective blog post where I list the good parts and my ideas on how the conference can improve further. That might take a bit longer, though, since my family is calling for attention after my three-day absence. Stay tuned 🙂


Let's Test 2014 – day 2

Day two started off substantially colder than day one. See the picture below for a comparison with the previous blog post.


That was just the outside temperature, though. Inside, the heat was on from the very first minute as the second day kicked off with a dance performance by the Let's Test crew. Impressive moves all over and a nice way to warm up for the keynote of the day.


The keynote was also something unparalleled. Steven Smith (@stevenMsmith1) shared the experience with all of us by making us participants. We were split into groups and given a challenge to solve together. The challenge had a few simple rules, and the goal was to score as many points as possible. The purpose of the exercise was not revealed, and I guess a few of us felt a bit skeptical about why we were doing it. But during the debriefing, more and more insights and small nuggets of wisdom popped up: some about group psychology, some about parallels to software testing, some about creativity and some about time management.


Since every group got to present its conclusions on stage, the list of insights grew quite large in the end. It was interesting to see how such a small and simple exercise could yield so many insights. This was, as Steven pointed out, an indicator of how complex reality is.

The first session of the day was named “What can testing learn from social sciences” and was held by Huib Schoots (@huibschoots). The session gave a nice overview of social sciences that we testers can learn and improve from: e.g. sociology, pedagogy, philosophy, etc. It made me really inspired to learn more about every subject presented, but also a bit overwhelmed.


Not because of Huib's great presentation, but because I already have a big pile of books at home, some of which I haven't started reading and some that I've only read parts of. And now I suddenly have even more books on my list. Where to start? How deep should I dig into each subject? I got a good pointer from Kristoffer Nordström (@kristoffer_nord), though, about “Secrets of a Buccaneer-Scholar”, written by James Bach on the subject of self-learning. Sounded like a good meta-read before digging into the ever-growing book pile.

After lunch I decided to join the session on note taking held by Louise Perold. Not because I felt it was the most interesting subject, but rather because I felt this was an area in which I could really improve. Initially there were problems getting the required software (a tape emulator running an old text-based RPG) to run under Windows 8. When that was sorted, the note taking could begin. The assignment was to play the RPG and take notes along the way. The session was divided into several slots, with a new note-taking technique introduced at the beginning of each slot. At the end of each slot we shared our results and conclusions. The main takeaways from this session were some insights about note-taking techniques in general, and that I need to practice my skill in this area a lot more.


I was paired with Richard Bradshaw (@friendlytester), who made really nice sketch-type notes, far better than my own. Inspiring!

The scheduled session of the day was a group discussion (debate?) on the context-driven school of testing. Chris Blain had made some reflections on other schools of thinking and their philosophies, and wanted to discuss whether CDT is heading in the right direction and what we can do to make a bigger impact on the software testing performed today.


The discussion is not easily summarized in a short blog post like this, but the main discussion points were about marketing compared to other schools of thinking, what examples we can give of the value of CDT, whether there really is a contradiction between some of the schools (could they perhaps co-exist?) and how to introduce CDT approaches in your work. An interesting discussion to listen to indeed. And I really liked the initiative from Chris to bring this up; self-reflection is essential! On a side note, I was really impressed with the facilitating skills of Paul Holland (@PaulHolland_TWN). He managed to keep an intense discussion on track without getting lost in all the threads and subthreads created. Good job!

After dinner there was a bonus session on security testing by Bill Mathews (@bill_mathews). We got to try some basic attacks on a website with a database backend: we browsed and altered cookie information using Chrome, and also tried some basic SQL injection attacks. There was not much time to go in depth on anything, but it was nice to learn about a few of the tools used and to get some concepts explained.
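To illustrate the kind of SQL injection we tried, here is a minimal sketch in Python using an in-memory SQLite database. The table, column names and payload are my own illustrative assumptions, not the actual workshop material; the point is just the contrast between building a query with string formatting and using a parameterized query.

```python
import sqlite3

# Tiny in-memory database with one user (illustrative data only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login_vulnerable(name, password):
    # String formatting lets crafted input break out of the quoted literal.
    query = ("SELECT name FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # Parameterized query: input stays data, never becomes SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ? AND password = ?",
        (name, password)).fetchall()

payload = "' OR '1'='1"
print(login_vulnerable("alice", payload))  # row returned despite wrong password
print(login_safe("alice", payload))        # no rows: injection fails
```

The classic `' OR '1'='1` payload turns the vulnerable query's WHERE clause into something that is always true, which is roughly what we got to experience hands-on in the session.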



With my head full of thoughts to process I did not have the energy to participate in the games played during the gaming night. But it sure looked like people had a good time.

Over and out!

Let's Test 2014 – day 1

The expectations were high when arriving at Runö Conference Centre this morning for my first Let's Test experience.


I have had this conference on my radar since the first one was held in 2012, but have not had the opportunity to attend until this year.

Right when entering the auditorium for the introduction and keynote speeches, it was clear that this conference was a bit different. The room was set up with a stage and numerous round tables, as opposed to the endless rows of seats usually seen at these types of events. The conference kicked off with some rock'n'roll, making sure that those not yet awake would be paying attention from now on. This was followed by a short introduction speech by Henrik Andersson (@henkeandersson) and then a keynote by Tim Lister.


The keynote was very enjoyable: Tim told a personal story about his long career, the people he met and the lessons he learned. I liked it very much even though a lot of the references went over my head; as a (fairly young) Swede I lack some knowledge about the people being name-dropped. By the way, I have to mention the wonderful visualization of the keynote made by Zeger Van Hese (@testsidestory).

The rest of the time until dinner was dedicated to various tutorials, of which I unfortunately could only select one. My choice was the session with James Bach (@jamesmarcusbach) and Pradeep Soundararajan (@testertested) named “Review by Testing: Analyzing a specification by testing the product”. In this tutorial we got to explore different specifications and compare them to how the product actually behaved. The key here is that the specification is not perfect, and neither is the product. So how will I as a tester cope with these uncertainties? The answer: by doing what testers do, asking questions and raising issues. “I see a difference between the behaviour in the spec and the actual behaviour of the product. Is there a problem here?” For the first exercise James also showed us his impressive documentation of how he had investigated the spec and the product in question (including the whole thought process, from Skype conversations to model building and refining the results). Hopefully this documentation will be made available in some way in the future. I think we can all learn from it.


One main takeaway I had from this tutorial was about how a good specification is designed: the “why” is often much more important than the “how”. The “whys” of the spec relate to the testing of the product, whereas the quantitative requirements relate more to checking. So when we as testers ask for testable requirements, what we get is really checkable requirements (be careful what you wish for). I really liked this connection between requirements and the testing vs checking discussions. All in all this was a great tutorial with a lot of open and rewarding discussions. Not easily summarized, though, since we explored heuristics and conclusions together as the session went on.

After the tutorial it was time for dinner, where I met some nice people from Betware. An impressive number of people showed up at the lightning talks that followed.


Of the ten (?) talks, I most enjoyed Richard Bradshaw's (@friendlytester) talk about automation as a tool. I agree with his conclusion that if we talk about automation as a tool, we will see more possibilities for what we can do with it in addition to classic regression checking. I also enjoyed the talk about the social tester (apologies for forgetting who gave the talk. Edit: It was Martin Nilsson, @MartinNilsson8). It seems you can come a long way in your project using coffee and cookies.

So far Let's Test has been a great experience for me. Keep up the good work!

Comment: I have edited this post a couple of times since it was first published, due to various spelling errors. Trying to write a blog post in the middle of the night, in your second language, with your head full of beer and new testing wisdom, may have contributed to some of them.