Expectations were high when I arrived at the Runö Conference Centre this morning for my first Let's Test experience.
I have had this conference on my radar since the first one was held in 2012, but have not had the opportunity to attend until this year.
Right when entering the auditorium for the introduction and keynote speeches, it was clear that this conference was a bit different. The room was set up with a stage and numerous round tables, as opposed to the standard endless rows of seats usually seen at these types of events. The conference kicked off with some rock'n'roll, making sure that those not yet awake would be paying attention from now on. This was followed by a short introduction speech by Henrik Andersson (@henkeandersson) and then a keynote by Tim Lister.
The keynote was very enjoyable: Tim told a personal story about his long career, the people he met and the lessons he learned. I liked it a lot, even though many of the references went over my head. As a (fairly young) Swede I lack some knowledge about the people being namedropped. By the way, I have to mention the wonderful visualization of the keynote made by Zeger Van Hese (@testsidestory).
The rest of the time until dinner was dedicated to various tutorials, where I unfortunately could only select one. My choice was the session with James Bach (@jamesmarcusbach) and Pradeep Soundararajan (@testertested) named “Review by Testing: Analyzing a specification by testing the product”. In this tutorial we got to explore different specifications and compare them to how the product actually behaved. The key here is that the specification is not perfect, and neither is the product. So how will I as a tester cope with these uncertainties? The answer is by doing what testers do: ask questions and raise issues. “I see a difference between the behaviour in the spec and the actual behaviour of the product. Is there a problem here?” For the first exercise James also showed us his impressive documentation of how he had investigated the spec and the product in question, including the whole thought process from Skype conversations to model building and refining the results. Hopefully this documentation will be available in some form in the future. I think we all can learn from it.
One main takeaway from this tutorial was about how a good specification is designed: “why” is often much more important than “how”. The “whys” of the spec relate to testing the product, whereas the quantitative requirements relate more to checking. So when we as testers ask for testable requirements, what we get is really checkable requirements (be careful what you wish for). I really liked this connection between requirements and the testing vs checking discussions. All in all this was a great tutorial with a lot of open and rewarding discussions. It is not easily summarized, though, since we explored heuristics and conclusions together as the session went on.
After the tutorial it was time for dinner, where I met some nice people from Betware. An impressive number of people showed up for the lightning talks that followed.
Of the ten (?) talks, I enjoyed Richard Bradshaw's (@friendlytester) talk about automation as a tool the most. I agree with his conclusion that if we talk about automation as a tool, we will see more possibilities for what we can do with it beyond classic regression checking. I also enjoyed the talk about the social tester (apologies for forgetting who gave the talk. Edit: It was Martin Nilsson, @). It seems you can come a long way in your project using coffee and cookies.
So far Let's Test has been a great experience for me. Keep up the good work!
Comment: I have edited this post a couple of times since it was first published due to various spelling errors. Trying to write a blog post in the middle of the night in your second language, with your head full of beer and new testing wisdom, may have contributed to some of them.