Monday 8 February 2016

CEWT #2 Abstracts

The abstracts are in, and this is what we expect to be talking about at CEWT #2 this month!


Michael Ambrose - What happens after the ball drops

In this presentation, I'd like to explore what happens after the unthinkable occurs - a bug is found in live.  I'll talk about my experiences on these occasions and actions taken both to rectify the issue and to turn the negative into a positive.  I'll also broach the view that sometimes it's ok to find bugs in live.


David Baldwin - Brick wall testing

We went through a period when bugs were being found by the product owner very soon after release. This was a number of years ago, when we were much more waterfall in our approach. Our regression testing was obviously not working, so we worked hard to improve coverage. Still the bugs seemed to get through, each time being found relatively quickly by the product owner. We thought the main issue was the testers' lack of understanding of the business. The testers were given more training (in the business domain) but the bugs continued to arise. The testers shadowed the product owner and users to gain a deeper understanding. Still the bugs kept coming.

Eventually we worked out what the real problem was, and managed to get through the brick wall that was holding us back.


Claire Banks - Now you C:\ it, now you don't

I'll be talking about a time when released software contained a bug causing data to be deleted. I will explain the actions taken in the aftermath and the effects this had, from my point of view. I will also outline events leading up to the incident that I have identified as potentially paving the way for such an issue to happen.


Conrad Braam - Why testing fails

When you work for a company that is in the top 100 of many lists, you don't want to admit to having first-hand experience of any testing "hashtag" fails. Delivering a product is just as much about not rushing as it is about getting out of the door on time in the first place. So these are my two key points. First, rushed software creation, which gives us no time to actually test the real features customers wanted; in a blind rush we gleefully focus testing on all the wrong areas. Second, shambolic test planning, which means a great test suite might exist but cannot be run. The saying "fail to plan, plan to fail" comes to mind.

I'll walk you through the darkness of my failure experiences, and share some tips.


James Coombes - A selection of dubious automated tests

This talk will focus on the common mistakes I have seen made within automated tests, why they occurred, what people have done to rectify them, and how successful those rectifications were.

This will look at: 
  • Tests that really shouldn't have been automated (based upon erroneous top-down driven targets).
  • Tests that pass erroneously.
  • Tests that fail erroneously.
  • Tests that were blamed as flaky but really found bugs.
  • How badly reported automated test run results cause bugs to be missed.


Chris George - Making the testing waters flow in a stagnant pond

I've found myself in an environment where introducing change to testing and the testing process is difficult for many different reasons. The development process is entrenched and has been largely unchanged for a long time; it works. Upsetting the apple cart by introducing new approaches to testing does not go down well, and previous attempts to do this have caused seemingly irreparable damage, both with the testers and with the rest of the development team. What am I trying now? How is that going? What could I try?


Karo Stoltzenburg - When a tester's mindset may stand in the way of testing

I'd like to draw up and discuss scenarios in which the supportive tester's mindset might ultimately stand in the way of testing, and if, where and how we might want to draw the line.

As a tester you might often find yourself doing tasks that won't necessarily be described as "testing" - as you strive to support your team by every possible means to deliver valuable software to the customer. This could be by picking up tasks or roles that are vacant in your team (scrum master, meeting organiser) or by bridging gaps in the workflow by delivering information or (facilitating) implementation. Although this supportive mindset is often perceived as a specific quality of a tester, it can also have its downsides. While you're so busy doing other tasks, when do you have time to focus on and plan your core responsibility - testing? How do you ensure you are able to switch perspective after spending so much time focusing on making the happy path work?


James Thomas - Bug-free software? Go for it! 

In a deliberately provocative presentation I will ask us to consider whether we as testers can be too closed-minded in our attitudes; whether there are schools of thought or approaches that, even if we care deeply about context, we are very unlikely even to consider; and, perhaps, whether we sometimes favour our reputation over giving ourselves the chance to do the best job that we can.


Alan Wallace - Fighting the last war

I've been in the situation where some very fundamental functionality was changed. The changes resulted in some fundamental failures that were missed in testing. No one on the team, developers or testers, had seen this functionality change before, as it hadn't changed in years. None of us understood the implications and we were blind to the risks in an area that hadn't previously required any serious testing. Fortunately it didn't take too long to rectify the problems. Afterwards we overcompensated and attached far too much importance to that functionality in our testing over the next 6-12 months, even though it went back to never changing. The mistakes of the past were blinding us to the risks of the present.


Neil Younger - Delivering software has changed, but what about the act of testing the software itself?

I will be exploring the idea that, for testing to have gone wrong, it must at one point have been right. I'll expand this to look back through the history of testing to see how much the act of testing itself has changed. I will also explore the idea that testing hasn't changed much, and perhaps that is where it might have gone wrong.