Michael Ambrose - What happens after the ball drops
In this presentation, I'd like to explore what happens after the
unthinkable occurs - a bug is found in live. I'll talk about my
experiences on these occasions and actions taken both to rectify the
issue and to turn the negative into a positive. I'll also broach the
view that sometimes it's ok to find bugs in live.
David Baldwin - Brick wall testing
We went through a period when bugs were being found by the product owner very soon after release. This was a number of years ago, when we were much more waterfall in our approach. Our regression testing was obviously not working, so we worked hard to improve coverage. Still the bugs seemed to get through, each time being found relatively quickly by the product owner. We thought the main issue was the testers' lack of understanding of the business. The testers were given more training in the business domain, but the bugs continued to arise. The testers shadowed the product owner and users to gain a deeper understanding. Still the bugs kept coming. Eventually we worked out what the real problem was, and managed to get through the brick wall that was holding us back.
Claire Banks - Now you C:\ it, now you don't
I'll be talking about a time when released software contained a bug causing data to be deleted. I will explain the actions taken in the aftermath and the effects this had, from my point of view. I will also outline events leading up to the incident that I have identified as potentially paving the way for such an issue to have happened.
Conrad Braam - Why testing fails
When you work for a company that is in the top 100 of many lists, you don't want to admit to having first-hand experience of
any testing "hashtag" fails. Delivering a product is just as much about
not rushing as it is about not getting out of the door on time to start
with. So these are my two key points. First, rushed software creation,
which gives us no time to actually test the real features customers
wanted at all, where, in a blind rush, we gleefully focus testing on all
the wrong areas. Second, shambolic test planning, which means a
great test suite might exist but cannot be run. The saying "Fail to plan,
plan to fail" comes to mind.
I'll walk you through the darkness of my failure experiences, and share some tips.
James Coombes - A selection of dubious automated tests
This talk will focus on the common mistakes I have seen made within
automated tests, why they occurred, what people have done to rectify
them, and how successful those rectifications were.
This will look at:
- Tests that really shouldn't have been automated (based upon erroneous top-down-driven targets).
- Those which pass erroneously.
- Those that fail erroneously.
- Tests that got blamed as flaky but really found bugs.
- How badly reported automated test run results cause bugs to be missed.
Chris George - Making the testing waters flow in a stagnant pond
I've found myself in an environment where introducing change to testing and the testing process is difficult for many different reasons. The development process is entrenched and has been largely unchanged for a long time; it works. Upsetting the apple-cart by introducing new approaches to testing does not go down well, and previous attempts to do this have caused seemingly irreparable damage, both with the testers and with the rest of the development team. What am I trying now? How is that going? What could I try?
Karo Stoltzenburg - When a tester's mindset may stand in the way of testing
As a tester you might often find yourself doing tasks that won't necessarily be described as "testing", as you strive to support your team by every possible means to deliver valuable software to the customer. This could be by picking up tasks or roles that are vacant in your team (scrum master, meeting organiser) or by bridging gaps in the workflow by delivering information or (facilitating) implementation. Although this supportive mindset is often perceived as a specific quality of a tester, it can also have its downsides. While you're so busy doing other tasks, when do you have time to focus on and plan your core responsibility, testing? How do you ensure you are able to switch perspective after spending so much time focusing on making the happy path work? I'd like to draw up and discuss scenarios where the supportive tester's mindset might ultimately stand in the way of testing, and whether, where and how we might want to draw the line.