Here are the posts I know of about CEWT #3:
Friday, 28 October 2016
CEWT #3 Abstracts
The first abstracts are in, and this is some of what we expect to be talking about at CEWT #3.
Michael Ambrose - Teach them to Fish
As I implement my plans to upskill my testers in development capabilities, to allow us to reach ever more clever ways of testing, there is a risk of us becoming a speed bump on the team's velocity as we try to juggle old and new ways of testing. I'll be discussing my plans to help mitigate this risk by also upskilling the developers in testing, so they can help pick up any slack, and looking at the possible consequences that can come of this - both good and bad.
James Coombes - Who should do testing and what can they test?
People test for a variety of different reasons, but mainly to test a hypothesis: that something either works or doesn't. Used in an iterative manner (fixing bugs, then retesting), this can improve quality. Indeed, without testing it is practically impossible to say whether something works or not.
For me, QA is a form of risk management: we test to protect our reputation, as a company and organization, for producing high-quality software. We (the company) all own quality. This talk will focus on the reasons for testing and on who should do it. A series of short examples will give insights into who should be doing testing and the key areas in which a stakeholder can contribute to the overall task of testing. It may or may not be obvious, but a multitude of people apart from QA can undertake testing, and we will explore who they are.
Lee Hawkins - What is Testing? It depends ...
There are many definitions of what "testing" is; they can come from many sources, such as certification glossaries or the school of testing you align yourself with. But maybe we focus too much on the idea of a definition and too little on the wide range of perspectives on what testing means to different stakeholders. Let's explore a few different perspectives together. (slides)
Aleksandar Simic - Testing is ...
An attempt to explain what testing is by telling a two-day testing story - a story that can be shared in various ways.
Karo Stoltzenburg - I test, therefore I am
When you go on the lookout for answers to why we test, and what testing is (anyway), you often come across very similar explanations: explanations that managers like, and that contain terms such as "information", "quality", "state", "decisions" and "risk".
These are great, don't get me wrong. I like them. But they are also a bit clinical, rational, purposeful. And if I'm being honest, at the end of the day, these might not exactly be the reasons why /I/ test, or what keeps /me/ in testing.
So I'd like to explore what other reasons there might be. I'll be reflecting on what (personal, subjective) value testing brings to me, rather than what value it might bring to my team or company. And, being true to my craft, I'll be wondering whether this experiment reveals any new information and, if so, what I could do with it.
James Thomas - Testing All the Way Down, and Other Directions
The idea that testing is or can be a recursive activity - or even fractal - has some currency. In that view, a test or experiment generates some data, which suggests new experiments, which generate some data, which suggest new experiments, and so on. The kinds of activities being done at each stage are self-similar, and testing is used as a kind of microscope to focus in on some aspect of the system under test. Testing all the way down.
In this talk, I'll instead view testing as a number of different instruments that can be used in an arbitrary number of dimensions. Further, I'll suggest that testing can be applied not only to a system, but to descriptions of that system, to models of that system, to abstractions of that system, to a system which is testing that system, and to a system which is testing the system which is testing that system. And so on. It's testing all the way round.
I'll finish by proposing a definition of testing that I think might capture this wide applicability. (slides)
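To make the recursion a little more concrete, here is a minimal, hypothetical Python sketch - none of it from the talk itself - of one familiar instance of "testing the system which is testing the system": a system, a test of that system, and a test of that test, in the style of mutation testing, where the test is pointed at a deliberately broken implementation and is expected to fail.

def is_even(n):
    """The system under test."""
    return n % 2 == 0


def test_is_even():
    """A test of the system."""
    assert is_even(2) and not is_even(3)


def test_the_test():
    """A test of the test: swap in a deliberately broken is_even and
    check that test_is_even notices (mutation-testing style)."""
    global is_even
    original = is_even
    is_even = lambda n: True  # mutant: claims everything is even
    try:
        failed = False
        try:
            test_is_even()
        except AssertionError:
            failed = True
        assert failed, "test_is_even did not detect a broken is_even"
    finally:
        is_even = original  # restore the real system


if __name__ == "__main__":
    test_is_even()
    test_the_test()
    print("both levels pass")

Nothing stops a third level (a test that checks test_the_test detects a lazy mutant), which is exactly the regress the abstract gestures at.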
Labels: CEWT#3
Tuesday, 26 July 2016
CEWT #3 is Coming!
It'll be on 6th November 2016 at Jagex and the topic is:
Why do we Test, and What is Testing Anyway? Two questions you've probably seen asked, been asked, and are keen to ask. We bet you've read a bunch of standard answers to them too. Yeah, well, CEWT #3 won't be particularly concerned with those answers - unless you can tell us why, how, and where they don't or didn't work for you.
We're trying a new format for CEWT #3, motivated by the retrospective we held at the end of CEWT #2. The major change is that we'll have more participants but fewer speakers. You can read about our thought process in Iterate to Accumulate.
So what are we concerned with? As usual, we're all about ideas and our peers. We want the workshop to take the questions and the answers somewhere unexpected, somewhere thought-provoking, somewhere interesting and relevant, somewhere that we might not have been before, somewhere that we might want or need to go in future.
You can think about either or both of the questions in any way you like. Here are a few starting points:
Why do we test? Why do you test? Why does your company have testers? Maybe your company doesn't have testers? Maybe only on certain projects? On certain kinds of project? Who decides? Why? Should this question be fundamental to testers? Or is it OK to just test whatever we're asked to test?
What is testing anyway? Is there one definition that accurately captures what you do in your job? What counts as a testing activity for you? Is your day job only made up of testing activities? Are you testing at other times? Are people in other roles testing alongside you? Is that good or bad? When? Why?
Labels: CEWT#3
Wednesday, 9 March 2016
CEWT #2: Reflections
Labels: CEWT#2
Monday, 8 February 2016
CEWT #2 Abstracts
The abstracts are in, and this is what we expect to be talking about at CEWT #2 this month!
Michael Ambrose - What happens after the ball drops
In this presentation, I'd like to explore what happens after the unthinkable occurs: a bug is found in live. I'll talk about my experiences on these occasions and the actions taken both to rectify the issue and to turn the negative into a positive. I'll also broach the view that sometimes it's OK to find bugs in live.
David Baldwin - Brick wall testing
We went through a period of having bugs found by the product owner very soon after release. This was a number of years ago, when we were much more waterfall in our approach. Our regression testing was obviously not working, so we worked hard to improve coverage. Still the bugs seemed to get through, each time being found relatively quickly by the product owner. We thought the main issue was the testers' lack of understanding of the business, so the testers were given more training in the business domain, but the bugs continued to arise. The testers shadowed the product owner and users to gain a deeper understanding. Still the bugs kept coming.
Eventually we worked out what the real problem was, and managed to get through the brick wall that was holding us back.
Claire Banks - Now you C:\ it, now you don't
I'll be talking about a time when released software contained a bug causing data to be deleted. I will explain the actions taken in the aftermath and the effects these had, from my point of view. I will also outline the events leading up to the incident that I have identified as potentially paving the way for such an issue to happen.
Conrad Braam - Why testing fails
When you work for a company that is in the top 100 of many lists, you don't want to admit to having first-hand experience of any testing "hashtag" fails. Delivering a product is just as much about not rushing as it is about getting it out of the door on time in the first place. So these are my two key points. First, rushed software creation, which gives us no time to actually test the real features customers wanted, and in which, in a blind rush, we gleefully focus testing on all the wrong areas. Second, shambolic test planning, which means a great test suite might exist but cannot be run. The saying "fail to plan and plan to fail" comes to mind.
I'll walk you through the darkness of my failure experiences, and share some tips.
James Coombes - A selection of dubious automated tests
This talk will focus on the common mistakes I have seen made in automated tests, why they occurred, what people have done to rectify them, and how successful those rectifications were. A couple of the failure modes are sketched in code after the list.
This will look at:
- Tests that really shouldn't have been automated (based upon erroneous top-down targets).
- Tests that pass erroneously.
- Tests that fail erroneously.
- Tests that got blamed as flaky but really found bugs.
- Badly reported automated test results that cause bugs to be missed.
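For anyone who hasn't met these failure modes, here's a minimal, hypothetical sketch of the first two. The function and tests are invented for illustration, not taken from the talk: a test that passes erroneously because it swallows its own assertion, and a test that fails erroneously because its oracle (exact float equality) is wrong rather than the behaviour under test.

import unittest


def add(a, b):
    """A stand-in system under test, deliberately buggy for illustration."""
    return a - b  # bug: should be a + b


class DubiousTests(unittest.TestCase):

    def test_passes_erroneously(self):
        # A bare except swallows the assertion, so this test passes
        # even though add() is broken: a green build hiding a red bug.
        try:
            assert add(2, 2) == 4
        except AssertionError:
            pass  # the failure is silently discarded

    def test_fails_erroneously(self):
        # Here the oracle, not the behaviour, is wrong: exact equality
        # on floats fails although the arithmetic works as intended.
        # assertAlmostEqual would be the appropriate check.
        self.assertEqual(0.1 + 0.2, 0.3)


if __name__ == "__main__":
    unittest.main()

Running this file, the first test goes green despite the bug and the second goes red despite correct arithmetic, which is precisely why each kind erodes trust in the suite.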
Chris George - Making the testing waters flow in a stagnant pond
I’ve found myself in an environment where introducing change to testing and the testing process is difficult for many different reasons. The development process is entrenched and has been largely unchanged for a long time; it works. Rocking the apple-cart by introducing new approaches to testing does not go down well, and previous attempts to do this has caused seemingly irreparable damage both with the testers and the rest of the development team. What am I trying now? How is that going? What could I try?Karo Stoltzenburg - When a tester's mindset may stand in the way of testing
As a tester you might often find yourself doing tasks that wouldn't necessarily be described as "testing", as you strive to support your team by every possible means to deliver valuable software to the customer. This could be by picking up tasks or roles that are vacant in your team (scrum master, meeting organiser) or by bridging gaps in the workflow by delivering information or facilitating implementation. Although this supportive mindset is often perceived as a particular quality of a tester, it can also have its downsides. While you're so busy doing other tasks, when do you have time to focus on and plan your core responsibility, testing? How do you ensure you are able to switch perspective after spending so much time focusing on making the happy path work?
I'd like to draw up and discuss scenarios in which the supportive tester's mindset might ultimately stand in the way of testing, and ask whether, where and how we might want to draw the line.
James Thomas - Bug-free software? Go for it!
In a deliberately provocative presentation, I will ask us to consider whether we as testers can be too closed-minded in our attitudes; whether there are schools of thought or approaches that, even if we care deeply about context, we are very unlikely even to consider; and whether we sometimes favour our reputation over giving ourselves the chance to do the best job that we can.
Alan Wallace - Fighting the last war
I've been in the situation where some very fundamental functionality was changed. The changes resulted in some fundamental failures that were missed in testing. No one on the team, developer or tester, had seen this functionality change before, as it hadn't changed in years. None of us understood the implications, and we were blind to the risks in an area that hadn't previously required any serious testing. Fortunately it didn't take too long to rectify the problems. Afterwards we overcompensated and attributed far too much importance to that functionality in testing over the next 6-12 months, even though it went back to never changing. The mistakes of the past were blinding us to the risks of the present.
Neil Younger - Delivering software has changed, but what about the act of testing the software itself?
I will be exploring the idea that, for testing to have gone wrong, it must at some point have been right. I'll expand on this by looking back through the history of testing to see how much the act of testing itself has changed. I will also explore the idea that testing hasn't changed much, and that perhaps that is where it might have gone wrong.
Labels: CEWT#2
Saturday, 16 January 2016
CEWT #2 is Coming!
CEWT #2 is scheduled for 28th February 2016 and will be hosted by Neil Younger at DisplayLink.
The topic is:
When Testing Went Wrong: Company war stories would be great; thoughts on the testing community (or communities) are welcome; personal experiences and feelings are on the agenda too. We like open topics here at CEWT, so take this one in whatever direction you like.
To kick things off, here are some potential primers: what happened, from your perspective, when testing went wrong? How did the people involved cope with it? What actions were taken as a result of it? By whom? Did they have the intended or hoped-for effect? How was the effect evaluated? What else could have been done? How did the experience make you feel? Were you actively involved, or an observer? Do you think that your own actions could or should have been different?
All places are taken, and we have a full reserve list too.
In other news, I'm delighted that Chris George has joined me on the organisational side and will be facilitating this time around.
Labels: CEWT#2