Posts about CEWT:
Tuesday, 26 November 2019
Monday, 18 November 2019
CEWT #7 Abstracts
The seventh Cambridge Exploratory Workshop on Testing, CEWT #7, will be on 24th November 2019, hosted by DisplayLink. The topic is
Dirty Testing Secrets
Here's the abstracts:
Mark Bunce, The Secret of Machine Learning
If you believe the hype and media buzz you’d think we have well and truly entered a new evolutionary age of artificial intelligence. Machine Learning is a hot topic in many business domains and it is often seen as the magic wand that will quickly solve all problems.
But how do “traditional” practices in software engineering fit with the challenges of delivering this new technology? Mark will discuss his experiences of grappling with this problem, in particular the dirty testing secrets that evolved and why, when introducing a Machine Learning “Cognitive Investigations” solution that sought to replace the human decision-making processes in the Anti-Money Laundering operation of a global banking organisation.
Aleksandar Simic, It's Not a Secret
What makes a secret harmful for me, for us testers, or for whoever relies on us? How can we reveal the secret? Can the revealed secret become a secret again?
Karo Stoltzenburg, Testing in Half the Time
It seems to me that half of the time, at least, we testers really are not needed; half of the time, at least, the testing we do really is unnecessary; and half of the time, at least, a tester in the team can actually risk destabilizing the team. In this talk I'll explain why and suggest that tester hubris is one of our dirty secrets.
James Thomas, We Don’t Know
There are terms in our domain, terms that are fundamental to our work, terms like quality, bug, and even testing itself, that many testers would struggle to define. I’d say it’s an open secret within testing, but would it surprise our colleagues?
Labels:
CEWT#7
Sunday, 22 September 2019
CEWT #7 is Coming!
The seventh Cambridge Exploratory Workshop on Testing, CEWT #7, will be on 24th November 2019, hosted by DisplayLink. The topic is
Dirty Testing Secrets
What dirty secrets do you know about in testing? What dirty secrets do you know about in your team’s testing? What are your own testing’s dirty secrets? Come and share them at CEWT #7!
As usual, the topic is deliberately open and the discussion we want is open-ended and open-minded.
Practitioners in every industry do things that people outside of it would be surprised about, maybe even horrified by. Different practitioners develop their own ways of working and some of those would upset even their colleagues within their company and industry. We tend to keep quiet about these things. They are our dirty secrets.
We’re interested in exploring what testing’s dirty secrets are perceived to be, what justifications there might be for keeping them secret, and whether they are actually secret rather than visible but not seen or perhaps even deliberately ignored.
We want to discuss occasions when a secret was revealed and what the causes and ramifications of that were. We are curious about attempts to be open that failed, attempts that succeeded, and challenges around openness when trying to fit testing inside a wider software engineering culture.
We want to know what motivates us to perpetuate a secret rather than change things, and what could persuade us to move some existing practice underground, out of view. We are wondering whether secrets are kept from particular kinds of people, at particular times, for particular kinds of practice or data. We are asking whether we ever deceive ourselves in our work.
We are also wondering how we feel about all of the above, and how we deal with those feelings in ways that help us to continue in our role, or perhaps leave it behind and try something else.
Note: this is not an open call for participants. We try to support the local testing community by inviting those who've attended other Cambridge meetups recently first.
Labels:
CEWT#7
Sunday, 23 September 2018
Thursday, 13 September 2018
CEWT #6 Abstracts
The sixth Cambridge Exploratory Workshop on Testing, CEWT #6, will be on 23rd September 2018, hosted by Roku. The topic is
What Makes Good Testers and/or Testing?
Here's the abstracts:
Chris Kelly, Is it possible to measure good testing?
Customer facing teams can use tools like Customer Satisfaction surveys, employing formulae like Net Promoter Scores. But what do testers have?
Aleksandar Simic, Good or good-enough testing?
How can we know whether we are ready for, are doing, or have done 'good testing'? What can we do to prepare for 'good testing', and how can we improve it? Who can judge it?
Helen Stokes and Claire Banks, Different Perspectives
An exploration into the different perspectives on good testing and good testers from a test engineer and a test manager, working through two case studies from both sides to see where they overlap and where they differ.
Karo Stoltzenburg, A Life Less Ordinary
It's a truth universally acknowledged, that a good tester must have attention to detail, is of a curious nature, thinks outside the box, likes to break things (sic!) and is a fiendish asker of questions. Admirable qualities. Must-haves. At least that's what the job adverts, our colleagues, social media and the internet tell us. But is this really it? And is this necessarily helpful?
James Thomas, Testing vs Chicken
In this talk I'll assume that we know what good testing is (for our context, at this time) and wonder how we can judge, during recruitment, that a person being interviewed for a role at our company could do that good testing for us.
Neil Younger, What’s so special about a tester anyway?
I'll be talking through my first attempt at expanding what it means to be a senior tester at DisplayLink. I will provide some examples for the following sections: Technical, Your Team, Sharing, and the Business. This is my view for my context; it's imperfect and incomplete, but I hope it will promote lively discussion around what it could mean to be a good tester.
Labels:
CEWT#6
Friday, 15 June 2018
CEWT #6 is Coming!
The sixth Cambridge Exploratory Workshop on Testing, CEWT #6, will be on 23rd September 2018, hosted by Roku.
The topic is
What Makes Good Testers and/or Testing?
How do you know when you’ve done a good job? How do you know when others think you’ve done a good job? Whose opinion matters, anyway? When you describe someone, including yourself, as a good tester what do you mean? In what ways good, and good compared to what?
At CEWT #6 we’ll be asking these questions and more, and wondering whether there are any characteristics of goodness that are universal (or even reasonably general) that we can apply to help us to assess ourselves and our testing. (Assuming we want to.)
As usual, the topic is deliberately open and the discussion we want is open-ended and open-minded.
Note: this is not an open call for participants. We try to support the local testing community by inviting those who've attended other Cambridge meetups recently first.
Labels:
CEWT#6
Tuesday, 30 January 2018
Sunday, 14 January 2018
CEWT #5 Abstracts
The fifth Cambridge Exploratory Workshop on Testing, CEWT #5, will be on 28th January 2018, hosted by Linguamatics. The topic is
Theory Over Practice or Practice Over Theory?
And here's the abstracts:
Sneha Bhat and James Thomas, Theoreticus Prime vs Praktikertron
There is a perceived tension between theory and practice, and between theorists and practitioners. In this talk, we will propose and illustrate using a practical example that practice generates data and theory is the data which we care about. Rather than focusing on theory over practice or practice over theory, a choice of theory, practice, or both is driven by the data needed for a particular task and contextual factors. (slides)
Aleksandar Simic, The alternation
Am I a more practical or a more theoretical person? Do I find theory helpful? Do I know when and how to apply it? How do I learn by doing? These are some of the questions I'll try to answer based on recent events.
Karo Stoltzenburg, Are Your Lights On?
Theory over practice or practice over theory? I won't give you a definite answer to apply (always! in every context!), but rather would like to invite you to explore the question itself with me. We can look into the definitions of theory and practice, wonder what our stakeholders might be, think about analogies in testing activities and question which problem we're trying to solve here. (slides)
Alan Wallace, Practice over training or training over practice?
I’m a competitive Masters swimmer. Swimming is pretty much entirely learnt by doing, most often when we are children. Adults are hard to teach to swim partly because they want to understand the theory, but like riding a bike I can’t really explain to you how to balance your body even if I can tell you the basic mechanics of how a particular swimming stroke works. So, we don’t really spend much time on theory, but we do spend a lot of time practising skills in the form of training for the comparatively brief periods where we are actually competing, and try to ensure all that training wasn’t for nothing. Whereas in the workplace, my experience has been that we spend most of our time competing. We try to fit in learning some theory, but very rarely do we spend time training. Is this OK? Should we spend more time training?
Milosz Wasilewski, Theory and practice moving from waterfall to agile
The talk is based on my experience producing software for mobile phones. It will deal with the idea of changing the software development paradigm from waterfall to agile. I will try to compare the assumptions and the outcomes of the change. The social aspects of the change will also be discussed.
Neil Younger, Know one line
I’ll be drawing on my experience learning to ride a unicycle and comparing it against that of teaching others to ride. You might think it’s all about the practice and I’ll be exploring this while posing such questions as ‘does size matter’. This talk won’t give you answers but will aim to make you think about how you learn and how that might not always be the best way.
Labels:
CEWT#5
Thursday, 30 November 2017
CEWT #5 is Coming!
The fifth Cambridge Exploratory Workshop on Testing, CEWT #5, will be on 28th January 2018, hosted by Linguamatics.
The topic is
Theory Over Practice or Practice Over Theory?
Do you prefer experience or expertise? Do you simply dive in or do you first scope out? Do your skills get sharpened on the job or in your head? Do we just need to get along and get on with it or are the semantics worth getting straight? Is the view from the coalface more valuable than the one from the library?
The participants in CEWT #5 will be asked to consider the pros and cons of testing theory compared to testing practice. Perhaps there'll be stories about when one was critical or caused the project to go off the rails. Maybe we'll hear how it's possible to balance the two and what kinds of factors make a difference in doing that.
We might consider whether it's a balance across a team rather than a person. We might wonder whether it's possible to test without any testing theory, and what advantages that might confer. We might define a core set of theoretical concepts that we think are fundamental. Or we might not.
As usual, the topic is deliberately open and the discussion we want is open-ended and open-minded.
Note: this is not an open call for participants. We try to support the local testing community by inviting those who've attended other Cambridge meetups recently first.
Labels:
CEWT#5
Monday, 19 June 2017
Saturday, 27 May 2017
CEWT #4 Abstracts
CEWT #4 will be on 11th June 2017 at Roku and the topic is Test Managers, Can't Live With 'Em ...
We're all managers, managed, or both so we've all got experience of people who manage testing work: test managers. But what do these people do? What value do they bring? Who are they for? Some companies are disbanding their test teams and replacing test managers with test or quality coaches. Are test managers an endangered species?
In CEWT #4 we'll be considering test managers and test management and the relationship of testers to both of them. We want to surface ideas and perspectives on those ideas and then explore both in search of insight. We want to propose hypotheses and then challenge them, and try to expose the evidence and assumptions that motivated them. We want to report experiences and then understand them and the conclusions that have been drawn from them. And we want to do these things in a relaxed, friendly, safe, collaborative, supportive, and positive environment.
As usual with us, the topic is a jumping-off point and so here's a few questions to help to get you started: Is test management primarily a management role, that could be done well by any competent manager? Or is it a specialist position that requires experience of software testing? Does that sound wise? What makes a good test manager? Who was the best test manager you ever had? Why? Who wasn't? Why not? What does your test manager do for you that you couldn't live without? And what would you rather they never did again?
If you're a test manager, when did you last test something? How deeply were you able to test it? Are you OK with that? Can you apply your testing skills to management? With what compromises? Are you afraid for your job? Or aspects of your job? Is test management actually a role rather than a position? Would your thoughts on test management differ if you considered line management and project management independently? How?
And here's the abstracts:
Why be a test manager?
Claire Banks
In my talk I will be sharing my experiences of previous test managers (good and bad) and of being a test manager (bad). I have about 16 years' experience in testing and still want/need a test manager. I'm so strong in my view that I'll never be a test manager again that I've even left jobs when "force promoted" into that role. I'm looking forward to the discussions this subject brings to figure out if my views are simply outdated or whether my special brand of crazy is justified.
Test Manager - which hat to wear and when?
Sneha Bhat
It has become common to have embedded testers in cross-functional agile teams. I have seen that testers in such teams are more involved with the team members rather than with a Test Manager with respect to
planning, discussing task estimates and communicating progress
discussing strategies about what is needed/not needed for testing
decision making about when a task is complete
As a tester and a Scrum master in an Agile team, I will talk about how I see the role of a Test Manager fitting into this model.
The awkward relationship between testers and non-technical managers
James Coombes
I have had 7 years now working as a tester and this is a case study of the 9 managers I have had in that period of time. I will look at answering the question "do you have to be a good tester to make a good test manager?". And I will consider why some companies have a culture of hiring technical people for technical manager roles and others don't.
One Year, Two Testers, One Report
Aleksandar Simic and James Thomas
Our talk is an experience report - a two-experience report, or perhaps a shared experience report - about aspects of the relationship between a tester and a test manager in the first 12 months of working together. We'll take several milestones from that year and talk about what we were thinking, hearing, and attempting at each point, and look for commonality, discrepancy, and trends across the year through the prism of one of many communication channels.
Who needs testers anyway?
Neil Younger
Test Managers, and to an extent testers, can be at the sharp end of any company restructure, redundancies, or practices.
In this thought experiment, I'm going to challenge myself, and my biases, to see what a world without testers would look like at my company and how we might even get to that point.
Join me for this journey while I attempt to make my job title redundant!
Labels:
CEWT#4
Monday, 6 March 2017
CEWT #4 is Coming!
CEWT #4 will be on 11th June 2017 at Roku and the topic is:
Test Managers, Can't Live With 'Em ...
We're all managers, managed, or both so we've all got experience of people who manage testing work: test managers. But what do these people do? What value do they bring? Who are they for? Some companies are disbanding their test teams and replacing test managers with test or quality coaches. Are test managers an endangered species?
In CEWT #4 we'll be considering test managers and test management and the relationship of testers to both of them. We want to surface ideas and perspectives on those ideas and then explore both in search of insight. We want to propose hypotheses and then challenge them, and try to expose the evidence and assumptions that motivated them. We want to report experiences and then understand them and the conclusions that have been drawn from them. And we want to do these things in a relaxed, friendly, safe, collaborative, supportive, and positive environment.
As usual with us, the topic is a jumping-off point and so here's a few questions to help to get you started: Is test management primarily a management role, that could be done well by any competent manager? Or is it a specialist position that requires experience of software testing? Does that sound wise? What makes a good test manager? Who was the best test manager you ever had? Why? Who wasn't? Why not? What does your test manager do for you that you couldn't live without? And what would you rather they never did again?
If you're a test manager, when did you last test something? How deeply were you able to test it? Are you OK with that? Can you apply your testing skills to management? With what compromises? Are you afraid for your job? Or aspects of your job? Is test management actually a role rather than a position? Would your thoughts on test management differ if you considered line management and project management independently? How?
We were so happy with the changes we made for the format of CEWT #3 that we're not planning on doing anything radically different this time around.
Note: this is not an open call for participants. We try to support the local testing community by inviting those who've attended other Cambridge meetups recently first.
Labels:
CEWT#4
Sunday, 6 November 2016
Friday, 28 October 2016
CEWT #3 Abstracts
The first abstracts are in and this is some of what we expect to be talking about at CEWT #3.
Michael Ambrose - Teach them to Fish
As I implement my plans to upskill my testers in development capabilities to allow us to reach ever more clever ways of testing, there will be a risk of us becoming a speed bump on the team velocity as we try to juggle old and new ways of testing. I'll be discussing my plans to help mitigate this risk by also upskilling the developers in testing, so they can help pick up any slack, and looking at the possible consequences that can come of this - both good and bad...
James Coombes - Who should do testing and what can they test?
People test for a variety of different reasons, but mainly it is to prove a hypothesis that something either works or doesn’t. When used in an iterative manner (fixing bugs then retesting) this can be used to improve quality. Indeed without testing it is practically impossible to say whether something works or not.
For me, QA is a form of risk management: we test to protect our reputation as a company and organization for producing high-quality software. We (the company) all own quality. This talk will focus on the reasons for testing and who should do it. A series of short examples will give insights into who should be doing testing and the key areas a stakeholder can contribute to the overall task of testing. It may or may not be obvious, but a multitude of people apart from QA can undertake testing, and we will explore who they are.
Lee Hawkins – What is Testing? It depends ...
There are many definitions of what “testing” is – they can come from many sources, such as certification glossaries or the school of testing you align yourself with. But maybe we focus too much on the idea of a definition and too little on the wide range of perspectives of what testing means to different stakeholders. Let’s explore a few different perspectives together. (slides)
Aleksandar Simic - Testing is ...
An attempt to explain what testing is by telling a two-day testing story - a story that can be shared in various ways.
Karo Stoltzenburg - I test, therefore I am
When you go on the lookout for answers to why we test, and what testing is (anyway), you often come across very similar explanations. Explanations that managers likely like and that contain terms such as "information", "quality", "state", "decisions", "risk" and similar.
These are great, don't get me wrong. I like them. But they are also a bit clinical, rational, purposeful. And if I'm being honest, at the end of the day, these might not exactly be the reasons why /I/ test, or what keeps /me/ in testing.
So I'd like to explore what other reasons there might be. I'll be reflecting on which (personal, subjective) value testing brings to me rather than what value it might bring to my team or company. And being true to my craft, I'll be wondering if this experiment will reveal any new information and if yes, what I could be doing with it.
James Thomas - Testing All the Way Down, and Other Directions
The idea that testing is or can be a recursive activity - or even fractal - has some currency. In that view, a test or experiment generates some data, which suggests new experiments, which generate some data, which suggest new experiments and so on. The kinds of activities being done at each stage will be self-similar and testing is used as a kind of microscope to focus in on some aspect of the system under test. Testing all the way down.
In this talk, I'll instead view testing as a number of different instruments that can be used in an arbitrary number of dimensions. Further, I'll suggest that testing can be applied not only to a system, but to descriptions of that system, to models of that system, to abstractions of that system, to a system which is testing that system, and to a system which is testing the system which is testing that system. And so on. It's testing all the way round.
I'll finish by proposing a definition of testing that I think might capture this wide applicability. (slides)
Labels:
CEWT#3
Tuesday, 26 July 2016
CEWT #3 is Coming!
It'll be on 6th November 2016 at Jagex and the topic is:
Why do we Test, and What is Testing Anyway?
Two questions you've probably seen asked, been asked, and are keen to ask. We bet you've read a bunch of standard answers to them too. Yeah, well, CEWT #3 won't be particularly concerned with those answers - unless you can tell us why, how, and where they don't or didn't work for you.
We're trying a new format for CEWT #3 motivated by the retrospective we held at the end of CEWT #2. The major change is that we'll have more participants but fewer speakers. You can read about our thought process in Iterate to Accumulate.
So what are we concerned with? As usual, we're all about ideas and our peers. We want the workshop to take the questions and the answers somewhere unexpected, somewhere thought-provoking, somewhere interesting and relevant, somewhere that we might not have been before, somewhere that we might want or need to go in future.
You can think about either or both of the questions in any way you like. Here's a few starting points:
Why do we test? Why do you test? Why does your company have testers? Maybe your company doesn't have testers? Maybe only on certain projects? On certain kinds of project? Who decides? Why? Should this question be fundamental to testers? Or is it OK to just test whatever we're asked to test?
What is testing anyway? Is there one definition that accurately captures what you do in your job? What counts as a testing activity for you? Is your day job only made up of testing activities? Are you testing at other times? Are people in other roles testing alongside you? Is that good or bad? When? Why?
Labels:
CEWT#3
Wednesday, 9 March 2016
CEWT #2: Reflections
Labels:
CEWT#2
Monday, 8 February 2016
CEWT #2 Abstracts
The abstracts are in and this is what we expect to be talking about at CEWT #2 this month!
As a tester you might often find yourself doing tasks that won't necessarily be described as "testing" - as you strive to support your team by every possible means to deliver valuable software to the customer. This could be by picking up tasks or roles that are vacant in your team (scrum master, meeting organiser) or by bridging gaps in the workflow by delivering information or (facilitating) implementation. Although this supportive mindset is often perceived as a specific quality of a tester it also can have its downsides. While you're so busy doing other tasks, when do you have time to focus and plan your core responsibility - testing? How do you ensure you are able to switch perspective after spending so much time focusing on making the happy path work?
Michael Ambrose - What happens after the ball drops
In this presentation, I'd like to explore what happens after the unthinkable occurs: a bug is found in live. I'll talk about my experiences on these occasions and the actions taken both to rectify the issue and to turn the negative into a positive. I'll also broach the view that sometimes it's OK to find bugs in live.
David Baldwin - Brick wall testing
We went through a period of bugs being found by the product owner very soon after release. This was a number of years ago, when we were much more waterfall in our approach. Our regression testing was obviously not working, so we worked hard to improve coverage. Still the bugs seemed to get through, each time being found relatively quickly by the product owner. We thought the main issue was the testers' lack of understanding of the business. The testers were given more training in the business domain, but the bugs continued to arise. The testers shadowed the product owner and users to gain a deeper understanding. Still the bugs kept coming.

Eventually we worked out what the real problem was, and managed to get through the brick wall that was holding us back.
Claire Banks - Now you C:\ it, now you don't
I'll be talking about a time when released software contained a bug causing data to be deleted. I will explain the actions taken in the aftermath and the effects this had, from my point of view. I will also outline events leading up to the incident that I have identified as potentially paving the way for such an issue to have happened.
Conrad Braam - Why testing fails
When you work for a company that is in the top 100 of many lists, you don't want to admit to having first-hand experience of any testing "hashtag" fails. Delivering a product is as much about not rushing as it is about getting out of the door on time. So these are my two key points: firstly, rushed software creation, which leaves no time to actually test the features customers wanted and, in the blind rush, has us gleefully focus testing on all the wrong areas; and secondly, shambolic test planning, which means a great test suite might exist but cannot be run. The saying "fail to plan and plan to fail" comes to mind.
I'll walk you through the darkness of my failure experiences, and share some tips.
James Coombes - A selection of dubious automated tests
This talk will focus on the common mistakes I have seen made within automated tests, why they occurred, what people have done to rectify them, and the success of these rectifications.
This will look at:
- Tests that really shouldn't have been automated (based upon erroneous top-down driven targets).
- Those which pass erroneously.
- Those that fail erroneously.
- Tests that got blamed as flaky but really found bugs.
- How badly reported automated test run results cause bugs to be missed.
Chris George - Making the testing waters flow in a stagnant pond
I’ve found myself in an environment where introducing change to testing and the testing process is difficult for many different reasons. The development process is entrenched and has been largely unchanged for a long time; it works. Upsetting the apple-cart by introducing new approaches to testing does not go down well, and previous attempts to do this have caused seemingly irreparable damage, both with the testers and with the rest of the development team. What am I trying now? How is that going? What could I try?
Karo Stoltzenburg - When a tester's mindset may stand in the way of testing
As a tester you might often find yourself doing tasks that won't necessarily be described as "testing" as you strive to support your team by every possible means to deliver valuable software to the customer. This could be by picking up tasks or roles that are vacant in your team (scrum master, meeting organiser) or by bridging gaps in the workflow by delivering information or (facilitating) implementation. Although this supportive mindset is often perceived as a specific quality of a tester, it can also have its downsides. While you're so busy doing other tasks, when do you have time to focus on and plan your core responsibility, testing? How do you ensure you are able to switch perspective after spending so much time focusing on making the happy path work?

I'd like to draw up and discuss scenarios in which the supportive tester's mindset might ultimately stand in the way of testing, and ask if, where and how we might want to draw the line.
James Thomas - Bug-free software? Go for it!
In a deliberately provocative presentation I will ask us to consider whether we as testers can be too closed-minded in our attitudes, whether there are schools of thought or approaches that, even if we care deeply about context, we are very unlikely even to consider, and whether perhaps we sometimes favour our reputation over giving ourselves the chance to do the best job that we can.
Alan Wallace - Fighting the last war
I've been in the situation where some very fundamental functionality was changed. The changes resulted in some fundamental failures that were missed in testing. No one on the team, developers or testers, had seen this functionality change before, as it hadn't changed in years. None of us understood the implications and we were blind to the risks in an area that hadn't previously required any serious testing. Fortunately it didn't take too long to rectify the problems. Afterwards we overcompensated and attributed far too much importance to that functionality in testing over the next 6-12 months, even though it went back to never changing. The mistakes of the past were blinding us to the risks of the present.
Neil Younger - Delivering software has changed, but what about the action of testing the software itself?
I will be exploring the idea that for testing to have gone wrong it implies that it was at one point right. I'll expand this to look back through the history of testing to see how much the act of testing itself has changed. I will also explore the idea that testing hasn't changed much, and perhaps that is where it might have gone wrong.
Labels:
CEWT#2
Saturday, 16 January 2016
CEWT #2 is Coming!
CEWT #2 is scheduled for 28th February 2016 and will be hosted by Neil Younger at DisplayLink.
The topic is
When Testing Went Wrong: Company war stories would be great; thoughts on the testing community (or communities) are welcome; personal experiences and feelings are on the agenda too. We like open topics here at CEWT, so take this one in whatever direction you like.
To kick things off, here's some potential primers: what happened, from your perspective, when testing went wrong? How did the people involved cope with it? What actions were taken as a result of it? By who? Did they have the intended or hoped-for effect? How was the effect evaluated? What else could have been done? How did the experience make you feel? Were you actively involved, or an observer? Do you think that your own actions could or should have been different?

All places are taken, and we have a full reserve list too.
In other news, I'm delighted that Chris George has joined me on the organisational side and will be facilitating this time around.
Labels:
CEWT#2
Wednesday, 8 July 2015
CEWT #1: Reflections
Labels:
CEWT#1
Monday, 6 July 2015
CEWT #1: Testing Ideas
CEWT #1 was held at Linguamatics on 4th July 2015. The subject we chose was
Testing Ideas: take this any way you like, but it could include where the ideas come from, how you keep them coming, how you give yourself the chance of generating the good ones (or the important ones, or some other set), how you test ideas (specs, proposals, stories etc), what's different about testing ideas vs testing software? It can be experience-based, theoretical, future-looking etc.

And these were the topics we discussed:
What would somebody else do? Something else?
Karo Stoltzenburg
When rethinking your test approach, it can be difficult to come up with creative, new test ideas or a fresh angle towards your 'Application Under Test' while being stuck in your own good old mind. To overcome this I often take a role-playing approach to spur new testing ideas; stepping into somebody else's shoes can free whole new thought processes, give you new directions and additional viewpoints. I'd like to talk about a couple of methods you can (mis)use for this, like the 'persona' representation used in user-centered design, Edward de Bono's idea of 'Six Thinking Hats', the four-user model mentioned in James Whittaker's 'How to Break Software' and my own "What would?" approach. (Slides)
Life Before Sprint 0
Liz Tattersall
Testing user stories and personas, based on experience from recent projects.
Testing the imagination
Michael Ambrose
Using test techniques to help define the behavior of a system or process while it is still just a concept.
Testing ideas R' everywhere
Gabrielle Klein
From hobbies to testing via travels and life, let's test.
It's Like That
James Thomas
I'll talk about the use of analogy as a device for generating ideas at multiple levels of testing including test activity, methodology and reporting. I'll associate analogy with lateral thinking and give an example of a specific analogy that I'm interested in at the moment. (Slides)
Mighty oaks from little acorns grow
Neil Younger
I'll be talking about growing ideas and how some need to be nurtured and guided while others unexpectedly have a life of their own. I'll be using real examples from my work as a tester to highlight how small ideas can have a big impact.
Labels:
CEWT#1