Wednesday, 12 July 2017

Test Automation Canvas

Test automation frameworks grow incrementally, which means that their design and structure can change over time. As testers learn more about the product that they are testing and improve their automation skills, that learning is reflected in their code.

Recently I've been working with a group of eight testers who belong to four different agile teams that are all working on the same set of products. Though the testers regularly meet to share ideas, their test automation code had started to diverge. The individual testers had mostly been learning independently.

A manager from the team saw these differences emerging and felt concerned that the automated test coverage was becoming inconsistent between the four teams. The differences they saw in testing made them question whether there were differences in the quality of delivery. They asked me to determine a common approach to automated test coverage by running a one-hour workshop.

I am external to the team and have limited understanding of their context. From this position I did not want to change or challenge the existing approaches or ideas, particularly given the technical skills that I could see demonstrated by the testers themselves. I suspected that there were good reasons for what they were doing, but perhaps not enough communication.

I decided that a first step would be to create an activity that would get the testers talking to each other, gather information from these conversations, then summarise the results to share with the wider team.

To do this, I thought a bit about the attributes of a test automation framework. The primary reason that I had been engaged was to discuss test coverage. But coverage is a response to risk and constraints, so I wanted to know what those were too. I was curious about the mechanics of the suites: dependencies, test data, source control, and continuous integration. I had also heard varying reports about who was writing and reviewing automation in each team, so I wanted to talk about engagement and maintenance of code.

I settled on a list of nine key areas:

  1. RISKS - What potential problems does this suite mitigate? Why does it exist?
  2. COVERAGE - What does this suite do?
  3. CONSTRAINTS - What has prevented us from implementing this suite in an ideal way? What are our known workarounds?
  4. DEPENDENCIES - What systems or tools have to be functional for this suite to run successfully?
  5. DATA - Do we mock, query, or inject? How is test data managed?
  6. VERSIONING - Is there source control? What is the branching model for this suite?
  7. EXECUTION - Is the suite part of a pipeline? How often does it run? How long does it take? Is it stable?
  8. ENGAGEMENT - Who created the suite? Who contributes to it now? Who is not involved, but should be?
  9. MAINTAINABILITY - What is the code review process? What documentation exists?

I decided to put these prompts into an A3 canvas format, similar to a lean canvas or an opportunity canvas. I thought that this format would create a balance between conversation and written record, as I wanted both to happen simultaneously.

Here is the blank Test Automation Canvas that I created:

A blank Test Automation Canvas

On the day of the workshop, the eight testers identified four separate automation suites under active development. They then self-selected into pairs, with each pair taking a blank canvas to complete.

It took approximately 20 minutes to discuss and record the information in the canvas. I asked them to complete the nine sections in the order that they are numbered in the earlier list: risks, coverage, constraints, dependencies, data, versioning, execution, engagement, and maintainability.

Examples of completed Test Automation Canvas

Then I asked the pairs to stick their completed canvas on the wall. We spent five minutes circling the room, silently reading the information that each pair had provided. As everyone had been thinking deeply about one specific suite, this time allowed them to switch to thinking broadly.

In the last 15 minutes, we finished by visiting each canvas in turn as a group. At each canvas I asked two questions to prompt group discussion: is anything unclear, and is anything missing? This raised a few new ideas, and surfaced some misunderstandings between the teams, so notes were added to the canvases.

After the workshop, I took the information from the canvases to create a single A3 summary of all four automation frameworks, plus the exploratory testing that is performed using a separate tool:

Example of Test Automation Summary

In the image above, each row is a different framework. The columns are rationale, coverage, dependencies, mechanics, and improvement opportunities. Within mechanics are versioning, review, pipeline, contributors and data.

I shared this summary image in a group chat channel for the testers to give their feedback. This led to a number of small revisions and uncovered one final misunderstanding. Now I think that we have a reference point that clearly states the collective understanding of test automation among the testers. The next step is to share this information with the wider team.

I hope that having this information recorded in a simple way will create a consistent basis for future iterations of the frameworks. If the testers respect the underlying rationale of the suite and satisfy the high-level coverage categories, then slight differences in technical implementation are less likely to create the perception that there is a problem.

The summary should also support testers to give feedback in their code reviews. I hope that it provides a reference to aid constructive criticism of code that does not adhere to the statements that have been agreed. This should help keep the different teams on a similar path.

Finally, I hope that the summary improves visibility of the test automation frameworks for the developers, business people, and managers who work in these teams. I believe that the testers are doing some amazing work and hope that this reference will promote their efforts.

Friday, 23 June 2017

The Interview Roadshow

Recently I have been part of a recruitment effort for multiple roles. In May we posted two advertisements to the market: automation tester and infrastructure tester. Behind the scenes we had nine vacancies to fill.

This was the first time that I had been involved in recruiting for such a large number of positions simultaneously. Fortunately I was working alongside a very talented person in our recruitment team, Trish Burgess, who had ideas about how to scale our approach.

Our recruitment process for testers usually includes five steps from a candidate perspective:
  1. Application with CV and cover letter
  2. Screening questions
  3. Behavioural interview
  4. Practical interview
  5. Offer
We left the start of the process untouched. There were just over 150 applications for the two advertisements that we posted and, after reading through the information provided, we sent three screening questions to a group of 50 candidates. We asked for responses to these questions by a deadline, at which point we selected who to interview.

Usually we would run the two interviews separately. Each candidate would be requested to attend a behavioural interview first then, depending on the feedback from that, a practical interview as a second step. Scheduling for the interviews would be agreed between the recruiter, the interviewers, and the candidates on a case-by-case basis.

As we were looking to fill nine vacancies, we knew that this approach wouldn't scale to the number of people that we wanted to meet. We decided to trial a different approach.

The Interview Roadshow

Trish proposed that we run six parallel interview streams. To achieve this we would need twelve interviewers available at the same time - six behavioural and six practical - to conduct the interviews in pairs.

The first hurdle was that we didn't have six people who were trained to run our practical interview, as we usually ran them one-by-one. I asked for volunteers to join our interview panel and was fortunate to have a number of testers come forward. I selected a panel of eight where four experienced interviewers were paired with four new interviewers. The extra pair gave us cover in case of unexpected absence or last minute conflicts.

We assembled a larger behavioural interview panel too, which gave us a group of 16 interviewers in total. Several weeks in advance of the interview dates, while the advertisements were still live, Trish booked three half-day placeholder appointments into all their diaries:
  • Friday morning 9.30am - 12pm
  • Monday afternoon 1pm - 3.30pm
  • Wednesday morning 9.30am - 12pm

In the weeks leading up to the interviews themselves, the practical interviewer pairs conducted practice interviews with existing staff as a training exercise for the new interviewers. We also ran a session with all the behavioural interviewers to make sure that there was a consistent understanding of the purpose of the interview and that our questions were aligned.

From the screening responses I selected 18 people to interview. We decided to allocate the candidates by their experience into junior, intermediate, and senior streams, then look to run a consistent interview panel for each group. This meant that the same people met all of the junior candidates, and similarly at other levels.

The easiest way to illustrate the scheduling is through an example.

For the first session on Friday morning we asked the candidates to arrive slightly before 9.30am. Trish and I met them in the lobby, then took them to a shared space in our office to give them a short explanation of how we were running the interviews. I also took a photo of each candidate, which I used later in the process when collating feedback.

Then we delivered the candidates to their interviewers. We gave the interviewers a few minutes together prior to the candidate arriving, for any last minute preparation, so the interviews formally began ten minutes after the start of their appointment (at 9.40am).

Here is a fictitious example schedule for the first set of interviews:



The first interviews finished by 10.40am, at which point the interviewers delivered the candidate back to the shared space. We provided morning tea and they had 20 minutes to relax prior to their next interview at 11am. Trish and I were present through the break and delivered the candidates back to the interviewers.

Here is a fictitious example schedule for the second interviews:



The second interview session finished by 12pm, at which point the interviewers would farewell the candidate and collate their feedback from both sessions.

Retrospective Outcomes

The main benefit to people involved in the interview roadshow was that it happened within a relatively short time frame. Within four working days we conducted 36 interviews. As a candidate, this meant fast feedback on the outcome. As an interviewer, it meant less disruption of my day-to-day work.

We were pleasantly surprised that we had 18 candidates accept the interview offer immediately. We had assumed that some people would be unavailable, as when we schedule individual interviews there is a lot of back-and-forth. Trish had given an indication of the interview schedule when asking the screening questions. The set times seemed to motivate candidates to make arrangements so that they could attend.

By running two interviews in succession, the candidate only had to visit our organisation once. In our usual recruitment process a candidate might visit twice: the first time for a behavioural interview and the second for a practical interview. One trip means fewer logistical concerns around transport, childcare, and leaving their current workplace.

On the flip side, running two interviews in succession meant that people had to take more time away from their current role in order to participate. We had feedback from one candidate that it was a long time for them to spend away from the office.

There were three areas that we may look to improve.

Having six candidates together in the pre-interview briefing and refreshment break was awkward. These were people who didn't know each other, were competing for similar roles, and were in the midst of an intense interview process. The conversation among the group was often stilted or non-existent - though perhaps this is a positive thing for candidates who need silence to recharge?

In our usual process the hiring manager would always meet the person that was applying for the vacancy in their team. In this situation, we had individual hiring managers who were looking to fill multiple roles at multiple levels - junior, intermediate, and senior. With the interview roadshow approach, some successful candidates were proposed for a role where the hiring manager hadn't met them. Though this worked well for us, as there was a high degree of trust among the interviewers, it may not in other situations.

The other thing that became difficult in comparison to our usual approach was dealing with internal applicants. We had multiple applications from within the organisation and it was harder to handle these in a discreet way with such a large panel of interviewers. The roadshow approach to interviewing also made these people more visible in their aspirations, though we tried to place them in rooms that were away from busy areas.

Overall, I don't think that we could have maintained the integrity of our interview process for such a large group of candidates by any other means. The benefits of scaling to an interview roadshow outweigh the drawbacks and it is something that I think we will adopt again in future, as required.

I personally had a lot of fun in collating the candidate feedback, seeing which candidates succeeded, and suggesting how we could allocate people to teams. Though it is always hard to decline the candidates that are unsuccessful, I think we have a great set of testers coming in to join us as a result of this process and I'm looking forward to working with them.

Thursday, 8 June 2017

Using SPIN for persuasive communication

I can recall several occasions early in my career where I became frustrated by my inability to persuade someone to my way of thinking. Reflecting on these conversations now, I can still bring to mind the feelings of agitation as I failed. I thought I had good ideas. I would make my case, sometimes multiple times, to no avail. I was simply not very good at getting my way.

The frustration came from my own failure, but I was also frustrated by seeing others around me succeed. They could persuade people. I couldn't figure out why people were listening to them, but not me. I was unable to spot the differences in our approach, which meant that I didn't know what I should change.

Some years later, in my role as a test consultant, I had the opportunity to attend a workshop on the fundamentals of sales. The trainer shared SPIN, a well-known sales technique developed in the late 1980s.

SPIN was a revelation to me and I believe that it has significantly improved my ability to persuade. In this post I'll explain what the acronym stands for and give examples of how I apply SPIN in a testing context.

What is SPIN?

SPIN stands for situation, problem, implication, and need.

A SPIN conversation starts with explaining what you see. Describe the situation and ask questions to clarify where you're unsure. Avoid expressing any judgement or feelings - this should be a neutral account of the starting point.

Then discuss the problems that exist in the current state. Where are the pain points? Share the issues that you see and draw out any that you have missed. Try to avoid making the problems personal, as this part of the conversation can be derailed into unproductive ranting.

Next, think about what the problems mean for the business or the team. Consider the wider organisational context and question how these problems impact key measures of your success. Where is the real cost? What is the implication of keeping the status quo?

Finally, describe what you think should happen next. This is the point of the conversation where you present your idea, or ideas, for the way forward. What do you think is needed?

To summarise in simple terms, the parts of SPIN are:
  • Situation - What I see
  • Problem - Why I care
  • Implication - Why you should care
  • Need - What I think we should do

A SPIN example

My first workplace application of SPIN was at a stand-up meeting. I was part of a team that were theoretically running a fortnightly scrum process. In reality it was a water-scrum-fall where testing kept being flooded at the end of each sprint.

I had been trying, unsuccessfully, to change our approach to work. Prior to this particular stand-up I sat down and noted some thoughts against SPIN. With my preparation in mind, at the stand-up I said something like:

"It seems like the work isn't being delivered to testing until really late in the sprint, and then everything arrives at once. This means that we keep running out of time for testing, or we make compromises to finish on time. 

If we run out of time, then we miss our sprint goal. If we compromise on test coverage, then we all doubt what we are delivering. Both of these outcomes have a negative impact on our team morale. At the end of each fortnight I feel like we are all pretty flat. 

I'd like us to try having developers work together on tasks so that we push work through the process, rather than individual developers tackling many tasks in the backlog at once. That way we should see work arrive in testing more regularly through the sprint. What do you think?"

To my amazement, this was the beginning of a conversation where I finally convinced the developers to change how they were allocating work.

Did you spot the SPIN in that example?

  • Situation - What I see - It seems like the work isn't being delivered to testing until really late in the sprint, and then everything arrives at once.

  • Problem - Why I care - This means that we keep running out of time for testing, or we make compromises to finish on time. 

  • Implication - Why you should care - If we run out of time, then we miss our sprint goal. If we compromise on test coverage, then we all doubt what we are delivering. Both of these outcomes have a negative impact on our team morale. At the end of each fortnight I feel like we are all pretty flat. 

  • Need - What I think we should do - I'd like us to try having developers work together on tasks so that we push work through the process, rather than individual developers tackling many tasks in the backlog at once. That way we should see work arrive in testing more regularly through the sprint.

In the first few conversations where I applied SPIN, I had to spend a few minutes preparing. I would write SPIN down the side of a piece of paper and figure out what I wanted to say in each point. This meant that I could confidently deliver my message without feeling like I was citing the different steps of a sales technique.

Preparing for a conversation using SPIN

SPIN in a retrospective

As I became confident with structuring my own conversations using SPIN, I started to observe the patterns of success for others. Retrospectives provided a lot of data points for both successful and unsuccessful attempts at persuasion.

Many retrospective formats encourage participants to write their thoughts on sticky notes. When prompted with a question like "What could we do differently?" I noticed that different people would usually note down their ideas using a single piece of SPIN. Where an individual consistently chose the same piece of SPIN in their note-taking, they created a particular perception of their contributions among the audience.

Let me explain this with an example. Imagine a person who takes the prompt "What could we do differently?" and writes three sticky notes:
  1. We all work from home on Wednesday
  2. The air conditioning is too cold
  3. Our product owner was sick this week
All three are observations: the 'situation' of SPIN, describing what the writer sees. Though the writer might be thinking more deeply about each, without any additional information the wider team are probably thinking "so what?"

Similarly, if your sticky notes are mostly problems, then your team might think that you're whiny. If your sticky notes are mostly solutions, then your team might think that you're demanding. In the absence of a rounded explanation your contribution can be misinterpreted.

I'm not suggesting that you write every retrospective sticky note using the SPIN format!

I use SPIN in a retrospective in two ways. Firstly to remind myself to vary the type of written prompt that I give myself when brainstorming on sticky notes, to prevent the perception that can accompany a consistent approach. Secondly to construct a rounded verbal explanation of the ideas that I have, so I have the best chance of persuading my team.

SPIN with gaps

There may be cases where you cannot construct a whole SPIN.

Generally I consider the points of SPIN with an audience in mind. When I think about implication, I capture reasons that the person, or people, that I am speaking to should care about what I'm saying. If I'm unable to come up with an implication, this is usually an indicator that I've picked the wrong audience. When I can't think of a reason that they should care, then I need to pick someone else to talk to.

Sometimes I can see a problem but I'm not sure what to do about it. When this happens, I use the beginning of SPIN as a way to start a conversation. I can describe the situation, problems, and implications, then ask others what ideas they have for improvement. It can be a useful way to launch a brainstorming activity.

Conclusion

SPIN is one facet of persuasive communication. It offers guidance on what to say, but not how to say it. In addition to using SPIN, I spent a lot of time considering the delivery of my arguments in order to improve the odds of people accepting my ideas.

Though I rarely have to write notes in the SPIN format as I did originally, I still use SPIN as a guide to structure my thinking. SPIN stops me from jumping straight to solutions and helps me to consider whether I have the right audience for my ideas. I've found it a valuable technique to apply in a variety of testing contexts.

Thursday, 11 May 2017

Three styles of automation

At Let's Test next week I have the privilege of presenting a hands-on workshop titled "Three styles of automation". The abstract for the session reads:

A lot of people use Selenium WebDriver to write their UI automation. But the specific implementation language and coding patterns will differ between organisations. Even within the same organisation, a set of front-end tests can look different between different products.

Katrina will share three different approaches to Java-based UI automation using Selenium WebDriver from her organisation. She will explain the implementation patterns, the reasons that differences exist between repositories, and the benefits and drawbacks of each approach.

Participants will download three different suites that each implement a simple test against the same web application. Once they have a high-level understanding of each code base, they will have the opportunity to execute and extend the test suite with targeted hands-on exercises.

In this post I share the code and resources that I'll be using in this workshop. Though you won't get the same level of support or depth of understanding as a workshop participant, I hope you will find something of interest.

Background

These three automated suites are written against a tool provided by the New Zealand tax department - the IRD PAYE and KiwiSaver deductions calculator. Each suite contains a single test that enters the details for a test employee and confirms that PAYE is calculated correctly.

Each suite is reflective of a framework that we use for test automation in my organisation: Pirate, AAS, or WWW. These public versions do not contain any proprietary code; they've been developed as a training tool to provide a safe place for testers to experiment.

Each training suite was created almost a year ago, which means they're showing their age. They still run with Selenium 2 against Firefox 45. We're in the process of upgrading our real automation to Selenium 3, and switching to Chrome, but these training suites haven't been touched yet.

The three suites illustrate the fundamental differences in how we automate for three of our products. Some of these differences are based on conscious design decisions. Some are historic differences that would take a lot of work to change. The high-level implementation information about each suite is:

Pirate
  • Uses Selenium Page Factory to initialise web elements in page objects 
  • Methods that exit a page will return the next page object 
  • Has an Assertion Generator to automatically write assertions 
  • Uses rules to trigger @Before and @After methods for tests 
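
To make the Pirate pattern concrete, here is a minimal sketch of a page object in that style. The locators, the parameter, and the SalaryPage class are my own illustrative stand-ins rather than the real suite's code: PageFactory initialises the annotated fields, and the method that exits the page returns the next page object.

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.FindBy;
    import org.openqa.selenium.support.PageFactory;

    public class UserAndTaxYearPage {
        private final WebDriver driver;

        @FindBy(id = "taxYear")        // illustrative locator, not the real one
        private WebElement taxYearField;

        @FindBy(id = "next")
        private WebElement nextButton;

        public UserAndTaxYearPage(WebDriver driver) {
            this.driver = driver;
            PageFactory.initElements(driver, this);  // wires up the @FindBy fields
        }

        // A method that exits the page returns the next page object,
        // so a test reads as a chain of pages
        public SalaryPage enterUserAndTaxDetails(String taxYear) {
            taxYearField.sendKeys(taxYear);
            nextButton.click();
            return new SalaryPage(driver);
        }
    }

    // Hypothetical next page, stubbed so the sketch is self-contained
    class SalaryPage {
        SalaryPage(WebDriver driver) {
            PageFactory.initElements(driver, this);
        }
    }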

AAS
  • Uses a fetch pattern to retrieve page objects 
  • Provides WebDriverUtils to safely retrieve elements from the browser 
  • Tests are driven by an HTML Concordion specification 
  • Uses inheritance to trigger @Before and @After methods for tests 
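
For comparison, here is a minimal sketch of the AAS style. The real suite wraps element retrieval in its WebDriverUtils helper, whose API I don't know, so this sketch inlines an explicit wait to show the same 'safe retrieval' idea; the static fetch method shows how a test obtains the page object it expects to be on.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class UserAndTaxYearPage {
        private final WebDriver driver;

        private UserAndTaxYearPage(WebDriver driver) {
            this.driver = driver;
        }

        // Fetch pattern: the test asks for the page it expects to be on,
        // rather than receiving it from the previous page object
        public static UserAndTaxYearPage fetch(WebDriver driver) {
            return new UserAndTaxYearPage(driver);
        }

        public void enterUserAndTaxDetails(String taxYear) {
            // Safe retrieval: wait for visibility before interacting, which is
            // the kind of guard a WebDriverUtils helper would centralise
            WebElement field = new WebDriverWait(driver, 10).until(
                    ExpectedConditions.visibilityOfElementLocated(By.id("taxYear")));
            field.sendKeys(taxYear);
            driver.findElement(By.id("next")).click();
        }
    }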

WWW
  • Uses Selenide as a wrapper for Selenium to simplify code in page objects 
  • Uses a Selenide Rule to configure Web Driver 
  • Uses @Before and @After methods in the tests directly
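
Finally, a minimal sketch of the WWW style as a test, since Selenide keeps the page-level code very thin. The URL and selectors are placeholders of mine, and I have omitted the Selenide Rule that would configure WebDriver in the real suite.

    import org.junit.Test;

    import static com.codeborne.selenide.Condition.text;
    import static com.codeborne.selenide.Selenide.$;
    import static com.codeborne.selenide.Selenide.open;

    public class PayeCalculatorTest {

        @Test
        public void calculatesPayeForTestEmployee() {
            // Navigation is just Selenide's open method (placeholder URL)
            open("http://example.invalid/paye-calculator");

            // $ finds the element and waits for it, so page code stays simple
            $("#taxYear").setValue("2017");
            $("#next").click();

            // A plain assertion on the result, expressed as a Selenide condition
            $("#summary").shouldHave(text("PAYE"));
        }
    }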

Installation

There are pre-requisite installation instructions to help you get the code running on your own machine. To get the tests executing within each framework, you may have to download and install:

  • git
  • Java
  • Firefox 45
  • An IDE e.g. IntelliJ
You can download the three suites from GitHub. If you haven't used GitHub before, you may need to create an account in order to clone the three repositories.

Comparison

The beauty of these training frameworks is that it is easy to compare the three implementations. If you are familiar with the way that one works, you can easily map your understanding to learn the others. 

In each suite you will see different page object implementation. The enterUserAndTaxDetails method in the UserAndTaxYearPage is a good example of the different approaches to finding and using web elements:

The same functionality implemented in three different ways
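
If you cannot see the image, the flavour of the difference inside that method is roughly this. These are illustrative fragments only; the locator and the WebDriverUtils method name are my inventions, not the real code:

    // Pirate: a field initialised by PageFactory is used directly
    taxYearField.sendKeys(taxYear);

    // AAS: the element is retrieved safely, via a WebDriverUtils-style wait
    webDriverUtils.waitForVisible(By.id("taxYear")).sendKeys(taxYear);

    // WWW: Selenide's $ combines lookup, waiting, and the action
    $("#taxYear").setValue(taxYear);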

There are different types of assertions in the tests. Pirate assertions are created by an automated assertion generator, AAS holds its assertions in an English-language Concordion specification, and WWW makes use of simple JUnit asserts.

The navigation through the application varies too. Pirate passes page objects, AAS implements fetch methods, and WWW simply uses the Selenide open method. 
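
Side by side, the navigation difference looks something like this (again illustrative, reusing the hypothetical SalaryPage from the sketches above):

    // Pirate: the method that exits a page hands back the next page object
    SalaryPage salaryPage = userAndTaxYearPage.enterUserAndTaxDetails("2017");

    // AAS: the test fetches the page object it expects to be on
    SalaryPage salaryPage = SalaryPage.fetch(driver);

    // WWW: navigation is just Selenide's open method
    open("http://example.invalid/paye-calculator");  // placeholder URL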

These differences are apparent when reading through the code, but to really understand them you are best to get hands-on. As a starting point, try adding to the existing test so that a 3% employee KiwiSaver deduction is selected, then make sure that deduction is reported correctly in the summary.

Conclusion

I don't claim that any of these frameworks are a best practice for UI automation. However, they represent a real approach to tests that are regularly executed in continuous integration pipelines in a large organisation. I wish that more people were able to share this level of detail.

I find the similarities and differences, and the rationale for each, to be fascinating. Given the variety within our team, it makes me wonder about the variety worldwide. There are so many different ways to tackle the same problem.

This is a whistle-stop tour of a three hour workshop. I hope to see some of you at Let's Test to have the opportunity to explain in person! If you cannot attend and have questions, or suggestions for improvements in our approach, please leave a comment below or ask via Twitter.

Saturday, 22 April 2017

Introducing testers to developers

While completing my Computer Science degree, I created a lot of software without testers. At the end of my qualification, I searched for my first role as a developer with a limited understanding of the other roles in IT that I would work alongside. I didn't know what a tester was, what they did, or how they might help me. I don't think this is an unusual position for a graduate developer to be in. 

At my first job we didn't have testers. Had I spent a longer period in that company, I may have become an experienced developer who still didn't understand what testers were. There are a lot of organisations that create software without employing dedicated testers. I don't think that a poor understanding of testers is an unusual position for an experienced developer to be in.

Developers who have never worked with testers are likely to have an understanding of testing, but as an activity rather than a role. Testing happens as part of their work rather than being led by a separate person. Why would testing be delegated when a developer can successfully write and release quality software on their own?

If you are a developer with this mindset or history, it can be really challenging to encounter a tester for the first time. Allow me to introduce you through analogy.

No Tester

A restaurant chef is a creator. They make a meal that is delivered to a customer. Quality is determined by the skill of the chef, the quality of their ingredients, and the practices that they follow.

In many restaurants the food is made, delivered to the table, and consumed. The earliest feedback that the chef receives is from the customer. In the event that they have a negative opinion of the meal, it can be difficult to fix the problem. The damage is often done.

There are parallels to software development without testers. Quality is determined by the skill of the developers, the quality of their platform, and the practices that they follow. Feedback comes directly from the customer, who can be unforgiving.

Restaurants can succeed in this model, as can software development.

Waterfall Tester

As a student, I worked part-time as a waitress. In one of the restaurants where I worked, the restaurant manager used a small workstation that was located directly outside the double doors to the kitchen.

The chef in this restaurant was temperamental and territorial, so the restaurant manager would rarely venture into the kitchen. However, during meal service, she would intercept the wait staff as we moved plates from the kitchen to our customers.

If the meals for a table were ready at different times, she would hold service until they could be delivered together. If the presentation of a dish was sloppy, she would tidy it up. In the rare event that she was unhappy with portion size or the quality of cooking, she would return the plate to the chef. 

The chef didn't like having his food returned. He would often over-correct in spite. You think the sweet and sour pork portion was too small?! Then you'll get double-sized dishes for the next hour! However he would ultimately settle into delivering a more appropriate size of meal.

On reflection, this restaurant manager was my first experience with a separate tester. Once development was complete she examined the final product. With a fresh perspective, she could identify problems that the kitchen had missed. 

The restaurant manager contributed to the quality of the product through small actions of her own or by reporting problems back to the chef so that he could make improvements. He may not have always responded well to these reports, but he did alter the meals, which improved what the customer received.

Testers bring the same contributions to software development - a fresh perspective to identify problems, fast feedback from someone with a customer focus, and the ability to make their own small contributions to the overall quality of the customer experience.

Agile Tester

The restaurant manager was improving quality of a finished dish. What if there was a way to improve the meal as it was being prepared? Then a chef could adapt as they worked, rather than trying to alter their end result.

Sometimes there are multiple elements of a meal cooking at the same time. As a chef, it can be difficult to keep track and sometimes things get burned. What if someone else had set a timer, noticed when it had lapsed, and could intervene?

Sometimes there are ingredients in a dish that look similar and might be easily confused. This week I prepared a meal with two spices: tagine spice mix and couscous spice mix. What if someone else had noticed when I opened the wrong packet?

Sometimes there is a requirement for consistency between meals. How can a chef be sure that the crème brûlée they create today is the same as yesterday? What if someone else could check that the recipe, portion size, and presentation met a minimum standard?

Hypothetically, this all sounds reasonable. Realistically, if you're a chef who likes to cook alone, this might sound like a nightmare. Perhaps you would rather have to throw away the occasional plate of food that was burned or tasted strange than accommodate another person in your creative space.

There is a parallel to the role of a tester in agile development. Success relies on collaborative working relationships that can be difficult to negotiate. Dealing with feedback from a tester can be frustrating, particularly when a developer is not used to receiving it. The tester will need to explain and demonstrate how their perspective can contribute to an improved product. 

It can be difficult for the developer to adapt their approach and accommodate feedback. If they find that the information from a tester is not relevant, timely, or delivered in a constructive manner, then they should tell the tester. It takes time to form a useful working relationship and investment is required from both sides.

Every Tester

Whether a tester is contributing in agile or waterfall, they want to help deliver the best possible product to customers. If you are a developer who is encountering a tester for the first time, the first thing to assume is positive intent.

Testers bring a fresh perspective to identify problems, fast feedback from someone with a customer focus, and the ability to make their own small contributions to the overall quality of the customer experience. Allow them to influence the product. Ultimately they help to create a better outcome than what you could achieve alone.

Wednesday, 5 April 2017

Test Coaching Competency Framework

A few weeks ago I was wondering how to explain the skills required for a Test Coach role. Toby Sinclair introduced me to Lyssa Adkins' Agile Coaching Competency Framework. It's a simple diagram of the core experience and skills that an agile coach might have:

Ref: Hiring or Growing Right Agile Coach by Lyssa Adkins and Michael Spayd

An individual can take this diagram and complete a relative assessment of themselves or others by shading in each slice, for example:

Ref: Hiring or Growing Right Agile Coach by Lyssa Adkins and Michael Spayd

In a presentation about the framework, Lyssa Adkins and Michael Spayd explain how it can be used as a self-assessment tool, for hiring and developing agile coaches, or to help communities of agile coaches to grow.

I like the simplicity of the framework and could see an opportunity to apply it to the testing domain to explain test coaching. As a small disclaimer, I have never heard Lyssa and Michael explain their framework in person, so I'm not entirely sure how true my interpretation is to their original intent.

Test Coaching Competency Framework

The test version of the framework is almost identical to the original:


Starting at the top, testing practitioner reflects the relevant experience that a person has. Different people will have different interpretations of what this means. Shading of this slice might reflect number of years in industry, number of organisations, variety of roles, etc.

If occupying a role is reflected at the top of the framework, the knowledge that you build through that experience is reflected at the bottom as technical, business, and transformation mastery. In simple terms, shading of the practitioner slice shows what you've done, shading in the mastery slices shows what you've learned as a result.

Technical mastery in testing might include:
  • test automation frameworks
  • coding skills
  • continuous integration and delivery tools
  • development practices e.g. source control, code review
  • basic understanding of solution architecture
Business mastery in testing might include:
  • shift-left testing skills e.g. Lean UX, BDD
  • user experience, usability, and accessibility testing
  • targeted exploratory testing and development of test strategy
  • business intelligence and analytics
  • relevant domain knowledge
Transformation mastery in testing might include being a leader or participant in:
  • switching test approach e.g. waterfall to agile
  • continuous improvement in development practices
  • engaging beyond the development team e.g. DevOps
  • change management

On the left of the framework are the skills that are important to communicate mastery to others: training and mentoring. The distinction is in the size of the audience. Training is prepared material that is delivered to a group of people. Mentoring is transfer of knowledge from one individual to another. Both training and mentoring require the coach to have knowledge of the topic that they're helping someone with.

On the right of the framework are the skills that are important to enable others to find their own solutions: facilitating and coaching. The same distinction in size is present: facilitating is to a group of people and coaching is between individuals. A person conducting these activities is guiding a group or an individual towards the discovery of their own solutions. Skill in facilitation and coaching does not come from personal mastery of a topic, but instead in developing the ability to draw the best from people through questioning.

Using the framework

I needed to be able to explain the skills of a Test Coach in order to recruit. I have used my version of this framework as an activity in eight Test Coach interviews so far. I draw the framework on a piece of paper, explain what is included in each slice as above, then ask the candidate to assess themselves and talk about their decisions.

Here's an example of how one candidate responded:



I've found the test coaching competency framework is an excellent alternative to the general prompt of "tell us a bit about yourself". I use it at the start of the interview to give more structure to the opening conversation and clearly illustrate what I'm interested in hearing from the candidate. 

I've found that it empowers people to talk more freely about both their achievements and opportunities for growth. I like that it gives control of the conversation to the candidate for a period of time rather than asking them to respond to specific questions for the entire interview.

What do I do with the results? I haven't been seeking an individual who is strong in every area, but I am looking to build a team of coaches with complementary skills and experiences. The shaded frameworks are helpful in determining how a group of individuals might become a team. Once a team is formed, I imagine that this information could help guide initial allocation of work and allow me to identify some opportunities for cross-training.

I have found the test coaching competency framework useful and would recommend it to others who are looking to build test coaching teams.

Sunday, 12 March 2017

How do you hire a junior tester?

I recently participated in a workshop that was co-hosted by the New Zealand Qualifications Authority (NZQA) and New Zealand IT Professionals (ITP). They're exploring the idea of creating a new qualification for software testing within our tertiary education system.

One of the topics of conversation that I found interesting was the question of how employers currently recruit junior testers. By junior, I mean a person with no experience in testing, regardless of age or other work history. As there are no dedicated testing qualifications* at present, where do new testers come from?

I sat in a discussion group with Aaron Hodder, Adam Howard, and Anna Marshall. We came up with a list of ways that we found the junior testers in our organisations that included, in no particular order:
  • the business - subject matter experts who show aptitude for testing
  • overseas - new arrivals to New Zealand may start in a junior role
  • internships - through programs like Summer of Tech
  • consulting firms - local consultancies who run their own graduate training and placement
  • graduates - those who are fresh from study

In my experience, finding candidates for a junior testing role is not a problem. When I've advertised a role that is suited to the wide audience defined above, I've had a lot of applicants. Testing is seen as a pathway into the IT industry, so there is a lot of interest.

I think that the workshop question is more interesting when it is considered in a slightly different way. As there are no dedicated testing qualifications at present, by what criteria do you recruit junior testers? In other words, how do you decide which person to hire?

I assess the testers who work in our agile delivery teams by six criteria that can be broadly summarised as:
  1. Testing knowledge and experience
  2. Automation knowledge and experience
  3. Agile knowledge and experience
  4. Domain knowledge
  5. People skills
  6. Growth mindset

I'm not looking to hire people who hit all of those criteria. I am looking to create testing teams with complementary individual strengths that mean we are collectively strong in all of those criteria. This applies across all testers, not just juniors.

When I hire a junior, I'm generally looking at the latter criteria in the list. While a junior may have knowledge of testing, automation and agile, it is likely to be entirely theoretical. I probably have strong testing, automation, and agile skills in the existing testing team anyway. The strengths that a junior might bring are their domain knowledge, people skills, and growth mindset.

I've talked previously about why you should hire junior testers. The benefits that I see in making junior testers part of the team, which largely focus on their attitude and behaviour, can be difficult to quantify. Similarly, the attributes that I am looking for in order to realise those benefits can be difficult to quantify, which means that they generally aren't assessed in an IT qualification.

I look for juniors who can communicate and work well in a team. People who are eager to learn, pick up new ideas quickly, and constantly search for a better way of doing things. Perhaps people with these traits are more likely to pursue higher education, but not necessarily. A person can pass a qualification without possessing these particular skills.

So, would it be useful to create a software testing qualification? Perhaps. If the qualification had a strong syllabus that was delivered in a practical manner, then hiring someone with this qualification might save some time when explaining basic concepts.

Would the presence of a software testing qualification change how I hire junior testers? Perhaps. The presence of such a qualification might increase the chance that I interview a candidate as I would spot it when screening CVs, but I'm not sure that it would have a significant bearing on the final criteria by which I hire.

Does New Zealand need a new software testing qualification? Perhaps. When I received the invitation to the workshop I thought it sounded like a great idea. As a trainer I was excited about the challenge of creating a syllabus. But the more that I've thought about it from the perspective of an employer, the less sure I become.

Now I'm curious. How do you hire a junior tester? By what criteria do you choose a candidate? Is there a software testing qualification in your country? Do you think there should be? I welcome your comments below.


*****

* ISTQB is a certification, not a qualification. A certification is an official document attesting to a status or level of achievement. A qualification is a pass of an examination or an official completion of a course, especially one conferring status as a recognized practitioner of a profession or activity.