I led a rapid contextual inquiry project, the objective of which was to develop an in-depth understanding of our users and their attitudes and behaviours towards HCM/payroll products and services. The research program lasted seven weeks from inception to delivery and involved a series of one-hour user interviews and observations carried out within users' offices across Australia and New Zealand.
We had roughly seven weeks to recruit, schedule, interview and observe around 20–25 users at different levels of the business, across four cities in Australia and New Zealand. No sweat.
EXPLORATORY USER RESEARCH
When I joined Fusion5 one of my first initiatives was to establish a contemporary approach to user research that would be used throughout the Jemini project. Because user research was a new technique within Fusion5 I wanted to ensure the whole process end-to-end was transparent and that the results were highly visible around the office, so the entire team could 'come along for the journey' so to speak.
Fusion5 was a traditional development-led software shop, so there had been little interaction between developers and users. One of the key outcomes I wanted from the research was a shared understanding of our users amongst the Jemini team, and to challenge any assumptions we held about our users before we started designing.
At the beginning of the process I sat down with the broader Jemini team to gather their expectations and get a feel for what each person hoped the research would deliver for their role on the project.
Because this was the first major research piece for project Jemini, I wanted to cover a lot of ground while keeping things high level, avoiding discussions of specific functionality. I focused on identifying users' frustrations, goals, attitudes and behaviours when using our software. I decided that a combination of one-hour interviews and in-context observation would give us a well-rounded dataset to analyse. In addition to the interviews and observation, I also designed a series of affinity workshops with key clients to validate some of the findings that would come out of the interviews.
WORKING OUT WHAT QUESTIONS TO ASK
To determine what interview questions to ask, I used a technique that involves mapping out a series of attributes I wanted to understand about our users, then working backwards to determine which questions would help uncover those insights. It's not a perfect science, but I've found it's a great framework for aligning new teams around question formulation.
Once we had our interview questions confirmed, I created an interview guide along with some supplementary guides for the BAs who would be assisting me as second interviewers. These included a 'role of the second interviewer' guide and a 'what to photo' guide (pictured left).
The interviews were recorded using a LiveScribe pen and an iPhone (always have a backup). This freed me up to take more casual notes on things I found interesting and to concentrate on facilitating the interview. The second interviewer would take the majority of notes throughout each interview.
Immediately after each interview, the second interviewer and I would have a ten-minute debrief over coffee or on the drive back. This let us discuss anything interesting from the interview while it was still front of mind. Once back at the office we would manually transcribe the interview audio into a Google Doc.
For each interview I would have three reviewers — myself, the second interviewer, and a blind reviewer (typically the other BA, who had not been present at the interview) — cross-examine the transcription and highlight any interesting findings. These highlights would then be compared and discussed, and the strongest would become post-it notes for our affinity analysis.
BRINGING THE USER BACK TO THE OFFICE
In addition to the interviews, I also wanted to capture the context in which our users worked. Design teams can easily forget that not all users are working on retina cinema displays; all too often they are working on dual 17" monitors. Our designs must support that.
Interactions with our product don't occur in a vacuum. The context which surrounds users, the environment in which our users are working and the physical products they use to interact with our designs (screens, keyboards, smartphones, etc) all matter.
I had the second interviewer capture photos of our users' workplace environments so that we could share them with the rest of the team back at the office. We used the photos to not only support findings in the final report but to also bring the users back into the office for the rest of the team.
Our affinity mapping started off quick and dirty, but as we interviewed more users our process evolved and we started colour-coding post-its based on user roles: payroll, HR, managers, etc.
Picking a spare office off the main hallway and buying the most vibrant post-it notes available were no accident. I wanted to create a space that people wanted to walk past and peer into. We had many great conversations simply from people dropping by the research room to see what all the colours were about. We were the Venus flytrap of research war rooms.
Once we had clustered the post-it notes to a level where we could identify individual insights, I began writing the report, working with the BAs to examine each cluster and tease out answers. We asked ourselves: why was that the case? Did it relate to another cluster? Where were the correlations? What was similar between AU and NZ, and what was different? I encouraged the team to zoom in and out on detail, apply different lenses and explore any potential correlation (even crazy ones). This ensured we didn't get stuck with a single view of the data.
This method allowed us to quickly convert these findings into a research report.
The report itself was designed to be digested in two ways. Each section highlighted a major finding: the first overview page was always in blue and included a summary of the finding and its impact on the design, while the subsequent pages held additional detail and statistics. This meant developers could print a copy, strip out only the blue pages and skim the report in an hour, whereas those who needed a more detailed understanding could read every page. This approach went a long way in convincing a team of developers to read a fairly heavy user research report.
The front cover of the report and its A5 zine-style format were chosen deliberately to look enticing, not like a traditional consulting report. My distribution method saw me print a number of A5 booklets bound with rubber bands and leave them scattered around the offices (think lunch rooms, etc.). I wanted employees — not only within our team — to stumble upon them and flick through to see what we were getting up to.
Remember those attributes I mapped out earlier to form the interview questions? All of the interviewees were now placed on those attribute scales based on their feedback, and a number of repeatable patterns of behaviour began to emerge. For example, almost all payroll staff worked long days but few worked from home; they all had mobile devices but none used them for work purposes. These patterns were highlighted and would form the basis of our first six personas.
I created a number of persona posters and stuck them up around the office on any free surfaces I could grab. I also created packs of mini persona-decks for people to have on their desk, take out to clients and use in design workshops.
The research was incredibly well received within the team, across the broader organisation and amongst customers, and its outputs formed a key input into the design process. I have since written an article (see image below) on our approach for Fusion5's AU/NZ trade magazine, and was asked to speak on our user-centred approach at a national user conference.