This is the second in a series of posts examining some of the systemic problems that organisations tend to rub up against as they seek to ‘scale’ research activity. We are looking particularly at ‘dysfunctions’ that can result in, at best, ineffective work and, at worst, misleading and risky outcomes. You can start with the first post in this series here.

Here are five common dysfunctions that we are contending with.

  1. Teams are incentivised to move quickly and ship, and to care less about reliable and valid research
  2. Researching within our silos leads to false positives
  3. Research as a weapon (validate or die)
  4. Bias to certainty standardises dubious research practice
  5. Failure to mature research capability

In this post, we’re looking at the impact of our organisation structure on research outcomes.

Dysfunction #2 – Researching within our silos leads to false positives

The larger the organisation, the more fragmentation and dependencies you tend to get across teams. Teams are organised by product or platform, and then often by the feature set they work on. Occasionally teams are organised by a user type, and very rarely do you find teams arranged by user journey.

Even in this complex ecosystem of teams where dependencies are rife, the desire for autonomy remains. We tend to avoid reliance on other teams where possible; we don’t want our own team’s velocity or ability to ship to be slowed by anyone else. In this environment, collaboration between teams is tough: it can be hard to coordinate, and there’s little incentive to take the time and trouble. This leads to ever-greater focus, which, in theory, is great, except…

Beware the Query Effect

When it comes to research, we know how critical getting the right research question is. Getting the ‘framing’ of the research right is crucial because, as the Query Effect tells us (and as we know from our own personal experience), you can ask people any question you like and you’ll very likely get data in return.

Whenever you do ask users for their opinions, watch out for the query effect:

People can make up an opinion about anything, and they’ll do so if asked. You can thus get users to comment at great length about something that doesn’t matter, and which they wouldn’t have given a second thought to if left to their own devices. – Jakob Nielsen

By focussing our research around the specific thing our team is responsible for, we increase our vulnerability to the query effect. That little feature is everything to our product team, and we want to understand everything our users might think or feel about it, but are we perhaps less inclined to question our team’s own existence in our research?

Researchers are encouraged to keep the focus tight, to not concern themselves with questions or context that the team cannot control or influence.

I like to use this visual illustration of why that is problematic. Take a quick look at the image below. What strange sea creature do we have here, do you think? Looks quite scary, right?

Scary looking shadow in water

Oh but wait, when you pull back just a little more you realise the story is completely different, and all we have here is a little duck, off for a swim, nothing to worry us at all.

Duck swimming in water with shadow (no longer scary) below

How often is our research so tightly framed on the feature our team is interested in that we make this mistake?

We think something is important when actually, in the proper context of the real user need, it is not so important at all. Or conversely, we focus so tightly on something we think is important that what our users really care about sits just out of frame: just outside the questions we are asking, and which they are now so busily and helpfully answering, even though it is not the important thing.

I fear this is one of the most common dysfunctions we see in product teams doing research in the absence of people who have the experience, seniority and confidence to encourage teams to reshape their thinking.

What is the risk?

Research that is focussed too tightly on a product or a feature increases the risk of a false positive result. A false positive is a research result which wrongly indicates that a particular condition or attribute is present.

False positives are problematic for at least two reasons. Firstly, they can lead teams to believe that there is a greater success rate or demand for the product or feature they are researching than is actually the case when it is experienced in a more realistic context. And secondly, they can lead to a lack of trust in research – teams are frustrated because they have done all this research and it didn’t help them to succeed. This is not a good outcome for anyone.

The role of the trained and experienced researcher is not only to have expertise in methodology but also to guide teams to set focus at the right level, so that we avoid misleading ourselves with data and can be confident we are gathering data on the things that really matter, even if that requires us to do research on things our team doesn’t own and cannot fix, or to collaborate with others in our organisation. In many cases, that additional scope and effort is essential to achieving a valid outcome that teams can trust and use to move forward.

Original source – disambiguity
