The prevention of problematic gambling is a complex issue, and we at Playscan know it all too well. To learn which prevention initiatives are effective, we use the method of validated learning to acquire new knowledge.
By practising hypothesis-driven development for responsible gambling, we treat the development of new tools and services as a series of experiments that determine whether an expected outcome is achieved or not. This challenges the concept of fixed requirements when developing new features: instead, we iterate the process until we reach a desirable outcome.
6 steps toward hypothesis-driven development
1. We conduct user research and formulate a hypothesis
Let us look at an example: in interviews, we often ask users to describe their general attitude toward their risk assessment. We hear players ask themselves: “OK, so this is my risk assessment… but what do I do now?”
(This is where we get the chance to identify what the user expects from us. It is then our responsibility to design features that address the problem.)
Our hypothesis is:
We believe that clearly communicating the answer to the question “what do I do now?”
will result in more players reducing their risk level.
We will know we have succeeded when we see an X% increase in players lowering their risk level.
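The hypothesis template above can be captured as a small structured record. This is a hypothetical sketch; the field names and class are our own, not part of any Playscan tooling:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    belief: str            # the change we believe in making
    expected_outcome: str  # the behaviour change we expect
    success_metric: str    # how we will know we succeeded

h = Hypothesis(
    belief='Clearly communicate the answer to "what do I do now?"',
    expected_outcome="More players reduce their risk level",
    success_metric="X% increase in players lowering their risk level",
)
```

Writing the hypothesis down in this belief/outcome/metric shape keeps every experiment falsifiable: if the success metric is not met, the hypothesis is rejected rather than quietly reinterpreted.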
2. We define targets and points to measure
We base the work on the product’s Impact Map, a document that helps us drive our software development toward effect, meaning delivering the right responsible gambling initiative to the right player.
Example: X% more risk players know what to do in order to lower their risk level. This is measured with an online questionnaire, click-through on recommendations, and analysis of gambling behavior.
3. We design an experiment to test the hypothesis
Best practices and research inspire our work on a solution. We talk it through with our experts on problematic gambling, write the texts and produce real content.
4. We develop the solution
While bringing the solution to life, software developers, UX designers and copywriters work closely together, simply because that always gives us the best result. Then we launch it.
5. We validate the use, accept or reject the hypothesis
This is where we collect feedback from players and see whether the solution delivers the effect we expected. Did it work, or do we need to change anything? Here we learn, iterate and make it even better.
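Accepting or rejecting a hypothesis from player data is, at its core, a comparison of two proportions: how many players lowered their risk level before and after (or without and with) the new feature. A minimal sketch, using invented counts and a standard one-sided two-proportion z-test rather than any specific Playscan analysis:

```python
import math

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """z-statistic comparing group B's success rate against group A's."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)           # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: 120 of 1000 players lowered their risk level without the
# feature, 160 of 1000 with it.
z = two_proportion_z(120, 1000, 160, 1000)
accept = z > 1.645  # one-sided test at the 5% significance level
```

If `accept` is false, the hypothesis is rejected and we go back to step 1 with what we learned; neutral results are results too.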
Our most important work: we iterate!
To ensure that we stay on the right course, we work in short iterations, generally two weeks long. We build the system in small additions of user-valued functionality and evolve by adapting to user feedback. Have we stepped on any mines? Of course. But that is part of the game: we do not even expect to hit the target on the first try. Every experiment teaches us something new. Even with a great hypothesis (based on good observations or research), the results are sometimes just neutral. But this is why the method is so effective: we quickly get a hint of what seems to work and what does not.