
From 10% to 60% click-through to an RG-tool – why user interface design matters


How do players use a responsible gambling tool – why user interface design matters – talk by Natalia Matulewicz. Presented at the SNSUS Conference, Stockholm, June 2-3, 2015.

How do we increase the usage of responsible gambling tools? Is mandatory or voluntary the way to go?

This talk aims to inspire and give new ideas on how to increase players’ interest in and usage of responsible gambling tools. With the help of user interface design guidelines and persuasive technology principles, we went from 10% of players using the tool up to an impressive 60%.

Increasing click-through to RG-tools by simple redesign

To make sure you have impact, run quick experiments and redesign things based on usability principles. By doing so, we found out that displaying only one recommendation at a time makes more people click.

A click on a recommendation is a success for Playscan. It means that we’ve provoked a reaction or created interest in taking action.

Sometimes, the little things make a big difference. As Thaler and Sunstein write in their bestseller Nudge: “[S]mall and apparently insignificant details can have major impacts on people’s behaviour. A good rule of thumb is to assume that “everything matters””. With the “everything matters” mindset, the seemingly insignificant details can indeed prove fruitful beyond expectation.

Back in the day, Playscan always showed two recommendations to players. The rationale was a trade-off: give the player more than one choice, so that he could find something relevant, but not so many that he would be overwhelmed. Out of nothing but our own curiosity, we decided to test whether we were right.

We randomly divided our visitors into three groups, presenting them with one, two or three recommendations respectively. Next, we measured the click-through rate during a two-week period. At the end of it, we realized that we had left a good many clicks on the table.
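The talk does not say how the split was implemented; a common way to run this kind of experiment is to hash an anonymized player id into a bucket, so the same player always sees the same variant for the duration of the test. A minimal sketch in Python, where the function name and the player-id format are hypothetical:

```python
import hashlib

def assign_group(player_id: str, n_groups: int = 3) -> int:
    """Deterministically map a player to one of n_groups buckets.

    Hashing the id (rather than drawing a random number per visit)
    keeps a player in the same group across sessions.
    """
    digest = hashlib.sha256(player_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_groups

# Each group sees a different number of recommendations (1, 2 or 3).
tips_to_show = {0: 1, 1: 2, 2: 3}
group = assign_group("player-12345")
print(f"group {group}: show {tips_to_show[group]} recommendation(s)")
```

The deterministic hash is what makes the measurement clean: a returning visitor cannot drift between variants and muddy the per-group click-through rates.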

Our original design with two recommendations yielded a 20% click-through, measured as the proportion of players who clicked any tip. The three-tip version showed no significant difference, but our one-tip version did: 36% of players clicked the recommendation. Again: the only change was one recommendation instead of two – no other design changes, the same selection of recommendations, no new content – and from this we nearly doubled our click-through!
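A quick way to sanity-check such a result is a two-proportion z-test on the click counts. The talk reports the rates but not the group sizes, so the sample sizes below are purely illustrative:

```python
from math import sqrt, erf

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for a difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative group sizes; the reported rates are 36% (one tip) vs 20% (two tips).
z, p = two_proportion_z(clicks_a=180, n_a=500, clicks_b=100, n_b=500)
print(f"z = {z:.2f}, p = {p:.6f}")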

Lessons learned?

Hindsight is always 20/20, and there is a reasonable explanation for what we found: players are more likely to click through when there are fewer conflicting, and possibly confusing, recommendations to choose from. Still, before the test we thought we had an equally good theory of why two recommendations was the right number.

So while nearly doubling our click-through on recommendations just by simplifying was a big lesson learned, the biggest was without a doubt that “everything matters”. Ideas and hypotheses are a good starting point, but until proven they are just that: hypotheses.

Now, getting people to use our tools is only a first step in having impact. When it comes to recommendations, the next is having relevant ones. How do we make sure that they are? Well, we will test that too.

Making big data actionable by creating user personas

Talk by Natalia Matulewicz at the New Horizons in Responsible Gambling Conference February 2-4, 2015

What do you know about your online player? When players are anonymous, customer data becomes an important input for strategic business decisions made with limited information. However, big data often becomes a faceless collection of information rather than a true picture of the players’ wants and needs. One still needs to know how to interpret the data and how to combine it with other sources of information.

User Personas bring together big data and qualitative user research such as interviews, field studies and observations to form an overall picture of a user: their needs, goals and motivations. They also bridge the gap between what players claim to do and their measured actions, which in the context of gambling often differ. Combined with big data, User Personas answer three important questions: what are the main target groups, which target groups should be focused on to make the most impact, and how should communication towards those target groups be designed?

A shorter Self Test does not increase completion rates

Summary: Shortening the 16-statement Self Test within Playscan would yield a negligible improvement in completion rates. The length of the test is not the problem; players either drop off during the first couple of questions or complete the test.

A recurring concern about Playscan has been that 16 statements to consider in the Self Test may be too many. The player may grow impatient and abort the test, especially since the questions themselves can be sensitive and draining. We investigated whether a shorter introductory test with “gate questions” would increase the completion rate.

When a player clicks into the Self Test, an introductory text is displayed. Here, the player is encouraged to consider all gambling, at all gambling sites, during the past three months. The player is then asked to consider 16 statements, one at a time.

[Image: playscan_selftest1]

To investigate the usefulness of gate questions, we analyzed the results from the Self Tests at Svenska Spel between 2014-07-04 and 2014-10-14. The statistics are presented below, showing completion and drop-off rates.

[Image: playscan_selftest_drop_off_rate]

Looking at the numbers, the completion rate is quite satisfactory, in particular the 80% completion on web. This high number is likely due to the curiosity that brought the player to Playscan in the first place, and to the promise of a self-assessment at the end of the process. Self tests in general tend to have higher completion rates than surveys, thanks to the intrinsic motivation behind taking them.

The majority of the players who drop off do so at the first question. We also see a difference between channels: a 10% drop-off rate on web and 23% on mobile. The higher drop-off on mobile is hardly surprising, given users’ shorter attention span in the mobile context.

Only 10% of the started tests are dropped between questions 2 and 16, regardless of channel. Interestingly, the drop-off rate declines as the test continues.

This leaves us with a clear answer to the question of gate questions. We would have gained only 4% more completed tests if the test had consisted of four statements. That number is hardly worth chasing at the cost of players spending less time contemplating their gambling habits, or missing out on the nuances that the full 16 statements bring.
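The arithmetic behind that conclusion can be sketched with per-question drop-off numbers chosen to match the reported pattern (a 10% drop at question 1, about 80% overall completion, a declining tail); the exact per-question figures are assumptions:

```python
# Illustrative per-question drop-off (share of starters lost at each of the
# 16 statements): a large drop at question 1, then a declining tail.
drop_off = [0.10, 0.025, 0.02, 0.015] + [0.005] * 4 + [0.003] * 4 + [0.002] * 4

completion = 1.0
completed_after = []          # share still in the test after each question
for d in drop_off:
    completion -= d
    completed_after.append(completion)

full_test = completed_after[-1]   # share completing all 16 statements
short_test = completed_after[3]   # share that would complete a 4-statement test
print(f"full test: {full_test:.0%}, four statements: {short_test:.0%}, "
      f"gain from shortening: {short_test - full_test:.0%}")
```

Under these assumed numbers, a four-statement gate buys roughly four percentage points of completion: the small gain the analysis above dismisses as not worth the lost nuance.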

 

———

The research done at Playscan is not academically focused, but aimed at practical application. We are pragmatists, knee-deep in data to explore. Our mission is to help prevent problem gambling rather than to study it, so we spend our time chasing preventive effect wherever we sense it. We value agility and adaptation.

Where the territory is uncharted, our guiding light is curiosity and making a difference. Our data is local: we sometimes see wildly varying player behavior between operators, not necessarily because the players are different, but because the contexts and presentations are. We believe that the research community has a lot to learn about the importance of things like wording and design, and what we say will often be framed to show this. Our findings reflect the everyday player experience, which is neither universal nor static. It can change and, more importantly, can be changed.

At the same time, we have the deepest respect for formal research and academics. We welcome critique of our findings, and hope that others find inspiration and ideas to bring into the academic world. We are happy to help, and love to exchange experiences and ideas. Give us a call if you would like to help out!