🗓 Completed in Q1 2022
🏢 Company overview
MrQ is a UK-based online casino that was acquiring 10,000 new depositing customers month on month at the time of writing.
MrQ is powered by Spark, its in-house CMS built to address the specific needs of the company and industry. The tool provides the granular flexibility the business needs - something only possible with a bespoke solution.
In Q1 2022 I was tasked with identifying and executing improvements to Spark that would boost stakeholder productivity and reduce the operational friction caused by legacy decisions.
👥 Team & responsibilities
- Head of operations
- Head of data
- Head of CRM
- Content manager
- 2 front-end developers
- Product designer (myself 🙂)
My responsibilities:
- Auditing product
- Interviewing stakeholders
- Documenting problems to be solved
- Writing and prioritising user stories
- Identifying opportunities
- Defining project scope
- Interaction design
- UI design
- Development speccing
Address problems having a negative impact on stakeholder workflows
Spark had been a side gig for one in-house developer for years. The lack of focus and UX consideration led to a number of problems impacting stakeholders, namely:
- Product bloat and irrelevant features.
- Missing features which stakeholders needed.
- Unnecessary friction throughout core use cases.
- Stakeholders lacked confidence using the tool.
My task was to identify bottlenecks and points of friction, and to address them so that stakeholders could be more confident and productive in Spark.
Identify opportunities for new, valuable features
Stakeholders had been working sub-optimally for years due to missing features. One of my responsibilities was to understand stakeholder needs and to propose new features that would drastically improve their workflows and increase business value.
Simplify the user experience in Spark
Spark looked messy and was intimidating to use. It was self-evident that portions of the product needed re-architecting and that the tool should be evaluated against usability heuristics. My responsibility was to put stakeholders and their tasks front and centre, and to mask the tool’s complexity behind a user-friendly experience.
🎙 Interviewing stakeholders and documenting key findings
My first step was to interview stakeholders who used Spark regularly. Five stakeholders from different functions (Operations, Content, Data, QA) were interviewed to ensure that the biggest problems across their unique workflows would rise to the surface.
During the interviews, stakeholders were asked to share their screen, jump into Spark, then go through some typical workflows and explain why they were frustrating to execute. Throughout the interviews I tried my best to listen carefully and ask clarifying questions, as opposed to probing for problems which were not raised by stakeholders - and therefore arguably non-existent.
After each interview, I reviewed the recorded call to begin synthesising my findings. Key findings were compiled into Notion for easy reporting and sharing, and to give me a backlog which I could prioritise and action.
All findings were framed under four lenses - user story, pain point, frequency and proposed solution - in order to provide holistic and well-articulated evidence.
Spark positions newly added games at the bottom of the games list.
Frequency: approx. 6-7 games / day
User story: As a QA engineer, I want to position new games at the top of the games list in Spark so that they may be easily discovered by users and tested for their popularity in production.
Pain point: When I add new games into Spark, those games automatically get positioned at the bottom of the games list. I need to manually drag new games to the top of the list (which is over 1,000 titles long) and the UI doesn’t allow me to do this quickly and easily.
Proposed solutions:
1. Position newly added games at the top of the games list by default.
2. Make it possible to choose a position for new games when they’re being added into Spark.
Games are easily mislabelled with the wrong game provider.
Frequency: approx. 10-15 games / week
User story: As a content manager, I want to ensure that new games are labelled with the correct game provider so that PQ insights relating to game providers are correct.
Pain point: When I’m setting up a new game, Spark fills the ‘game provider’ field with the wrong provider. If I forget to update the field with the correct game provider, that game will be mislabelled and PQ will treat it as one from the wrong provider.
Proposed solution: When setting up a new game, instead of having Spark prefill the game provider field, it would be better if I were asked to manually select the correct game provider.
Spark doesn’t allow users to specify a position for new games when they’re being added.
User story: As head of operations, when I’m adding a new game to Spark, I want to be able to choose its target position, so I can ensure the correct order of games across the ‘All games’ and ‘New games’ categories on site.
Pain point: When I add new games to Spark, they’re placed at the bottom of the games list. Even though I wouldn’t want new games to be placed at the top (as that would make the ‘All games’ and ‘New games’ tabs on site reflect the same games in the same order), I’d still like to give newly added games a fair, specifiable ranking under the ‘All games’ tab.
Proposed solution: Allow users to specify a destination position for games which are being added to Spark.
Spark doesn’t indicate the reason a game has been disabled.
Frequency: approx. 2 games / week
User story: As a content manager, I want to be able to re-enable games that have been disabled so that users can play them and CRM can promote them.
Pain points:
1. Sometimes CRM make plans which involve specific games that are disabled in Spark. When this happens I might need to re-enable them, but this creates overhead as I’d need to find out why they’ve been disabled in the first place and whether they should remain disabled.
2. Sometimes players ask us to add games to our product which we do in fact have, but which have been disabled. Figuring out why they’ve been disabled is a tedious process as Spark does not indicate the reason.
Proposed solution: When disabling a game, stakeholders should be able to record the reason, which would be saved to Spark. It would also be useful if stakeholders were notified about disabled games, as this impacts CRM’s plans.
Games are missing some essential metadata in Spark.
User story: As head of operations, I want games to be tagged with rich metadata, so that I can manage games at a very granular level.
Pain point: We’re currently storing lots of metadata in Google Sheets. If this data were moved into Spark, we’d be able to manage games at granular levels - for example, by having visibility over game volatility, max exposure, game RTP, theme and much more.
Proposed solutions:
1. Allow users to port metadata from Google Sheets into Spark.
2. Add a metadata panel to the ‘add new game’ page in Spark with the respective input fields.
3. Make games retrievable by metadata under the ‘all games’ list in Spark.
When new games are added into Spark, we need to manually move them to the correct category.
User story: As head of operations, when I’m adding a new game into Spark, I want that game to be automatically added to the correct vertical and category in the product (based on the game’s metadata), so I can avoid doing this manually.
Pain point: Since Spark is missing some inputs in the ‘add new game’ flow, such as ‘game category’ and ‘game type’, new games need to be dragged to the correct category after they’re added into Spark.
Proposed solution: Add metadata fields to the ‘add new game’ page which determine where a game will live in the product.
💎 Designing meaningful changes
Adding rich and meaningful metadata to games
Prior to this project, stakeholders were attaching very basic metadata to new games in Spark. The first reason was that the operations team were unable to fully leverage metadata at the time, rendering it useless. The second was that adding new games was already a lengthy process in Spark, so there was hesitation around lengthening it further.
On the other hand, the data team had cited a number of benefits of attaching rich metadata to games moving forward, the most notable being the ability to offer highly personalised experiences to users - a vision which brought the whole organisation into alignment.
Before kicking off, I spent time learning about the company’s vision for a personalised user experience moving forward, and what kind of information we’d need to start attaching to games in order to fulfil this vision. I also made an unexpected discovery during these conversations: we already had metadata for our current 1,000-plus games - the data was stored in a spreadsheet!
My task from here on appeared to be twofold as I needed to:
- Find a way to port all the metadata for our 1,000-plus games from a spreadsheet into Spark.
- Find a way to help stakeholders attach more metadata to new games moving forward, without drastically increasing the effort required.
To port the existing metadata from the spreadsheet into Spark, the back-end team wrote a one-off script - voilà, problem solved.
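The script was the back-end team’s work and I wasn’t party to its internals, but the general shape can be sketched as follows. This is a hedged illustration only: the CSV column names, the sample rows and the Spark endpoint mentioned in the comment are all assumptions, not the real implementation.

```python
import csv
import io
import json

# Hypothetical CSV export of the metadata spreadsheet (column names assumed).
SHEET_CSV = """game_id,volatility,rtp,max_exposure,theme
slot-001,high,96.5,250000,egypt
slot-002,low,94.2,100000,fruits
"""

def rows_to_payloads(csv_text):
    """Map each spreadsheet row to a metadata payload for a Spark game record."""
    payloads = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        payloads.append({
            "game_id": row["game_id"],
            "metadata": {
                "volatility": row["volatility"],
                "rtp": float(row["rtp"]),
                "max_exposure": int(row["max_exposure"]),
                "theme": row["theme"],
            },
        })
    return payloads

if __name__ == "__main__":
    for payload in rows_to_payloads(SHEET_CSV):
        # In the real script this would be sent to some internal Spark
        # endpoint, e.g. PUT /api/games/{game_id}/metadata (hypothetical).
        print(json.dumps(payload))
```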
For the second issue, a number of new metadata fields were introduced to the process of adding new games in Spark. In order to help stakeholders fill all metadata fields quickly, a few design decisions were made, namely the following.
Fields were sequenced to make form pre-filling possible
Upon inputting a title for the new game, the respective game ID fields (3 fields) and betting fields (5 fields) would be pre-filled. This decision alone allowed 8 out of 22 fields to be pre-filled, completing almost a third of the total form.
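As a minimal sketch of this pre-fill behaviour: choosing a game title fills 8 of the 22 form fields from a lookup. The field names and catalogue entry below are invented for illustration; the real pre-fill presumably pulled from provider data inside Spark.

```python
# Invented catalogue keyed by game title; the 3 ID fields and 5 betting
# fields below are illustrative names, not Spark's real schema.
PROVIDER_CATALOGUE = {
    "Starburst": {
        # 3 game ID fields
        "desktop_id": "sb-d-01", "mobile_id": "sb-m-01", "provider_id": "netent-112",
        # 5 betting fields
        "min_bet": 0.10, "max_bet": 100.0, "default_bet": 1.0,
        "bet_levels": 10, "coin_values": 5,
    },
}

def prefill_form(title, form):
    """Return a copy of the form with all known fields pre-filled for the title."""
    filled = dict(form)
    filled.update(PROVIDER_CATALOGUE.get(title, {}))
    return filled

# An empty form: 8 pre-fillable fields plus one the user must still complete.
form = {field: None for field in ("desktop_id", "mobile_id", "provider_id",
                                  "min_bet", "max_bet", "default_bet",
                                  "bet_levels", "coin_values", "category")}
print(prefill_form("Starburst", form))
```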
Drop down inputs were used for fields that would require one or more pre-defined inputs
An example of this would be the ‘game category’ input field. Since new games could be added to one or more existing game categories in the casino, it made little sense to have stakeholders manually type a game category to add the new game to.
In total, 5 out of 22 fields were made into drop down inputs, removing the need to type in almost a quarter of the total form. In order to give stakeholders visibility over multiple inputs pertaining to a single field, each individual input was designed to look like a tag - which could also be deleted.
Choice chips were used for inputs which required choosing one option from a few pre-defined options
An example of this would be the ‘game volatility’ settings. Since a game’s volatility could either be high, medium or low, it made little sense to have stakeholders manually fill in the field, or to hide the options in a drop down input.
Keeping team members in the loop when games get disabled
Disabling casino games (making them unavailable to users) is part and parcel of casino operations. Games may be disabled for a number of reasons, such as bugs. In any case, prior to this project, the process around disabling games posed issues for a few teams.
Stakeholders would disable games by entering the respective game pages in Spark and clicking the ‘disable’ button - sounds straightforward, right? The issue was that stakeholders who’d just disabled a game would have to manually update team members in Slack, as Spark provided no automated messaging. This limitation added stress for stakeholders by increasing the manual effort required, and proved disruptive to teams who may have been planning promotions around games that had just been disabled.
Here’s how stakeholders’ process around disabling games looked in short:
This issue was addressed by having stakeholders specify their reason for disabling a game while doing so in Spark. Following this step, and in order to keep team members in the loop, disabling a game in Spark would trigger an automated message sent to a dedicated Slack channel. The message would contain the game’s title, a ‘disabled’ tag, the editor who disabled the game, and a date and time stamp.
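The automated notification can be sketched roughly as below. The payload shape follows Slack’s standard incoming-webhook format (a JSON body with a "text" key); the message wording, emoji and webhook URL are placeholders rather than the real implementation.

```python
import json
import urllib.request
from datetime import datetime, timezone

def build_disable_message(game_title, editor, when=None):
    """Build a Slack incoming-webhook payload announcing a disabled game."""
    when = when or datetime.now(timezone.utc)
    return {
        "text": (
            f":no_entry: [DISABLED] {game_title}\n"
            f"Disabled by {editor} on {when.strftime('%Y-%m-%d %H:%M')} UTC"
        )
    }

def post_to_slack(webhook_url, payload):
    """POST the payload to a Slack incoming webhook (URL is a placeholder)."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

message = build_disable_message("Mega Fortune", "content.manager")
print(message["text"])
# post_to_slack("https://hooks.slack.com/services/...", message)
```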
Team members were also given visibility over the same details in the respective game pages in Spark, so that they wouldn’t have to bounce between tools to fetch details. To view the details, stakeholders merely had to hover over the respective game’s ‘disabled’ badge - that interaction would reveal a tooltip housing the specifics.
Implementing the changes shortened stakeholders’ process to the following:
Making it easier to place new games in the right place
In order for games to be easily found by users in the casino, stakeholders first need to catalogue new games properly in Spark. More specifically, when adding a new game, stakeholders need to specify the game categories the game belongs to, as well as a position number for the game within each of those categories.
Prior to this project, new games were automatically added to the very bottom of the ‘New games’ category; stakeholders would then locate the game at the bottom of the pile and manually drag it up to the intended position. This process was tedious and unnecessary, given that new games were typically meant to be placed towards the top to encourage their discoverability by users.
To address this issue, newly added games were placed at the top of the ‘New games’ category by default. This was visualised in Spark’s UI by displaying the full list of new games, which also allowed stakeholders to easily edit the new game’s position.
In order to help stakeholders assign newly added games to multiple categories, a drop down was included with a selection of categories to choose from. After choosing categories, the respective game would be assigned position ‘1’ in each category; stakeholders could then shuffle between the categories and edit the game’s position under each one.
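The default-positioning rule described above can be sketched as follows, assuming (purely for illustration) that each category is an ordered list of game titles.

```python
# Sketch of the default-positioning rule: a newly added game is inserted at
# position 1 in every selected category, shifting existing games down by one.
def add_game(categories, game, selected):
    """Insert `game` at the top of each selected category list, in place."""
    for name in selected:
        categories.setdefault(name, []).insert(0, game)
    return categories

def move_game(categories, name, game, position):
    """Move `game` to a 1-based `position` within one category."""
    games = categories[name]
    games.remove(game)
    games.insert(position - 1, game)

catalogue = {"New games": ["Older Slot"], "All games": ["A", "B", "C"]}
add_game(catalogue, "Fresh Slot", ["New games", "All games"])
print(catalogue)
```

After the default insert, a stakeholder can still shuffle between categories and reposition the game in each one independently via `move_game`.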
View the Figma file below to get a clear picture of all the flows detailed in this case study.