The Float Tracker: Eliminating Cash Loss Through Data Vis | Waitrose & Partners

The Float Tracker system is an unofficial piece of software to track physical cash levels (also called “float levels”) and provide insights by graphing the data across time.

It achieves this by taking in data from various parts of the cash process systems, providing holistic ‘snapshots’ of till behaviours, contrasted with real world events.

The insights from the Float Tracker allowed us to reduce the overall levels of physical cash in each of the tills whilst actually increasing their ability to deal with unexpected surges in trade.

This post is part of a wider series discussing my various activities as Senior Cash Partner at my retail branch to overhaul operations, eliminate losses, and design & implement new processes.

This is a long and context heavy post. You may want to skip to the bit about the outcome.



This post concerns my experiences working in a retail branch and includes discussions of sensitive data and branch operations in order to contextualise the featured project. All information discussed has been carefully sanitised: identifying details have been removed as far as possible, individuals have been completely anonymised, and some details have been changed so that they remain representative without being real. This post is written in good faith and should not be taken as representative of real branches, people or organisations.

When I first took on the role of Senior Cash Partner at my branch, I inherited a cash office run by one person (who had just left), 13 cash-accepting tills (5 staffed tills and 8 self checkouts) with weekly cash losses in the hundreds, sometimes thousands of pounds per till.

We would lose hundreds of pounds' worth of National Lottery scratch cards every week, along with numerous prize-draw tickets. Some of our old-style self checkouts (SCOs) would “eat” notes, shredding them to pieces, and cash would be swapped between tills casually by users without the system knowing.

Paperwork was disorganised, investigations were near impossible to conduct or even know where to start, and it turned out that till operator training was more a game of Telephone than a formal process.

The person I was replacing was highly skilled and intelligent, but they were a jack of all trades across the branch and so unable to dedicate focus to the issues listed. It needed someone to focus in and take ownership, making the section “theirs”. The problem was just assumed to be unsolvable.

Let's employ a bit of the Socratic method to delve into the issue.

What Is a Till?

To understand how a retail unit tracks and deals with cash losses / gains (called simply ‘discrepancies’), it's useful to understand a few key bits of jargon and processes.

Once again, the information here is provided to give context and has been generalised to remove sensitive data and organisation specific details. Other retail units and shops will differ but most supermarket chains in the UK can be assumed to have generally similar processes.

Tills come in different forms:

  • Mainline tills: staffed tills with a conveyor belt and dedicated packing area. Suitable for use with shopping trolleys and only found in the larger branches.
  • Kiosk tills: smaller, more condensed staffed units with space for a basket and bag, usually serving restricted items like cigarettes, Lottery services, and spirits.
  • Self Checkouts, of which there are a few variants:
    • Security Scale SCOs: at our organisation these were the standard and are now being phased out in favour of more modern designs, the “Skinny SCOs”. These are old, noisy boxes, unintuitive to use, with insensitive scanning beds (the bit that goes “boop”). Being early-generation self checkouts, they are large, clunky and prone to breakdown, and their bagging-area security scales slow customers down, cause frustration and don't have any real benefits.
    • Cash-Accepting SCOs: the same as the Security Scale machines but with an extra unit added on the side to accept cash. These can still be found in some branches but are mostly being phased out due to unreliability.
    • Skinny SCOs: newer units which have no security scale and do not accept cash. These are more streamlined and take up much less space in terms of both breadth and depth. The screen and scanner are mounted on the front casing, which can swing upwards to access the computer and a very small receipt printer. These are mounted in line on a continual platform that is simply cut to length.

The COVID-19 pandemic accelerated the phase-out of the old-style SCOs, not just because of the unhygienic nature of physical cash, but also because of the sheer amount of time attendants had to spend maintaining them: unsticking stuck notes and coins, resetting the security scale, all of which restricted customer flow and inhibited social distancing.

I could write a whole post on the psychology of SCO’s and insights based on observation from a couple of years of working with them. If you’re interested in the psychology of checkout design, have a look at this report by my former colleagues at the Design Against Crime Research Center.

We went from having 5 staffed Kiosk Tills and 8 large Cash-Accepting Self Checkouts to an array of 12 cashless Skinny SCOs and three cash-accepting Kiosk Tills. Once COVID-19 hit, the middle kiosk was taken out of service and remained so until I left the partnership one and a half years later.

The new layout reduced the number of cash payments slightly, but a compensating effect meant that the remaining cash activity was intensified and concentrated on the remaining kiosks.

How Do Tills Deal With Physical Cash?

Where cash-accepting tills are concerned, there are three ‘types’ of cash held within the unit:

  1. Loose Coins: Held in the cash drawer (the bit that pops open for the operator to give change), separated by denomination.
  2. Notes: Usually adjacent to the loose coins in the cash drawer are the notes. Typically these are separated by denomination, but depending on the configuration of the drawer they may just be bunched together.
  3. Bagged Coin: A bulk reserve of bagged change kept in a secure location allocated to each till. Each bag holds a standardised amount of each denomination.

Should the till run out of a denomination in its loose coin drawer, it can be topped up from the bagged coin. Because the change is a reserve, the digital system draws no real distinction between Bagged and Loose Coins; it doesn't know or particularly ‘care’ what the denomination levels are or how much is still bagged vs loose; it's all part of the same lump sum. Remember this, it will become important later.

The digital system not only doesn't discriminate between Bagged Coin and Loose Coin, it doesn't track individual denominations at all; it sees all cash movement as an equivalent value in 1 pence units (the smallest GBP denomination).
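As a sketch of this lump-sum behaviour, here is a minimal, hypothetical till ledger; all class and field names are my own invention, not from any real system:

```python
# Hypothetical sketch: the till's ledger tracks a single pence total,
# with no idea which denominations make it up.

class TillLedger:
    def __init__(self, opening_float_pence: int):
        self.expected_pence = opening_float_pence  # one lump sum, in pence

    def record_cash_sale(self, total_pence: int, tendered_pence: int) -> None:
        # The system only knows net cash movement: money in minus change out.
        change_pence = tendered_pence - total_pence
        self.expected_pence += tendered_pence - change_pence  # == total_pence

till = TillLedger(opening_float_pence=50_00)
till.record_cash_sale(total_pence=17_32, tendered_pence=20_00)
print(till.expected_pence)  # 6732 — but which coins/notes are inside? Unknown.
```

The point the sketch makes: after the sale the system knows the drawer *should* hold £67.32, but nothing about how many £1 coins or 50ps remain.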

Factors influencing cash flow in and out

This disregard of individual denominations seems counter-intuitive at first but makes sense when you think about it from a system design point of view. How could a digital system possibly know what combination of change each specific transaction is using? How could it know the breakdown of the cash handed over by the customer?

Perhaps I gave them a 50p piece, or perhaps I gave them two 20s and a 10; perhaps the customer found a 50p, gave it to me, and was given a £1 coin (an exchange you're not really supposed to do, but it happens often).

There are ways to calculate what the optimal change should be, but so many factors come into play that ‘sub-optimal’ change is frequently given, including but not limited to the following:

  • Customers requesting specific change (e.g. £1 coins instead of an equivalent note)
  • A particular denomination runs out
  • Incorrect change being given accidentally or through malicious actions by staff
  • Customers regularly not accepting small change which then gets placed back in the till
  • Found change on the ground being placed inside the till
  • People performing scams which trick the cashier into giving them more change.

When accepting cash, the operator inputs the value given to them by the customer, which in turn is used to calculate the value they're told to give back, serving record-keeping purposes. For example, if the total is £17.32 and a customer gives me a £20 note, I'll select £20 from the option list to tell the till what I'm placing into it. The till will pop open the drawer and request I give back £2.68. It doesn't have any way of knowing how I made that value up.

It’s not uncommon for input errors to occur, meaning that these records are wrong, even though the till balances out in the end.

For instance, imagine I'm a customer purchasing something which totals £5 and I give you, the till operator, a £10 note. The input screen will let you type in the value or select from a list of “suggested” inputs, generated to save time typing “10.00”. You now accidentally hit £20 among the suggested values. The terminal will tell you to give me £15, but you know this is incorrect and instead give me £5.

The record will show the wrong figures but the till is still balanced in the end and I’ve still received the change I expect.
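The worked example above can be condensed into a few lines (illustrative values only, in pence):

```python
# Illustrative: an input error corrupts the *record* but not the balance.
sale_total = 5_00          # the purchase comes to £5
actual_tendered = 10_00    # the customer really hands over £10
recorded_tendered = 20_00  # operator mis-keys £20 on the suggestion screen

suggested_change = recorded_tendered - sale_total  # till says: give £15
actual_change = actual_tendered - sale_total       # operator actually gives £5

# Net cash that physically enters the drawer:
net_cash_in = actual_tendered - actual_change
print(net_cash_in == sale_total)  # True: balanced despite the wrong record
```

The recorded tender and recorded change are both wrong by £10, so the errors cancel and the drawer still balances.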


Discrepancies describe the difference between the expected value of a till and its actual contents.

The digital system tracks cash issued to each till; through its transaction records it knows how much cash it expects to find once totalled. By manually counting the contents of the till, we can produce a snapshot of this amount and compare the two.

Actual Value - Expected Value = Discrepancy

Some amount of discrepancy is inevitable and often self-balances; a little bit of change falls behind the till, a customer leaves their change instead of accepting it, a partner makes a mistake and gives too little change, a scammer makes off with some petty cash, some change is found on the floor, and so on.

Typical guidelines state that a discrepancy in the range of -£10 to +£10 is acceptable, and that the ±£5 range is very good.
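The calculation and the guideline bands can be sketched together; the counted quantities and the expected value here are made up, and the rating labels are my own:

```python
# A minimal sketch of the discrepancy calculation, all values illustrative.
# A manual count sums every denomination (keys are face values in pence).
counted = {200: 31, 100: 48, 50: 22, 20: 40, 10: 35, 5: 30, 2: 50, 1: 73}

actual_value = sum(denom * qty for denom, qty in counted.items())  # 13573p
expected_value = 132_00  # from the digital system's readout (hypothetical)
discrepancy = actual_value - expected_value  # Actual - Expected

def rate(pence: int) -> str:
    """Bands from the guidelines above; labels are my own shorthand."""
    if abs(pence) <= 5_00:
        return "very good"
    if abs(pence) <= 10_00:
        return "acceptable"
    return "investigate"

print(f"£{discrepancy / 100:+.2f} -> {rate(discrepancy)}")  # £+3.73 -> very good
```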

There are a couple of factors which contribute to discrepancies and compound issues relevant to cash processing.

1. A certain amount of discrepancy is unavoidable. Even with perfect actions from the operator (no human error), customers leave a proportion of change behind, choosing not to take it.

Sometimes (not often) coins come in that are either damaged or are actually a different currency that slipped through. We got a lot of international traffic from customers coming in from the airports and off of Eurostar so both customer and staff mistaking a euro for a pound was a common occurrence.

Then there is the fact that we do not have any operators who don't make human errors, given that they are humans. Change falls between cracks, incorrect change is sometimes accidentally given, and even the old SCOs would sometimes create errors; this is the imperfect nature of cash.

2. There is no reliable way for the tills to autonomously report on their actual contents. Even if you fitted the cash drawers with scales, cameras, and other features which could auto-measure the amount of cash at any given time, and even if you could guarantee the accuracy and calibration of such technologies, a bad actor could easily trick the system using random objects, fake coins, and sleight of hand.

Then there's the fact that such a magical self-reporting till would probably last a total of five and a half minutes in active use; despite appearances, a shop floor is a highly abrasive, wear-and-tear environment. Equipment needs to be durable: it needs to withstand constant use, hits and heavy loads all day, every day. Sensitive reporting equipment would not last.

There’s a reason that consumer printers are still, after so many years, notoriously unreliable. Machines that deal with physical mediums and lots of moving parts in this way require a lot of maintenance and consistency of the medium.

Think of the uniformity and quality-assurance systems of an industrial packing machine, churning out tens of thousands of units a day, that require full-time technicians to always be present, monitoring vital signs and performing repairs. The little coin scale we used to perform manual spot checks would frequently de-calibrate and was susceptible to errors if something was touching it or leaning on it.

Considering how customers crumple, un-crumple, fold, exchange, sit on, tear, write on, warp into non-Euclidean geometry, lick, drop, and rub physical cash over lifetimes that can span decades, the fact that self checkouts are able to run autonomously at all is a wonder. And autonomous, accurate reporting of cash levels is completely infeasible.

Reconciling Discrepancies

Now that we understand how to find a discrepancy, what can we do with the information?

Sessions are digital records which denote a period of time to which transactions are assigned. Typically opened per week, sessions allow us to cordon off a start and end date, giving defined periods in which to perform investigations and explain discrepancies, and then draw a line under them, providing a “clean slate” for the next block of time.

While it’s possible to close and open new sessions at any point during the week, a branch like ours will typically keep one session open for the entire week, reconciling the result at the end of the week.

This forms the basis of investigations (if necessary) and plans for improvements to hone in on issues, but it also provides a time frame for issuing new change to the tills.

In our efforts to tackle losses, our branch employed an optional operational policy of performing a ‘spot check’ every day before trade opens. This provides a consistent data stream through the week showing discrepancy changes day to day. Random Daily Spot Checks (RDSC) can be employed on top of this to target problem times, days, or individuals.

Spot checks can be performed at any time but are best utilised when they are spaced apart; a till is unlikely to change much between 1430 and 1500 but may be quite different between the hours of 0930 and 1500. The time taken to perform the check and the disruption of closing the till are the determining factors that restrict how often spot checks can happen.

Spot Checks

Spot checks allow partners to gain a snapshot of a till's state at any given time.

Spot checks involve manually measuring the contents of a till and producing a readout from the digital system including, amongst other things, the total expected value of the till’s cash levels. If done correctly, you have a verified data point for a specific timestamp and a discrepancy.

Spot checks involve shutting down the till until the count is completed and the readout is printed. If the till is being used while the digital system is being consulted then the expected readout will not match the actual count. Likewise, if the count has any sort of error, the entire spot check has to be re-done.

Given that spot checks can be done during trade, shutting down a till while customers are potentially queueing is not ideal. Spot checks have to be done quickly and efficiently, and if the count is bad then that time is considered a waste.

Spot Check Record Keeping

Spot checks vary between contexts and branches in terms of how they are performed. However there are consistencies in terms of paperwork which factored into the design of the float tracker.

Values are sometimes recorded on specific float record forms (or just on a strip of receipt roll paper) then the total values and discrepancy are recorded in columns on a spot check form. This form is usually refreshed weekly, with enough columns for each day, however it can be used flexibly using multiple sheets as needed. Recording values sequentially in columns allows managers to compare results and their changes day to day.

These forms are held for a month or so, tucked away in a file, only looked at during investigations and with no long-term records kept.

If you think of an individual spot check being a snapshot in time like a plot on a graph, a group of spot checks together form a picture of changes over time, and the more checks you have in a given period the clearer an image you have of what is going on at any given point.

Of course this assumes you have time to actually look at the data, compare it to other relevant datasets, and make sense of it all.

Change Issuance

Branches will store the majority of their bulk change in their cash office, a highly secure (typically reinforced) part of the building with restricted access to even managers. Our organisation’s policy even states that the branch manager cannot be given independent access to the safe, such is the security of these rooms.

On the tills, bagged coin is held physically closer to the till to which it has been assigned, the digital system having been informed of which till got what. This is typically a very limited amount, enough to cover a single day.

Change Issuance is the process by which bagged coin is assigned to a till from the cash office.

So now we come a little closer to the problem on which this whole project is based, a single question that gets more complex as considerations pile up:

How much change should each till be issued?

And how frequently?

There are so many variables which influence the decision of how much to issue to each till at any given time. The complexity of the interplay between these is the original motivation for the Float Tracker.

First, let's consider the stakes. Why does it matter? Why do we care? Why not just hold all the cash reserves at the till and keep nothing in the cash office?

If there is too much cash in any given till, it increases the risk of being targeted for theft. This is less about the value of the bags which, relative to the weight, isn’t actually that high, but more about the psychological appeal of a potential thief seeing a large pile of cash and getting an idea in their head.

People can be motivated into stealing by a number of factors, most notably the desperation of the individual and the perceived ease of pulling off said theft. We like to think ourselves moral, and “thieves” and “criminals” as a morally compromised other, predisposed to evil deeds, but the fact of the matter is that crime is complicated, and any of us, under the right circumstances, can do things we wouldn't have thought possible.

Risk mitigation factors into the dynamic of how motivations are formed; once an individual has an idea in their head, the likelihood of pursuing extra-legal activities increases by leaps and bounds. It is therefore optimal, where possible, to stop ‘ideas’ forming to begin with.

While the bagged coin may not be worth much relatively speaking, consisting of a lot of low-value denominations, to the right person in the right circumstances it's just a big pile of attractive coin, in a drawer. The weight and bulk of it makes theft ergonomically difficult, but this doesn't actually matter to someone with theft on their mind.

Additionally, having more cash makes the container heavier, which increases the time taken to fill up a till, something that very often has to happen during trade when there's a bustle of customers about. Even if a potential thief figures out that stealing bagged coin is wildly impractical, they may still be desperate enough to try and steal it anyway, or they may turn their attention to other aspects of cash processing, wondering where other vulnerabilities lie.

So in short, having too much bagged coin is a very bad idea. The less bagged coin the better.

Having too little bagged coin poses a different, obvious risk: that a till will run out of a particular denomination during trade. The effect of this happening and the risk factor are, as with everything else, highly variable.

In a basic scenario, a till runs out of a single denomination and other denominations can be used more heavily to compensate, which usually isn't that big of a deal. However, things get more detrimental as more denominations run out. Two 50ps instead of a £1 will not negatively affect a customer to a great degree, but receiving five 20ps or even ten 10ps would be viewed unfavourably by most customers. And as some denominations are used more heavily, others run out faster.

As this situation gets worse, the till may be forced to close which can put real pressure on busier stores at peak times. Alternatively a mid-day cash issuance could be performed but this poses obvious security risks, trailing bags of cash across the shop floor.

So What Is the Optimal Amount of Cash to Issue?

As you might expect, the individual denominations do not go down at the same rate. But more than that, the relative rate at which each denomination increases (or decreases) changes based on a range of factors that are difficult to pin down:

  • How busy is the shop?
  • Are there local events on, such as a football match, which might increase the proportion of cash users (what proportion of passengers coming through are on a commute and how many are recreational)?
  • How familiar are the local clientele with Self Check Outs?
  • How willing is the local clientele to use card?
  • What is the physical position of the checkout?
  • What is the staff preference concerning this checkout?

It's a very difficult question to answer, with no fixed way of deciding, relying largely on the intuition and experience of a given cash partner to know what worked before and make a judgement.

An Example

One of our tills, numbered 301, is favoured by partners over 303 because 301 is closer to the door while 303 is tucked away in the corner, meaning that in a pinch, 301 is more accessible than 303.

During the deepest Covid lockdowns when we were the only retail unit in the station open, basically only to serve Network Rail and the train crews, the £1 coins would deplete at a vanishing rate on 301, and slightly slower on 303.

As trade increased, this began to slow until the tills would both break even, hardly using their reserves of £1 at all, because the number of customers paying with £1 coins would equal the amount given in change.

Then, as trade continued to increase, 301 would end up depleting its supply of £1 again while 303 would toggle back and forth between staying level and increasing its supply.

If we charted this behaviour it would look something like this. Not real data. Not to scale. Hopefully obvious.
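For readers who prefer numbers to pictures, the behaviour described above can be mocked up with an entirely synthetic model; the functions and coefficients below are invented purely to reproduce the shape of the story (depletion during lockdown, break-even as trade recovers, then divergence between the two tills):

```python
# Entirely synthetic sketch: net daily £1-coin flow per till vs trade level
# (0.0 = deep lockdown, 1.0 = normal trade). Negative = till depleting.
def net_pound_flow(till: str, trade: float) -> float:
    if till == "301":
        # Depletes at both extremes, roughly breaks even mid-recovery.
        return round(-10 * (trade - 0.5) ** 2 + 0.5, 2)
    # 303: slower depletion in lockdown, then levels off and gains.
    return round(-1.5 + 3 * trade, 2)

for trade in (0.0, 0.5, 1.0):
    print(trade, net_pound_flow("301", trade), net_pound_flow("303", trade))
```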

The Outcome


Data Driven Actions

During my work as Senior Cash Partner it quickly became apparent that improving cash processing and solving operational issues for staff on the tills was an issue of data visualisation. We knew what the symptoms of the problems were but could not effectively pin down when they were happening, what factors were influencing them, what timescales and processes to investigate, and what to make of the various sources of data we had to work with.

Once a few big easy-target issues had been dealt with and the dust settled a little, patterns emerged which pointed to processes which could be improved.

It's worth reiterating at this point that malicious theft is only one portion of any problem and usually quite easy to track; practised thieves operate in a pattern, their actions align with a given shift pattern (scammers will target individuals), and they tend to get braver as time goes on and they believe they're going unnoticed. In terms of data visualisation they tend to be quite a “noisy” data stream.

More difficult to track down are the innocent mistakes, which are far more common: areas where partners do not realise that their actions are causing issues, scam runners, and conditions that cause people to rush and err. Addressing these can involve process changes but can also involve talking to people one-on-one, re-training them if needed, while being sensitive not to make the individual feel infantilised or like they're in trouble. Having a good “case” built and a plan thought through is a prerequisite.

Manual Tracking

Never underestimate the power of a spreadsheet.

The first iterations of the project involved filling a Google Docs spreadsheet with data including the total float levels and discrepancies. The immediate effect was being able to see, for the first time, the long-term effects of new processes & changes, which enabled me to convince management to invest the extra time and effort in more frequent checks.

However, this required a lot of manual work: entering the data, transforming it, updating the charts, changing the titles, exporting the PDFs and stitching them together in a format that could be used in the end-of-week report to the managers.

It was also difficult to sample specific date ranges and to jump between time periods because of having to manually change the charts' ranges. The whole process was very manual from start to end.

Data Clumping & Scale

Spreadsheet data is quite flat; even with a large dataset it is difficult to get a sense of scale from a list of entries, compounded by the fact that a chart showing one date range needs an awkward settings update to show another.

With a linear list of data, it is often difficult to get a sense of scale; if you have 6 data points for Monday, one for each other day except Thursday, where you have a cluster of 3, it's difficult to read till behaviour at a glance when they're all just in a list.
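One way to see this clumping is to group the raw timestamps by day rather than scanning them as a flat list. A quick sketch with invented timestamps:

```python
from collections import Counter
from datetime import datetime

# Hypothetical spot-check timestamps for one till across a week.
checks = [
    "2021-03-01 07:55", "2021-03-01 12:10", "2021-03-01 16:40",
    "2021-03-02 08:02",
    "2021-03-04 07:58", "2021-03-04 13:25", "2021-03-04 17:05",
]

# Count checks per calendar day to expose the clusters.
per_day = Counter(datetime.strptime(c, "%Y-%m-%d %H:%M").date() for c in checks)
for day, n in sorted(per_day.items()):
    print(day, "#" * n)  # a crude density view the flat list hides
```

Even this trivial aggregation makes the Monday and Thursday clusters visible at a glance, which is exactly what the flat spreadsheet rows obscured.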

Generated charts can, with configuration, display time-adjusted series, but this adds setup time and can complicate exploring specific ranges and mixing series.

I squeezed as much as I could out of the spreadsheet, but the more weeks passed, the more tedious the process of inputting the data, updating the relevant charts, exporting the PDFs and compiling the reports became. The data itself was stored in the sheet, which grew unwieldy and wasn't a long-term solution.


I decided that a small but versatile application could not only eliminate the hassle of this multi-step process but could enable users to quickly explore long term data and speed up the process of Random Daily Spot Checks.

The basic brief was to design something to be built quickly and achieve the following aims:

  • To be a central repository to track a branch’s spot check and float level data.
  • To give insights into till behaviour over time and with reference to recent events (like station closures, football matches etc).
  • To allow cash office partners to reduce overall till float values to optimal levels by observing recent behaviour and comparable contexts.
  • To follow a principle of data abundance and the adage “more good data overrides bad data”: users can quickly input counts with no authentication, with the occasional mistake or bad count overridden by further counts and a later authorisation step performed by an authorised user.
  • To look and feel like any other digital system the staff might use, to share a common visual language and design patterns.

The system should be robust, designed for non-technical users and offering no impediments or blocks in the high-paced, pressurised environment of the tills. When you have to shut down a till and occupy another partner’s time to supervise, you can’t wait for redundant animations to finish, for a lag on every enter-key press, or for the app to hang because the connection dropped.

Count Classifications

The application centres around its main data type: Counts. A count represents the result of a single spot check, either full or partial, and can be saved in any state of completion. The server uses a classification system to sort incoming counts and present them in different ways on the front end.

  • Unverified: A count submitted with expired authentication or by a low-access-privilege user, which must be authorised by a higher-privileged user.
  • Incomplete: This is a count which does not have enough data to be included in the data-visualisation areas and analytics sets.
  • Partial: Counts which include sufficient data for some of the data visualisation but not for the full set. Counts with complete Bagged Coin values but missing other details are classed as “partial”.
  • Complete: Counts with all values present and submitted under sufficient authentication.
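A minimal sketch of how such a classifier might look; the field names and rule details are my reconstruction of the scheme above, not the real schema:

```python
# Hypothetical count classifier mirroring the four classes described above.
def classify(count: dict) -> str:
    if not count.get("authorised"):
        return "unverified"   # needs sign-off from a higher-privileged user
    bagged_done = count.get("bagged_coin") is not None
    loose_done = (count.get("loose_coin") is not None
                  and count.get("notes") is not None)
    if bagged_done and loose_done:
        return "complete"     # all values present, fully usable
    if bagged_done:
        return "partial"      # enough for some visualisation
    return "incomplete"       # excluded from analytics sets

print(classify({"authorised": True, "bagged_coin": 40_00}))  # partial
```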

Authentication Levels

To facilitate this data input system, a privilege system was designed to control which actions a given user can perform. For my branch's use I defined three roles using these privileges: ‘standard user’, ‘cash partner’, and ‘manager’. In theory, though, any number of roles could be defined, one per person if you liked.

  • Standard User: This role has an empty string for a password and only basic write abilities; counts submitted are saved as “unauthorised” (which really means ‘unverified’), requiring a cash partner or above to approve them. The lack of authentication is a strategic choice based on an evaluation of how much damage a malicious user could do.
  • Cash Partners: This role is for any trusted partner, someone who regularly performs spot checks. This role allows the submission of fully authorised counts but still cannot remove data, only mark it for review.
  • Managers: Includes senior cash partners and management who oversee cash. This role has all write, update and delete capabilities for all data types. This role has a much longer password and quick session timeouts.
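The role-to-privilege mapping could be sketched as a simple lookup table; the privilege names here are my own shorthand for the capabilities described above, not the application's actual identifiers:

```python
# Illustrative privilege table for the three roles described above.
ROLES = {
    "standard":     {"write"},
    "cash_partner": {"write", "authorise", "flag_for_review"},
    "manager":      {"write", "authorise", "flag_for_review",
                     "update", "delete"},
}

def can(role: str, action: str) -> bool:
    """True if the given role holds the given privilege."""
    return action in ROLES.get(role, set())

print(can("standard", "delete"))  # False: only managers can remove data
print(can("manager", "delete"))   # True
```

A flat table like this keeps the policy auditable at a glance, which matters when non-technical staff need to understand who can do what.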

Design Prototyping with Figma

Learning from a couple of projects which preceded this one, I was adamant that this project would be designed up front with prototyping software, fully thought out and with strict adherence to brand design given its context within the shop.

My previous projects immediately before this one involved building a clone of an old Microsoft Windows game in which I focused on building the engine and algorithmic structure first, thinking about its layout, interface options, look and feel, use context and other details as I went along. After all, the engine and mechanics were the thing.

This wasn't necessarily a bad idea, but as time went on this lack of direction towards a single, clear end result (beyond replicating the main functionality of the game it was based on) created problems, given that some of this functionality is tied to, or influenced by, the desired look & feel. Continually stopping a train of algorithmic work to come up with an interface idea which will potentially be rehashed later is an impediment that slows progress considerably.

Figma is a fantastic tool for design and prototyping; I had already used it for numerous pieces of work including Tailored Nutrition, Mosaic, lots of client work, and the redesign of this site. It allows you to pour thoughts out onto a page like a digital scrapbook, but also to create mock-ups which look identical to the equivalent HTML and CSS. It's fast and versatile with a very low learning curve, so check it out if you haven't already.

Brand Analysis

I began by collecting material relating to the branding of Waitrose & Partners, first looking at the official brand guidelines, core colours, and examples of where these are used in existing digital sources such as their websites and internal applications (not pictured here for security reasons).

Beyond the core colours I wanted to look at the other pieces of tertiary branding such as the “Give a Little Love” campaign which was in full swing at the time, and other commonly used colours. I was particularly interested in how white-space was filled and how contrast is used.

It was important to me that the app feel part of the ecosystem, that it read to partners as a Waitrose app, but I didn't need to stick to the brand too exactingly; other sources take liberties with specific colours.

I had a rough idea of the layout that I wanted: it had to focus attention on the main content but also allow the user to jump between areas quickly, so I decided on a side-bar based navigation layout with page titles and page-specific details on a bar running along the top.

These decisions were made in response to one of the main applications I dealt with in my cash-office duties. This application, which we will call Flint here, presented users with an empty page and left-aligned top-level menus. On click, these presented cascading drop-down menus to get to the individual section you needed. This section would then open in a new window, meaning that the main page wasted 90% of its space while still requiring the user to navigate tricky cascading menus.

I knew that the app would involve a fair amount of lists and large vertical input forms so a side-bar layout eliminated the need to handle a top-level navigation.

It was imperative that the application stack and / or hide its complexity behind intelligent design as much as possible; it had to focus the user on one “thing”. As discussed in the context section, most of the users are non-technical and the use-case environment may be high paced, so distraction, confusion & clutter are completely unacceptable.

The diagram below shows an approximation of how Flint’s main page looked, although in reality the menus are about half the size shown here, making them difficult for even able-bodied users to interact with.

Below is a hypothetical refactor of Flint with the exact same content. A clear separation of concerns focuses the user on the main page content. Tabs are used to keep everything on show at once, but these could easily be cascading menus or even a drop down / sliding drawer, just so long as the content is larger.

Not only is space used more efficiently, but large amounts of overflowing content can be handled far more effectively than before. This basic layout shows what the original design for the Float Tracker was based on.

I then iterated some layouts and colour combinations to settle on the look and feel of the application.

After deciding on the layout system, I designed the main pages and user-interactions. I wanted the user to land on a list of graphs to communicate recent levels across all repositories and then click into each graph to show the inspector view for that repository. We also needed a central list of counts which should be filterable, and a reusable component for entering a new count and editing an existing count.

On top of these there would also be pages for managing the data types: users, repositories and partners. A User is local to the application and controls authentication and authorisation, while a Partner represents the till login number and name of an actual person and is used to sign counts. These pages were low priority given that the application could run with manually updated seed data.

A persistent “Add New” button was added to most screens to allow users to quickly jump into a new count regardless of the current state of the app. The button was placed and styled to create an association with a common design pattern found on platforms like Twitter and Tumblr, which have similar features.

Modals were selectively used for many Create, Update and Delete (CRUD) operations in order to keep the user ‘in place’ and to help with component reuse. With actions like deleting a count, a reason must be recorded and the user must identify themselves by selecting their till number, creating a compounding complexity of forms on top of forms, so modal pop-ups are used to simplify these interactions and to bring the user back to their starting point on exit.

Theme and Component Design

Once I had hashed out the layouts and user-flow, I gave a pass over each frame to ensure consistency between colour combinations, text sizes, margins, drop-shadows and so on. From this it was possible to define named colours and a set of “basics” colours which replace the native browser named colours like “red” and “green”.

Another feature I wanted to try for the first time was the implementation of a “dark mode” and “light mode” theme; up until now all of my applications had picked one style and pretty much stuck to it.

I took my primary inspiration from GitHub’s dark mode, which appealed to me with its use of neon colours; at the time I was still not much of a fan of dark modes. One of my main issues with the way that many applications / sites implement dark mode is that there are often contrast issues: the contrast levels are either too low and difficult to see, or detailed elements like text are too bright and sharp, straining the eyes. Sites like GitHub that get it right largely do so because they solve this issue in particular.

I was keen to find areas where the same colours could be used for both light and dark modes, such as the grey used for outlines and borders on inputs, the red warning text, some buttons etc.

I collected the common elements into one place, added other components which might be used later, and collated them into a component library.

Tech Stack

The base of the front end was a React-Redux application, for performance and for data sharing between different elements of the state. React context was used heavily for local shared state for things like count editing and the inspector views. The application is a single page app using React-Router’s hash router, given its ability to port to React Native if that option became desirable later.

This project was also my first big TypeScript project, using the TypeScript create-react-app template for the front-end and a custom TypeScript setup for the back-end.

The back end is an Express app secured with passport.js for authentication. There was no hard plan for the application to be used outside my branch; however, the idea that other branches could hypothetically spin up their own instance was important, and so there was a plan to dockerise the app.

Unusually for me, the app used a component and CSS-in-JS framework: Chakra, built on Emotion. I was not amused by it. See the later sections for my thoughts.

For the back end I was keen to use a SQL database, moving away from MongoDB; I was happy with my Mongo knowledge and realised I was lacking practice with SQL, which is more flexible and more widely used in industry. I designed the data tables in Lucidchart’s diagramming tool with the entity relationship package, which you can view here.

To build the database, handle migrations, and build SQL queries, Knex.js excels, allowing you to chain functions together representing the components of a SQL query in nice familiar JavaScript. Objection.js, an ORM (object-relational mapper), is built on top of Knex and allows you to build models and define database entities to interact with your database. It provides a comfortable middle ground / bridge for developers more used to things like MongoDB.

Chakra & Emotion

This could also be an entire post in itself, and may be some day.

This project was an opportunity for me to push myself to use new technologies and deviate from my typical tech stack. One such area I had previously been staunchly against was the idea of using a component library framework. I’ve used component libraries in the past but only for specific needs like date-pickers, the majority of the components would be custom made and the main layouts and styles implemented with CSS / SASS.

I was inspired to use Chakra.js, a component library built on-top of Emotion.js, one of the most popular CSS-in-JS libraries.

Chakra lets you pass props representing CSS attributes to your components and is touted to be a convenient way of quickly writing CSS. I found it to be a somehow even worse implementation of the already misguided utility-class method exemplified by Tailwind. This is definitely getting its own article at some point.

However, this alone was not enough for me to not give Chakra a fair shake. What killed the experience for me was that a) Chakra is inconsistent in how controlled components (like text inputs) handle change events. Never before have I spent as much time fighting with an input to behave in the way I expect, or trying to figure out whether I should use <NumberInput /> or <NumberInputField />.

The second thing was b) there was no date-picker. What’s a component library without a date-picker? Jokes aside, Chakra frequently lacks components that you would expect to be featured by default, which are the reason you use a component library in the first place.

You’ll be getting along fine, using <Box /> after <Box />, wondering what the functional difference is between this and just writing a regular <div /> with some CSS like a normal person, and then, boom! You’re stopped in your tracks, searching through yet another article listing component libraries (which will need to be styled and will add bloat to your package size) or through inscrutable documentation.

You become a designer for the framework, focusing on it rather than benefiting from it working for you.

React Vis

There are a number of chart options when it comes to a React application; I won’t go through a full breakdown as that could also be a post in itself.

D3 is still my preferred option for data visualisation but it is tricky to use alongside React given that it works by injecting data into the DOM and performing transformations progressively to said data. This poses a problem given that, strictly speaking, any manual DOM manipulation within the React app is an anti-pattern.

If you just search for “react d3” you’ll find many articles, and an unmaintained ‘React-D3’ library, which attempt to integrate D3 with React. Almost all of these make the mistake of placing some amount of D3 calculation logic in useEffect hooks and then simply placing the output in the render method, violating the basic benefit of using React and frequently causing unnecessary re-renders.

Some libraries such as chart.js calculate the output of graphs behind the scenes and then render the output to a canvas element. This eliminates the DOM manipulation issue as the library can track changes and is effectively providing the framework with an image to paint on each render.

The drawback is that you lose out on the flexibility of D3 and are locked into preset graph options and whichever configuration settings your chosen library supports.

Then there are libraries like React-Vis which lie somewhere in between; they provide wrappers for pieces of D3 functionality that are designed to be used in a composable manner.

React Vis was made by Uber and was the library I chose to go with after research, as I felt it was the best all-round compromise. Its composable components are rendered intuitively in the JSX and can be configured by passing props, which I feel is much more in keeping with the style of writing React components than calculating and building a configuration object and passing it to a single <Chart /> component, but this is of course personal preference.

Good chart libraries usually already provide combination charts but what’s nice about the React Vis method is the ability to simply bring in a LineSeries or PointSeries and place it in the JSX. Want to get rid of the Y-axis? Just delete it! No need to scour the docs to find the right option to turn it off.

This is still a compromise, far from perfect, and I would still consider all available options per project but until someone can properly integrate React & D3, a good compromise is the best you can try for.

React Context and Reducer Hooks

This type of application has a number of areas where a large amount of shared state is used by several components but is still too localised to be put in the global Redux state; the app can be thought of as a collection of mini localised unit-applications that communicate with one another. The two most prominent examples of this are the count edit / add form and the inspection viewer.

In this type of situation the temptation would be to hold all the state logic in a top-level component and pass the necessary data and callbacks to each child, which can then pass them down as needed. This introduces a number of issues and inconveniences, not least that these kinds of prop-drilling and callback trees are exactly the problem that Redux is supposed to solve.

In addition, when child components need to perform data computation and complex algorithmic work, the top-level component becomes bloated, loading and performing data parsing at inconvenient times, even when the consuming component is unmounted.

By combining React context and the useReducer hook we can create a Redux-like experience without exposing ephemeral data like mouse positions, series highlights and tooltips to the Redux store unnecessarily, and do so in a way which complements TypeScript.

useReducer allows us to dispatch actions to a reducer, just like with Redux, and have the reducers make immutable changes to a state object. Placed within context, each component can access the state object and access the sections it needs and dispatch modifications as needed.

By using enums for the action types, and action creator functions, enforcing type conformity is easy. By extracting the reducer, initial state, action creators and action types to their own Utils file you can create a separation of concerns and import what you need in each component.
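As a rough sketch of this pattern (the names here are illustrative stand-ins, not the app’s real ones), the extracted action types, action creators and reducer might look something like this, with the context wiring shown in comments:

```typescript
// Illustrative sketch of the extracted reducer pattern; InspectorState,
// ActionType etc. are hypothetical names, not the Float Tracker's real ones.

enum ActionType {
  SetHighlight = "SET_HIGHLIGHT",
  SetTooltip = "SET_TOOLTIP",
  Reset = "RESET",
}

interface InspectorState {
  highlightedSeries: string | null;
  tooltip: { x: number; y: number } | null;
}

type Action =
  | { type: ActionType.SetHighlight; payload: string | null }
  | { type: ActionType.SetTooltip; payload: { x: number; y: number } | null }
  | { type: ActionType.Reset };

const initialState: InspectorState = { highlightedSeries: null, tooltip: null };

// Action creators keep every dispatch call site type-safe.
const setHighlight = (series: string | null): Action => ({
  type: ActionType.SetHighlight,
  payload: series,
});

function inspectorReducer(state: InspectorState, action: Action): InspectorState {
  switch (action.type) {
    case ActionType.SetHighlight:
      return { ...state, highlightedSeries: action.payload };
    case ActionType.SetTooltip:
      return { ...state, tooltip: action.payload };
    case ActionType.Reset:
      return initialState;
    default:
      return state;
  }
}

// In the component tree this would be wired up roughly as:
//   const [state, dispatch] = useReducer(inspectorReducer, initialState);
//   <InspectorContext.Provider value={{ state, dispatch }}>...</InspectorContext.Provider>
// and consumers call useContext(InspectorContext) to read state and dispatch.
```

Because the reducer is a pure function living in its own file, it can be unit-tested without rendering anything.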

Notification System

The application has a notification system built in, initially intended to provide the user visual feedback on a save action.

The implementation is fairly simple: an invisible layout is rendered over every page, which subscribes to a list of queued notifications in the Redux state. Each notification is represented by a standard object with options that specify how the item should render, and a time property which, unless the notification is specified to be persistent, defines the amount of time before the notification self-dismisses.

Some notifications have links or actions which will dispatch events to change the state of the application (like redirecting the user to the inspector view with preset options). There was provision to have some notifications stash themselves in a drawer for the user to view again, similar to how many operating systems implement notifications.

The advantage of this system is that the notification can contain anything, and it can render with a particular colour scheme: a blue “information” for external events that the user may want to be aware of, a green “confirmation” for things like confirming a successful save, and a red “warning” for negative side effects like a network error.

This means that anywhere in the app I can write a simple dispatch function, pass in the required and optional props and have a notification appear. Notifications slide in from the side on top of any which are currently rendered, as others are dismissed the stack drops down. When the user mouses over a notification the expiry timer is paused.
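A minimal sketch of the queue logic described above (field names are my assumptions, not the app’s real shape) shows how little is needed beyond an immutable list in the store:

```typescript
// Hypothetical sketch of the notification queue; the real Float Tracker
// object shape may differ.
type NotificationKind = "information" | "confirmation" | "warning";

interface AppNotification {
  id: number;
  kind: NotificationKind; // controls the colour scheme
  message: string;
  timeMs: number;         // auto-dismiss delay; ignored when persistent
  persistent?: boolean;
}

let nextId = 0;

// Immutable append, Redux-style: return a new array rather than mutating.
function enqueue(
  queue: AppNotification[],
  n: Omit<AppNotification, "id">
): AppNotification[] {
  return [...queue, { ...n, id: nextId++ }];
}

// Dismissal simply filters the item out; the stack re-renders and drops down.
function dismiss(queue: AppNotification[], id: number): AppNotification[] {
  return queue.filter((item) => item.id !== id);
}
```

In the real app these would be reducer cases, with a timer component dispatching `dismiss` when `timeMs` elapses (and pausing that timer on mouse-over).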

Inspection view

While the main landing page graphs provide a brief comparative view at a glance, it is the Inspector view that provides users the ability to pull the majority of insights.

The inspector view allows users to quickly jump through date ranges to get a snapshot of the till’s state and judge its behaviour, but also to drill into specific counts and compare the change over time.

Date ranges

The date ranges allow users to filter the queried data and regenerate the graph seamlessly, with a minimal load time. It defaults to showing the last four weeks of counts but can theoretically show any time frame, given that the application has no gauge of how many counts exist in a given stretch.

Data Inspector and Series Selection

Two tooltips allow the user to interact with the graph, the first showing overall details for each count such as the date and float total (although the version in these screenshots only shows the timestamp).

The ‘local’ tooltip shows data for the specific data point and can be clicked to highlight the series to make it stand out.

The information is already distinguishable from other elements on the graph, including the time series, the height of the point, the series colour guide etc. The tooltips allow users to focus their attention and explore from point-to-point without having to jump back to the series labels. This helps simplify what can be a complex array of data.

Series Adjustments

The inspector has two modes, one which shows just the Bagged Coin and another which merges Bagged Coin and Loose Coin by calculating the effective amounts.

Bagged coin has the disadvantage of being uniform in value; you’re never going to have more than, say, 10 bags, and the bagged coin will usually be somewhere between 3 and 6 bags. This poses the problem of overlapping points and series, which are difficult to read.

The “adjustment” feature calculates the relative amount of each value and offsets them by a set amount. Each series receives an “offset”, a value which it uses to change the vertical placement of each point; the series are then clearly visible as being in the same position but distinct.

Of course, the series could swap positions at every step between counts, creating a “leap frog” effect, but chances are that a given series will remain above / below another for a period of time.

The following examples show situations in which an ‘adjustment’ is useful, and where it can compound issues (see the left side in the third image, where the series begin to look as though they are on the next line). These images show the same datasets: the first has no offset, the next a small offset, the last a large offset.

The series algorithm will alternate between offsetting positive values and negative values, for example, the first item will be offset by +5%, the next by -5%, the next by +10%, the next by -10%.
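That alternating scheme can be sketched as a small pure function. This is a minimal reconstruction from the description above, not the app’s actual code, and the 5% base step is just the example figure from the text:

```typescript
// Sketch of the alternating series offset: +5%, -5%, +10%, -10%, ...
// The magnitude grows by one step for every pair of series.
function seriesOffsets(seriesCount: number, step = 0.05): number[] {
  const offsets: number[] = [];
  for (let i = 0; i < seriesCount; i++) {
    const magnitude = step * (Math.floor(i / 2) + 1); // 1x, 1x, 2x, 2x, ...
    offsets.push(i % 2 === 0 ? magnitude : -magnitude);
  }
  return offsets;
}

// seriesOffsets(4) → [0.05, -0.05, 0.1, -0.1]
```

Each series then adds its offset (as a fraction of the y-range) to the vertical position of every one of its points.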

Shadow Points

Given that there will be between 8 and 20 values, it makes sense that the series lines and points would be small. This poses an issue, given that small points are difficult to focus on with a cursor.

The app renders “shadow points” behind every actual point, which provide a larger target area for pointer events, meaning that it’s easier for users to hover and see specific details on the tooltip.
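The idea is simple enough to sketch as a pure function: for each data point, emit an invisible, oversized twin first so it sits behind the visible point and catches pointer events. This is an illustrative reconstruction, not the app’s real code (in React-Vis terms it would be two overlaid mark series):

```typescript
// Hypothetical sketch: pair every visible point with a larger, invisible
// "shadow" point that acts as the pointer-event hit target.
interface Point {
  x: number;
  y: number;
}

function withShadowPoints(points: Point[], radius = 3, shadowScale = 4) {
  return points.flatMap((p) => [
    { ...p, size: radius * shadowScale, opacity: 0 }, // invisible hit target
    { ...p, size: radius, opacity: 1 },               // visible point on top
  ]);
}
```

Hover handlers are attached to the shadow layer, so a cursor several pixels away from a tiny point still triggers the tooltip.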

Count Input

The component used to edit and add counts is the other heavily used section of the app. The same large component makes up the majority of the page and so the distinction between the ‘edit’ page and the ‘add new’ page is whether data is pre-loaded into the state.

In fact, when adding a count, the first save will redirect the user to the edit page with minimal perceptible effect to the user.

Count Validations

The validation functionality performs two main functions: certifying that a count has enough data to be displayed alongside other counts, and categorising the count into the completion states “incomplete”, “partial”, “complete” and “unverified”.

Both the front end and the back end use a validation function, and unfortunately for me that meant keeping the two in sync manually, copying changes back and forth depending on what changes I needed to make. The front end uses the validator function to warn the user of the current state of the count and to attempt to reduce re-saving.

The back end is stricter and must use the validation to change how it saves each item (or whether it saves it at all, should it fail certain checks).
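As a hedged sketch of what the shared validator might look like, the rules below are simplified assumptions invented for illustration; the real criteria for each state were more involved:

```typescript
// Hypothetical, simplified validator; the actual rules for each state
// in the Float Tracker were more detailed than this.
type CountStatus = "incomplete" | "partial" | "complete" | "unverified";

interface Count {
  baggedCoin?: number[]; // pence values
  looseCoin?: number[];
  notes?: number[];
  counterId?: number;
  supervisorId?: number;
}

function validateCount(count: Count): CountStatus {
  const sections = [count.baggedCoin, count.looseCoin, count.notes];
  const filled = sections.filter((s) => s !== undefined && s.length > 0).length;

  if (filled === 0) return "incomplete";
  if (filled < sections.length) return "partial";
  // All data present but not yet signed by both a counter and a supervisor.
  if (!count.counterId || !count.supervisorId) return "unverified";
  return "complete";
}
```

Keeping this as one pure function is what made it (painfully) copy-pastable between front end and back end.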

Number Steps

The input components for bagged coin, loose coin and notes are all reusable; they normalise values into their pence value, given that 1p is the smallest possible denomination. All calculations are performed on the pence value and validated by calculating valid “steps”, e.g. a 20p input will go into an error state if it is given the value “£0.30”.
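The normalisation and step check can be sketched in a few lines (an illustrative reconstruction, with hypothetical function names): work in integer pence to dodge floating-point pound arithmetic, then test divisibility by the denomination.

```typescript
// Sketch of the pence normalisation and denomination "step" validation.
// Function names are illustrative, not the app's real API.
function toPence(pounds: string): number {
  // "£0.30" -> 30; rounding guards against float noise from the * 100.
  return Math.round(parseFloat(pounds.replace("£", "")) * 100);
}

// A 20p input only accepts multiples of 20p: "£0.30" fails, "£0.40" passes.
function isValidStep(valuePence: number, denominationPence: number): boolean {
  return valuePence >= 0 && valuePence % denominationPence === 0;
}
```

The same pair of helpers serves every denomination input, which is what makes the component reusable.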

The first image shows the reuse of the single input across a loose coin series; the second shows the logic used to add trailing zeros to the display value when the user tabs or clicks away.

User Selector

By the rules of the partnership, each count should be performed by at least two people, the ‘counter’ who performs the count, and the ‘supervisor’ who observes the count without interacting (they may write down the values or help out in another way).

The input allows users to select their name from a drop down list or type to filter the results. This matches not only the user’s first and second names but also their till number and initials meaning that however the user tries to identify themselves the system will try to approximate who they are. This is helped by the fact that there are a limited number of users per branch so the result matching can be quite loose.

The user can then click from the drop down or hit enter to select the top result.
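The loose matching described above might be sketched like this; the fields and matching rules are my assumptions based on the description, not the app’s actual implementation:

```typescript
// Illustrative matcher: a query matches a partner's first name, surname,
// initials, or till number. Matching is deliberately loose given the small
// number of users per branch.
interface Partner {
  firstName: string;
  lastName: string;
  tillNumber: string;
}

function matchPartners(partners: Partner[], query: string): Partner[] {
  const q = query.trim().toLowerCase();
  if (!q) return partners;
  return partners.filter((p) => {
    const initials = (p.firstName[0] + p.lastName[0]).toLowerCase();
    return (
      p.firstName.toLowerCase().includes(q) ||
      p.lastName.toLowerCase().includes(q) ||
      p.tillNumber.includes(q) ||
      initials === q
    );
  });
}
```

The drop down simply renders whatever this returns, with the top result selected on enter.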

Submission Panel

The form presents users with three buttons to submit the count, “Save”, “Save and Close” and “Save and Next Count”. This form is set to a sticky positioning at the bottom of the page meaning that it will display at the end of the form if the form’s end is on screen, otherwise it will pin itself to the bottom of the screen.

“Save” saves the count and displays a notification while allowing the user to continue entering data, “Save and Close” does the same and re-directs the user to the list of counts or back to the page they were on (only available for some locations), “Save and Next Count” saves and redirects to a new count.

In the Real World

You’ll notice if you view the app that it’s a bit rough around the edges and even missing some sections, not least because the branch on deployment is behind the main branch due to embedded sensitive data that I don’t want to put on GitHub.

Perfectionism was a real impediment working on this project. It was never finished, and I was admittedly shy about showing it to my colleagues before it was “done”, ignoring that it will never be ‘done’. I started using it in March of 2021 and kept it mostly to myself at first.

I can’t remember what prompted me to show my colleagues, but I’m glad it happened so soon; the best antidote for perfectionism, and the assumptions and lies it plants in your head, is to expose it to other people. My colleagues were impressed and eager to get their hands on the app.

We never got to the stage of actually inputting numbers on location at the tills during a count, but we were able to input the data into the shonky, half-built graphs and pull a number of key insights despite the app’s half-finished state.

The app made the end of week reporting so much faster for me and was used to track down a new scam which had just cropped up and was evading us at the time.

It allowed a massive reduction in float sizes and let us find ways to build resilience into each till ahead of events which were happening around that time. This included the run up to 17th July 2021, the day on which all Covid protections, including basic things like mask-wearing in indoor public places, were to be prematurely removed in England (but not the whole UK). This was a period of rising trade, beyond pre-Covid levels, and further uncertainty about how the public’s behaviour would develop as we moved into the next phase of the pandemic.

By the time I left the partnership in September 2021, the tills had held steady at a discrepancy of +-£10, the “very good” band of operations, something we had all assumed was a goal achievable by any branch except ours, an exceptional outcome for the project.

Still a little bummed I didn’t get my consistent +-£5 though.


App professionalism comes at a time cost

As pleased as I am now by the way this project turned out, I cannot deny that it became a monster incarnation of something which by all rights should have been smaller and faster to produce.

I felt that this project took an inordinate amount of effort considering the relative simplicity of the outcome: “how come this app is not really any more complex than my past projects, yet took longer?”

Well, one explanation is that being an essential worker during a time of global crisis, and having to interact every day with customers who seemed to flip-flop week-to-week between treating you as “the heroes keeping the world running” and “just shelf stackers who are not bringing me service fast enough, Covid is over now don’t you know?”, has a detrimental effect on one’s mental health and creates burnout.

Another explanation is that this application brought a level of production maturity that previous works such as the chess app just did not have. A project like this had to be a showcase of the skills built up until this point, designed as though it were a production app to be maintained by the organisation after I left.

Many of my projects until now had had a focus on a particular core aspect like algorithms, a game engine, CMS functionality, etc. This one had to do everything from brand identity to a user and consumer focus, to new tech stack changes, to security concerns and so on.

In retrospect, a lighter, more stripped down version of the app could have been produced faster to begin generating insights long before it got to its current state.

Typescript should be top-down

My next project, following on immediately from this one, was the Scratch Card Tracker app, which re-used much of the tech stack from this project with a number of improvements.

Given that this project was my first big TypeScript project, the most noticeable improvement in the Scratch Card Tracker was the quality of the TypeScript used.

The Float Tracker suffered from inconsistent use of TypeScript standards, with many instances where TS wasn’t used or where there was an abundance of any and unknown declarations to get the project moving again. This is somewhat to be expected from people new to TypeScript, or new to strongly typed languages entirely. Unfortunately it creates a feedback loop: the more weakly typed a section of code is, the more brittle connected sections of code become, and the more you feel the need to just drop an any and move on.

Starting the Scratch Card Tracker, I defined data-types from the beginning, such as Actions, State types, ColourSets, Counts, and Games (series), and stored them in a central file. (I must acknowledge that there are opinions about how to handle global typedefs but, frankly, given how closely tied all the functionality of the app is, I saw no reason not to just import them from a types.d.ts file.)

Having core data types right from the beginning meant that sub-components inferred stronger types (where TS “figures out” the type of a variable from its source) and so made more strongly-typed bindings, meaning the incentive to use ‘cheats’ like any was reduced. Strong typing cascaded from the top of the app downwards instead of having to be retroactively applied by adding types lower down and following the error stack slowly upwards.
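A small illustration of what I mean (these types are simplified stand-ins, not the tracker’s real definitions): with the core types defined centrally and up front, everything consuming them infers its types for free.

```typescript
// Simplified stand-ins for centrally defined core types.
interface Game {
  id: number;
  name: string;
  pricePence: number;
}

interface Count {
  id: number;
  gameId: Game["id"]; // tied to the Game type, not a loose number
  quantity: number;
  countedAt: string;
}

// Inference cascades downwards: no annotation needed here, yet the return
// type is Count[] and every field access below is checked.
const countsForGame = (counts: Count[], gameId: number) =>
  counts.filter((c) => c.gameId === gameId);
```

Had `counts` arrived as `any`, the filter, and everything downstream of it, would have been unchecked, which is exactly the feedback loop described above.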

This is not to say that having set data types is the answer to all the woes you might encounter with typescript, and there are certainly times when it is useful to just remove type definitions, but it’s worth thinking about areas where you can introduce strong type definitions from the beginning and cascade them downwards rather than getting caught out later on in the development process.

As a general rule, if your weak type bindings are lower in the component tree / call stack then you can probably get away with it. If there are weak type bindings at the top but consistently strong bindings later on then you’ll typically be alright but may run into some issues. If the type binding is chequered, or has an area of weak bindings ‘half way’, all hell breaks loose. Lastly, if you want strong type bindings later on but none higher up, you might as well walk away, go for a bike ride or do some painting instead, and never touch the project again.

This diagram communicates my general thoughts on type-bindings; consistency is key, and it is easier to put strong typing up front and become loose later than to try to introduce strong typings later.

Proper prototyping is essential but don’t get bogged down

Beginning this project I had a good idea of the necessity of prototyping, but I think I had slightly inflated ideas about how absolute a design could / should be before actual work can begin.

In an ideal world it would be possible to design every component, every component combination, every colour, every modal and every page / state and so on. In reality, it is not worth your time to mock every single modal, so long as every base piece can be found elsewhere and the general user flow is thought through.

In addition, there will inevitably be changes that come up through the build and UAT feedback process; it’s just what happens. You’ll find yourself going back and retroactively adding features to the design from the real deployment, diverging from the original design.

Going forward I considered the prototyping phase to be essential but also sought to view it as a starting point for the project, to eventually be jettisoned and to grow beyond, not a strict rulebook to be bound to.

Stop trying to find a way around it and just write the CSS

Alright, I admit it, I am biased against component frameworks.

However, I do still maintain that there is a right way to build a component library / framework and there are about 12 wrong ways. Perhaps a framework is inherently a compromise; there will never be a “perfect” one because each will invariably have to choose from a set of trade-offs.

I’ve since worked on teams using Material UI, an implementation of Google’s Material Design system, which has a range of its own problems but is rarely as blocking as Chakra was to work with.

MUI has a wide-ranging set of components out of the box; we were often surprised at how frequently a desired feature was present. Its various methods of writing CSS-in-JS are still sometimes a pain in the backside, but I do appreciate that they’re trying to cater to a wide range of audiences, and they seem to do so well.

Ultimately, though, the source of much of my frustration comes from the fact that many of these technologies seem to be convoluted 5-dimensional-chess manoeuvres to avoid just writing some CSS.

I understand the drive to encapsulate pieces of styling instead of having innumerable SASS files compiling into a global CSS sheet which then causes unexpected conflicts between components but, for fear of sounding like a stereotypical pensioner whining about “back in my day”, I can’t help but feel the solution is to just write better CSS?

We ended up having to define quite strict standards for setting up the ‘theme’ in our projects in order to standardise styles between components, and we encountered situations where two unrelated components which should look alike diverged slightly, because at some point a chunk of CSS was copied and pasted and then unknowingly conflicted with something else on the page. Encapsulation moved from being the thing that enforces consistency to an impediment to consistency.

Every time I find myself tracking one of these issues down I cannot help but think to myself “what if this was just a SASS variable, wouldn’t that get rid of all this? Is this not a solved problem already?”

Anyway… get off my lawn you darn kids! And take yer no-good gadgets and googas wicha! I got my SASS runner and I like it! *harumph noise*

Trying to prove a point with code.

There is no project which exists outside of a context and a human story, and this was no different; there’s also a reason I’ve struggled to get round to writing this post until now.

I’ve joked at several points about the mental and physical strain and burnout of being an essential worker, in a retail unit, in a station, across the Covid pandemic (at least until I left in September 2021). Jokes aside, it has had long-lasting effects, and I’m not sure any of my colleagues or myself are quite the same people we were before it all started.

At least through much of 2020 we had the sense of urgency and emergency which stood to validate the stress of the work; it was tough, but we had a duty to perform and were up to the task. By 2021 we were all severely burned out and it was showing. The camaraderie and “we’re all in this together” of the past was gone, and the continual flip-flop between people taking Covid seriously and acting as though it was all over (and definitely not coming back; this definitely isn’t the fourth or fifth time we’ve said this) was compounding the issue.

We can’t disconnect ourselves from the broader context of our work, and it is clear to me now that this project, and the ones to follow, were a way of dealing with the trauma, frankly speaking. I could work on something which gave me a sense of achievement, something to focus on rather than the outside world, and something which I believed was a logical step up to the next phase of life, Covid or no Covid.

I believed in the Partnership’s mission and had the idea that I could use this and the Scratch Card tracker to get my foot in the door at the Partnership’s head office, driven in part by the belief that I was not qualified enough to just search for a development job outright (a strong case of imposter syndrome).

This also helped create a bit of narrative consistency: I would use my position in the Partnership to improve the branch, leave behind a lasting legacy, and then move into a developer job within the same organisation which could not have been obtained otherwise. This, it was felt, would justify the hard work, the sometimes back-breaking effort, and the lack of any meaningful break in over a year, and ‘make it all make sense’.

In reality, while the project greatly benefited my skills, neither it nor the Scratch Card tracker was even on my portfolio when I applied for my current position, and the Partnership’s head office rejected my application outright; they needed team-leading experience more than the specific skill sets.

I was so deep in my own little world, head down, just trying to get through each week that there was no time to stop, take a breath, and think rationally.

The thing that eventually ‘snapped me out of it’ was my living situation falling apart; my then flatmates turned abusive and threatening before trying to kick me out, and I had to pack up and move back to Glasgow to bunk with my parents. I was out of options, and probably on the verge of a nervous breakdown (yeah, that was a fun couple of years; I love London really…).

It feels as though every one of my posts in recent years has ended with a section about burnout, but this one had an air of finality about it. The reason I was so engrossed in this unilinear career plan / escape plan / escapism, and the reason I was so attached to the Partnership, was that it created that narrative consistency; it meant I was working towards something when, in reality, I had all that I needed to get my current position from about mid-2020 onwards.

In retrospect it is easy to say that we all (myself and my team) did good work regardless of context and regardless of appreciation, and that the time spent making these improvements and pushing each other was not wasted and has had lasting effects, but that thought doesn’t help when you’re in the midst of it.

Still, if there is anything in this which you think resonates with you or someone you know, take this as the ‘red flag’ sign you’ve been waiting for.

Tailored Nutrition | Boots, Bow & Arrow

Tailored Nutrition is a piece of service and systems design for Boots, made as a speculative project on behalf of the white space design agency Bow&Arrow.

Tailored Nutrition is designed to promote Boots brand products, namely their nutritional wellbeing items, and to build brand integrity by giving customers insight into, and control over, their base nutritional levels.

It does this through a new type of home blood testing kit which integrates with a digital system and with existing Boots consultancy and advice services.

Content Warning

This post discusses details of blood sampling and other medical details.

A note about COVID-19

This project started in early March 2020; there was public knowledge of the coronavirus, but its extent was not yet known. The project was extended twice and eventually concluded on the 9th of June, almost two months after the original deadline.

The project continued as much as was possible but, obviously, it was started and finished in wildly different contexts. This article is written from that perspective, talking about the future in the same terms as in March, despite the fact that the entire premise would change were it to happen now.

Video Presentation

Watch the video below as presented to Bow&Arrow, or continue on to read the text version.

Native Mobile App Codebase & Demo

Boots & Wellbeing Futures

A concluding page from the Health & Wellness Futures Report 2019

As part of the final unit of my university course, the brief given by Bow&Arrow was an open one, asking us to consider the future of the wellness industry and to design a product which would give Boots an edge in five years time as they move further into this industry.

Extensive primary and secondary research was carried out across the target demographic I was assigned, the “Course Correctors”: busy individuals between 25 and 35 whose lives are consumed by work, social obligations, or both, and whose schedules are sporadic.

This demographic, as it turns out, is particularly good at seeing through vacuous marketing schemes, and generally cited a lack of time and money to invest in new “stuff”, of which the wellness industry appears to be full.

This is a difficult perception to work around because it is entirely correct.


The wellness industry churns out repackaged products with higher price tags at an alarming rate. It makes bold statements and often fails to deliver, and it relies on the fact that “wellness” has no set definition to let the consumer fill in any perceived holes in the premise of the products they encounter.

Even when a ‘wellness’ product comes from a place of genuine desire to do good, it can fall into a category we dubbed “faster better stronger”. The self-improvement on offer becomes yet another standard to hold oneself to, another unobtainable image to aspire to and feel inferior to.

Products which seek to offer empowerment end up having the opposite effect, enforcing standards and exploiting vulnerabilities. We all remember the infamous “Are You Beach Body Ready?” campaign.


My research also surfaced some critiques of Boots, which seemed to be looking for a brand pivot for fear of shrinking market share, and believed wellbeing was the ticket.

Yet when we examine Boots’s competitors we find the opposite is true: those competitors are thriving for the exact reasons that are supposed to be Boots’s founding principles, such as accessible healthcare and beauty.

The largest competitor is Superdrug, which saw a massive increase in market share by pivoting further into healthcare, from a 60/40 split between healthcare and beauty to 70/30.

Public perception research showed that Boots does maintain its image as the quintessential high-street pharmacy, but that it is viewed as trying, unsuccessfully, to be an ‘everything store’. One comment succinctly summarised this: “it’s like a supermarket, only I can’t get coffee”. In store this is obvious: you must battle your way through the beauty stands to get to the one thing you came in for, uninterested in browsing, because why would you? In the flagship store you see the dedicated Wellbeing section, one third of which is taken up by a desk, another third by the same items you could find elsewhere in the pharmacy, and the last third by an Innocent Smoothies cart stall.


The wellness industry is expected to shift to a more 21st century mindset that, in short, refocuses product and service offerings away from simple pre-packaged solutions and embraces the complexity of real people, their experiences, their non-linear journeys and desires.

This poses a whole new set of challenges for an industry that is too often quite happy to slap gold embroidery on smelly soap and call it aromatherapy at 300% of the typical price.

Service providers must, if they are to survive, pivot to providing flexible, ongoing, bespoke yet accessible solutions to real problems, rather than arbitrarily plucked challenge vectors for someone else’s idea of aspiration.

Nutrition & Britons

During interviews with Wellness Ambassadors at Boots Covent Garden, it came up that over 50% of people in the UK do not get the recommended nutritional intake, largely due to our diets.

This involves not just what those diets consist of (i.e. which foods) but the conditions of the ingredients and the food production methods. This nutritional deficit is in stark contrast to even our European neighbours, and is a complex yet fascinating topic.

This Wellness Ambassador was specifically there that day to promote a new line of nutritional supplement products, stating that those who already take supplements do so out of habit, and those who do not are unlikely to take up the habit. The campaign was aimed at normalising the casual use of supplements alongside healthy diets.

Further research corroborated these claims and added that one of the biggest barriers to uptake of supplements (where necessary) is that most people believe themselves to be healthy. Why pick up a supplement habit just because the average intake is sub-optimal?

Wellness Baselines

Another premise occurred to me: what is the point of innumerable products to improve our health, make us happier, give us more energy and so on, if we are starting from a point of nutritional deficiency? How can people feel empowered to improve their wellbeing if a baseline of wellness does not already exist?

It was from this starting point that Tailored Nutrition was born: the idea of finding ways to give people insight into their base nutritional levels, as well as ways to action the changes they wish to effect. This would involve several service hooks, giving customers options over their level of engagement and creating multiple monetisation vectors for Boots.

Counsellors often talk about the concept of baselines with regard to mental health, the basic premise being that life’s ups and downs are best navigated with an emotional ‘status quo’ to return to. If we, as individuals, know that we can and will return to this baseline, we can deal with situations with the confidence that “this will pass”.

Tailored Nutrition Overview

Tailored Nutrition was designed to have several customer hooks but three core user journeys. The user journeys envision different levels of engagement and seek to ‘capture’ casual yet loyal customers while generating new ones.

  • Customers motivated by a deficiency or suspected deficiency they wish to address.
  • Customers already engaging in supplement-taking or some dietary action related to health they wish to audit.
  • Customers interested in a casual audit or ongoing programme of food / intake tracking, who would benefit from insight into this diet.

Tailored Nutrition was a system with three main deliverable touch points:

  • Home Blood Test Kit: A lancet device forming the core of a new testing kit, aimed at solving critical turn-off issues and accessibility issues with standard kits.
  • A React Native App: A delivery system for test results and a platform to perform non-invasive diet queries. This is the main implementation of the user actions; a means to explore methods to action deficiencies.
  • The Boots Nutritional Model: A data model which would power the exploratory aspects of the app in addition to in-person guidance within consultation services.

Lancing Device

Standard home blood testing kits centre around small devices called Lancets, which are used to pierce the skin. The user then allows the target area to bleed openly into a small vial which is posted to the processing centre.

This is a clumsy procedure but also a psychologically taxing one. The vial is usually small and perched on a flimsy tray; often blood is wasted and a second, third, or even fourth puncture is needed. Lancets are often pressure activated, requiring the user to push in until they ‘click’ and the needle extends.

This is like deliberately stretching an elastic band to snap it, the pain is relatively minor, but the anticipation is agonising.

From personal experience, the last time I used one I threw up; another person I talked to said they fainted.

This new device attaches itself to the user’s finger and holds the vial internally. The device applies pressure itself, removing the requirement for pressure-activated lancets (a reusable one could be used, for example). Using the lancet through the device adds a psychological separation of actions which eases the anticipation.

The forward component is a stiff medical-grade silicone, meaning that it will retain its shape with minimal blood spillage. The user can then immediately drop their arm and allow the vial to fill.

In addition to this device, the kit would include moisturising wipes laced with a mild anaesthetic, to follow the alcohol-based wipes. This would ensure the contact area is not dry and further ease the pain of the puncture.

Native App

The app serves the purpose of delivering data from the blood tests, but it also audits the diets and symptoms of users who haven’t used the kits, in order to estimate their levels.

The app then offers suggestions in the form of Boots products, recipes, and diet alterations as suggested actions for desired effects.

Traffic light system

Instead of simply throwing data at the user, the app employs a jokingly named traffic light system. The joke is that its intended effect is the opposite of the common UI pattern where users are presented with ‘good, medium, bad’ scales and coerced into taking action.

The app splits data into “Insights”, “Changes”, and “Actionable” categories. Insights are auto-generated suggestions or noteworthy information created based on data provided by the user. Changes are created when a trend alteration from the user is noted, for example a drastic change in a particular level that is neither good nor bad. Lastly, Actionable is the one area where users are given dietary interventions to tackle levels which are out of recommended bounds. It should be noted that any serious threat to the user’s health would trigger an alert outside of this system, with the potential to notify a customer’s healthcare provider.
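As a sketch of how that three-way split might be computed; the threshold and field names here are my own invention for illustration, not the app’s actual logic:

```typescript
type Category = "Insight" | "Change" | "Actionable";

// Hypothetical shape of one nutrient reading.
interface Reading {
  nutrient: string;
  value: number;     // current measured level
  previous?: number; // last recorded level, if any
  min: number;       // recommended lower bound
  max: number;       // recommended upper bound
}

// Out-of-bounds readings become Actionable; large swings that stay
// in bounds surface as Changes; everything else is an Insight.
function categorise(r: Reading): Category {
  if (r.value < r.min || r.value > r.max) return "Actionable";
  if (
    r.previous !== undefined &&
    Math.abs(r.value - r.previous) / r.previous > 0.25
  ) {
    return "Change";
  }
  return "Insight";
}
```

Note the ordering: a reading that is both out of bounds and swinging rapidly is always Actionable first, matching the idea that only out-of-range levels trigger dietary interventions.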

Visual Design

The aesthetic style was inspired by and extended from Boots’s recent brand overhaul, some of their physical promotional material, and a set of aesthetic developments shown here on their temporary landing page during the beginning of the COVID-19 pandemic.

The site reverted these changes sometime in mid-May but has been slowly reintroducing elements into the main site design.

This style is characterised by large block elements, overlapping block highlights, “shadow” elements adding variable emphasis, and a simpler colour scheme.

This was combined with the ‘standard’ Boots design characteristics such as lower-case text, calming colours with soft blues at the forefront, high contrast, and large touch areas.

Boots Nutritional Model

To inform the way data relating to nutrition sources is presented to the customer, I found it necessary to define a model to split items into hierarchical taxonomies.

  • Nutrient: A vitamin or mineral
  • Super Category: Encompassing groups such as “Vegetables” or “Fish”
  • Category: More specific typology of food such as “whole grains” or “root vegetables”
  • Sub-category: Used to give categories more flexibility, for instance distinguishing between “dark leafy veg” and “leafy veg”
  • Food Instance: A raw item or foodstuff
  • Variant: Used to give instances more flexibility, for example “chicken egg” vs “egg”
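The hierarchy above can be sketched as TypeScript types. The level names come from the model itself; the field shapes are assumptions for illustration:

```typescript
// One level per taxonomy tier, each linking upward to its parent.
interface Nutrient { name: string }                        // e.g. "Vitamin A"
interface SuperCategory { name: string }                   // e.g. "Vegetables"
interface FoodCategory { name: string; parent: SuperCategory }
interface SubCategory { name: string; parent: FoodCategory } // "dark leafy veg"
interface FoodInstance {
  name: string;
  category: FoodCategory | SubCategory;
  nutrients: Nutrient[];
}
interface Variant { name: string; instance: FoodInstance } // "chicken egg" vs "egg"

// A tiny example slice of the model.
const vegetables: SuperCategory = { name: "Vegetables" };
const rootVeg: FoodCategory = { name: "Root vegetables", parent: vegetables };
const carrot: FoodInstance = {
  name: "Carrot",
  category: rootVeg,
  nutrients: [{ name: "Vitamin A" }],
};
```

Because each tier points to its parent, queries such as “all instances under Vegetables” or “all sources of Vitamin A” become simple walks over these links.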

This model can be used to create data visualisations of how items relate to user-defined goals such as these.

Lastly, as time moves on, this database of recipes, collected user trend data, and nutrient sources would grow, allowing Boots to identify areas of focus, trends (e.g. whether some deficiencies are seasonal) and so on.

In the immediate timescale, this could inform product promotional forecasting and improve response times for customer consultations in Boots’s other services.

Longer term, this could be used to acquire new food sources to stock in appropriate branches. The reason for this is that, with particular food items sought for nutritional benefit, the actual value often depends on the source: for example the soil it was grown in, the waters it was fished from, or the storage conditions.


This may seem like a departure for Boots, to suggest that deli items and raw ingredients take a place next to other items in store. However, I would submit that Boots’s ranges are already radically diversified beyond what a pharmacy is at its core.

Beyond the fact that Boots already sells food, including raw (non-processed) items, I submit that some people already treat this tailored sourcing of their diets in the same way that exotic lotions and other items are marketed for their unique capabilities. The distinction is that, for most of us, this level of insight and tailoring is too time consuming and the sourcing too expensive.

Tailored nutrition aims to democratise that work and offer options to establish baselines of wellbeing on which we can build.

Bloqs | The Design Against Crime Research Center

Bloqs is a Social Game designed to facilitate and rekindle relationships between incarcerated fathers and their children on the outside.

Bloqs was the outcome of a six month project by a small team of four, including myself, operating within the Design Against Crime Research Centre, at Central Saint Martins, University of the Arts London.

The team included Alexis Bardini, Alexandra Evans, Xiangie Li, and myself.

During this time, we conducted extensive research and iterated over a range of potential design directions. We attended a conference run by South Denmark University’s Social Games Against Crime team, who were responsible for Captivated, the social game Bloqs is based on. We ran numerous workshops and interviews within prisons, with social care groups, and with individuals.

Bloqs Game

Bloqs is a Jenga-like tower stacking game where each piece, called a ‘Bloq’, has two questions or prompts written on each main face and a colour which indicates the ‘level’ the prompt corresponds to.

The players take turns drawing a Bloq from the stack and choosing a prompt, which will ask them to perform an action or answer a question, sometimes involving other players. If the response is deemed acceptable by the others, the player must place the Bloq back on top and receives the corresponding money.

‘Levels’ refers to Social Penetration Theory (SPT), or onion theory, a theory of interpersonal communication which seeks to describe how relationships move from weak and relatively shallow to deeper and more intimate connections. The colours correlate to where on the SPT the prompt aims to stimulate interaction:

  • (Red) Likes and Dislikes
  • (Orange) Goals and Aspirations
  • (Green) Religious and Spiritual Convictions
  • (Blue) Deep Fears and Fantasies
  • (Purple) The Concept of the Self

The game involves a currency called ‘Squids’, based on the British slang for pound sterling, ‘quid’. On a successful response to a prompt the player is rewarded with Squids, with higher levels rewarding more.

This plays on the inherent tension between two people who are not close, balancing that tension against the gameplay.

The player is incentivised to move down the ‘layers of the onion’ (take progressively higher-level Bloqs) by the financial reward, which plays off against the inherent resistance to answering more difficult or vulnerable prompts.

This balancing, which the players will engage in, is completely neutral; that is to say, a genuine trade-off is made between comfort and the desire to win. No coercion or forced progression through the levels is employed.

In addition, the gameplay format, revolving around a progressively more unstable tower, creates a physical tension and focuses the players’ attention. Even if their attention wanders during the game, the moments of interacting with the tower magnetically draw players to a single focus point. Afterwards, the tension is eased and the players balloon back out again.

This was one of the biggest yet most subtle challenges our team faced when deciding on the format of the game, having realised through previous iterations the risk of forced progression and vulnerability.

The game ends when the tower falls, with the last person to interact losing 50% of their currency. The winner and losers are then determined by how much money they have, although the game is not really about who wins.
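To make the scoring concrete, here is a small sketch. The post does not specify exact Squid amounts per level, so the reward values below are invented; the end-game rule (last toucher loses half their Squids) is as described:

```typescript
// Hypothetical reward table: deeper SPT levels pay more Squids.
const rewards: Record<string, number> = {
  red: 1,    // likes and dislikes
  orange: 2, // goals and aspirations
  green: 3,  // religious and spiritual convictions
  blue: 4,   // deep fears and fantasies
  purple: 5, // the concept of the self
};

// When the tower falls, the last player to touch it loses half their
// Squids; standings are then ranked by remaining currency.
function finalStandings(
  balances: Map<string, number>,
  lastToucher: string
): [string, number][] {
  const final = new Map(balances);
  final.set(lastToucher, Math.floor((final.get(lastToucher) ?? 0) / 2));
  return [...final.entries()].sort((a, b) => b[1] - a[1]);
}
```

The increasing reward per level is what makes the comfort-versus-winning trade-off genuine: a player can sit safely on red prompts, but only at a real cost in Squids.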

The game is built on the archetype most commonly recognised as Jenga in order to utilise preexisting associations with the game and avoid having to make players familiarise themselves with an entirely new game format.


It is not a particularly radical notion that Prison as a system is ineffective as a means to rehabilitate.

This is especially true in systems such as the UK justice system which relies heavily on punitive action, both during and after sentence, and has a rapidly increasing population.

When opportunities are missing and rehabilitation is not an option, incarceration can feed into a vicious cycle of crime, anti-social behaviour, and deprivation.

A component of this is the role of relationships between incarcerated fathers and their children. Children and adolescents with stronger connections to their parents are far more likely to stay healthy, do well in school, and avoid crime cycles than those whose relationships break down.

In their younger years, kids will be brought to visit by relatives; as adults, they understand the importance of visiting themselves. It’s in their teen years that people begin to drift apart and stop visiting.

Bloqs was designed in response to this issue: how can we, assuming no radical change in incarceration trends, help incarcerated fathers reconnect and maintain connections with their children on the outside?


The Bloqs project began with the notion of creating a cultural translation for the game Captivated, by the Social Games Against Crime team at South Denmark University.

Captivated is a board game, laid out similarly to Monopoly, which represents a fictional Danish prison. Players progress through the prison, encountering characters who belong to groups or factions. The players learn about the characters and their experiences while also collecting Story, Action and Be Honest cards.

Story cards tell stories about the fictional prison, including interactions between the characters, intended to demystify prison for the child. Action cards ask the player to perform a physical challenge, intended to increase intimacy and bonding through touch. Be Honest cards foster interpersonal communication through the telling of personal stories.

The winner is the first to build up enough currency to buy the prison master.

Prison System Context Differences

Not only is Denmark’s population far smaller than the UK’s, but its prison population is only 4,000, compared to the UK’s 96,000 at the time of research.

Danish prisons are much more standardised than the UK’s; one prison more or less looks like the next. In addition, the level of amenities is far higher, with a strong bent towards rehabilitation.

When visiting, the visitor and inmate have a private room for a period of time (typically one hour, we were told) during which they have a table and other items in complete privacy.

Contrast this to UK visits, which are typically held in one large room on an array of small clusters of tables and chairs. Guards line the area and patrol, with a control tower located somewhere. Inmates are only allowed brief moments of physical contact at the beginning and end of the visit, and could have the visit cut short and be subjected to a strip search if they violate this.

Inmates wear high-vis vests and must remain seated in a specially designated chair. In one prison we visited, tables were simply sawn-off logs with plastic chairs chained to them.

The one exception to this is open days, or family days, in which dedicated space is cleared for games, bouncy castles, stalls and such. Families can spend up to eight hours alongside inmates and many of the restrictions are removed.

The context of where the game was to be played was the most challenging aspect of the entire project, one which we were not able to resolve for standard visits. We decided that Bloqs would be constrained to family days, and could have applications in therapeutic settings outside of prison.

Bloqs’s questions were designed to probe areas our research found pertinent to the target users, as opposed to Captivated, which focused on using the prison itself as the common ground for bonding to occur.

During development, Bloqs was tested with different user groups including younger kids, university faculty, ex-convicts from a women’s prison, retirees, and each of our families over the Christmas break.

Serious and Social Games

A picture taken of South Denmark University during our visit
a promotional shot of a board game similar in design to monopoly
Captivated, the social game by Social Games Against Crime

The term ‘social games’ comes from the definitions laid out by Thomas Markussen and Eva Knutz in Playful Participation in Social Games (2017).

In short, a serious game is one which aims to engage the player in some form of intellectual activity, using the context of gameplay to facilitate that engagement, where enjoyment is not the motivation for play. An example could be a simulator for acting in a profession under a crisis situation, like a medic making quick decisions in a war zone.

There also exist Health Games, which aim to perform an auto-therapeutic function for a patient, helping them to process an event or ailment.

A social game goes further, shifting the primary engagement away from the game and onto the relationship between players. Social games use the game context to facilitate an interaction between players for the purpose of exploring or developing the relationship between them.

Presentation of Bloqs + Site

website landing page
Bloqs on display at the 2019 degree show

Bloqs was presented during the Summer 2019 Central Saint Martins Degree Show under the Diploma in Professional Studies section.

In addition to the physical prototype, I designed and launched a promotional site for the project.

I included some new-at-the-time features, such as an interactive diagram built with D3.js to explore the SPT. Most of the graphics were taken from, or inspired by, the instruction set and the packaging’s visual language.


The project began with the intention that Captivated could be directly ported to a UK context, so the first three months before Christmas were spent building games around its general layout and mechanics.

This involved workshops modelled on the type run by Markussen and Knutz, to see if we could build characters which roughly represented sections of the population without stereotyping.

A worksheet designed for a brainstorming workshop with inmates
Both sides of an example card for one of the “Bikers”.

Doubt was present from the start as to whether this technique, which Captivated relied upon, was useful. The characters were deliberately cartoonish and exaggerated to add an element of humour. We trusted that the characters were appropriate for a Danish audience, but could not distract ourselves from the more problematic elements of the game: for example, the gang entitled “The Ghetto”, which featured two Muslim men, one of whom had 14 children and the other, named Kebab, whose interests involved marijuana and kebabs.

These doubts were compounded when we struggled to get anything close to a taxonomy of inmates to model characters on.

It was already obvious that the UK differs vastly in local culture and idioms, that a joke common in one place may have different connotations in another, and that there was a complex interplay of class and deprivation along strong location-based lines. As it turns out, this is reflected in the prison system, where we were told that gangs often form around postcodes and social hierarchies are complex.

This meant that, even if we could create a set of inmate archetypes that were flexible and relatable as characters, we would struggle to make them relevant beyond one or two prisons. This was the primary reason that Bloqs moved away from the prison context entirely.

The first Bloqs iteration came from a need, while exploring alternate game modes, to rapidly test random prompts in batches, in an organic fashion. The game worked well enough that we showed it off as a side piece during a faculty end-of-year party, and it ended up overshadowing the main game. A few more workshops later it was clear that we had found the new format, and the old game was discontinued after the Christmas break.

While running early workshops with both inmates and children, it quickly became clear that the Be Honest cards were the most interesting part of Captivated. We developed the adaptation based on this premise, orienting the gameplay around encountering as many of them as possible.


During the period when we were still working with a direct adaptation, I came up with the new circular board, designed around advice from a prison commissioner we were in contact with, who suggested that a dart-board radius would be the most likely to work across multiple prisons. I designed little 3D-printed pieces, modelled on objects associated with prison.

I also elected to come up with the aesthetic and design of the currency, which we named ‘Squids’. The design started almost as a joke, using the image of a squid in place of a dollar or pound sign; however, after minor alterations, the theme was made subtle enough to add an amusing touch to the design. I endeavoured to make the aesthetic mature, with a realistic touch, but still easily identifiable as mock game currency.

Choosing the colour scheme was a long and involved process for the team; we wanted something friendly and calming but not child-like or likely to be perceived as “soft”.

My colleagues took on the task of manufacturing the Bloqs, first by laser-engraving the text in a single batch process, then by developing a technique to place multiple Bloqs together so the outsides could be painted all at once. The last detail was to use a punching set to engrave numbers on the sides, corresponding to the level of each Bloq, so that new players would not need to keep consulting the instructions.

We also worked to solve an issue introduced by the currency: more loose items meant more opportunities for pieces to be lost, with little chance of replacing them. To do this, we settled on a currency tray and iterated over a basic design.

I took the design and refined it down to shave precious millimetres off the overall dimensions, as space was at a premium. The final design was 3D printed in three parts and glued together. This was then used as the basis of a polyurethane mould, which we would sand down and spray paint.

I then attempted to use one of these to vacuum-form an acrylic sheet, but the result put too much stress on the edges and corners. The solution would have involved inflating the shape to increase the radii of all the corners, which would have been infeasible.


In retrospect, the decision to drop the old format in favour of Bloqs was an obvious step; in fact, one could argue that it made sense to drop the old game much earlier, that its flaws were too obvious.

Yet it took me a while to get used to it; I was stuck in the mindset of developing the old game and found the change abrupt and out-of-tune. Of course, in hindsight it was obvious that I was too close to the project, and that a widening of perspective was necessary.

This changed the way I looked at projects going forward, always mindful that metaphorically having my nose to the screen could damage my ability to guide the overall direction of the project. Breaks are good, guys!

Right now, more than ever before, fundamental flaws in our societies and institutions are becoming more obvious, more pronounced, more unavoidable.

You will have noticed that I make no bones about the fact that I was, and am, a staunch prison-abolitionist.

That’s another thing I took from DAC: their no-nonsense refusal to mince words or stances. We figure that other people, people in positions of influence, are plenty capable of making rationalisations and downplaying things on their own. There is therefore no need for you and I to soften our statements. If anything, in making ourselves more bold, more direct, and more unavoidable, we simplify things for everyone.

This was one of the best projects I’ve had the privilege of working on. However, one thing that sticks with me to this day is a subtle yet nagging question which looms over any endeavour like this one.

“Are we just adding a plaster to a gash?”, “Are we providing a method to ignore the root cause of the problem, rather than solving it?”, “Are we facilitating each other in avoiding an uncomfortable reality?”.

Working in prison is surreal: the architecture looks almost like a school, only the fences are too high and the locks too thick. You see constant reminders that you are surrounded by institutional violence. When working with the men, who are ostensibly the same as any random person you could encounter on the outside, you forget that they will not get to go home, that most of them are probably still there as I write.

I was surprised one day to see a design from one of my year group come across my Twitter feed, in the context of an article about the ‘Safer Cell Furniture’ project, which aimed to reduce the number of self-inflicted injuries and suicides by creating safer furniture.

Think about that for a second. Put aside the actual designs and the real need for more comfortable, more human furniture for a moment.

We have a problem; too many people are going into this system and not coming out, or are coming out traumatised. And our solution is to design furniture that makes it harder to kill yourself? Ok, so we do just this, the statistic “number of dead people” goes down, and… that’s that?

I was impressed by much of the work produced for that project, while having the same reservations as described above. That being said, I could not disagree with the derisive comments left below, much in the tune of the last paragraph.

One comment in particular jumped out and has haunted me ever since, mainly because it put into words something I had been trying to wrap my head round for a long time.

In my experience, the problem with design thinking is that it rewards thorough answers to exciting questions over useful answers to boring questions.

Twitter user @nthnashma 08:14 9/4/20

Because of this, I soul-searched during the project and long after it ended, asking if we were simply creating a new, exciting way to avoid thinking about the real problem.

In the end, the conclusion I came to is that our project existed within a very defined scope: we had a unique opportunity and insight to inject something into a pre-existing relationship that would have existed regardless. In other words, it was never possible for this project to “fix prison”.

This can become a dereliction of duty, a chance to turn our faces away from the bigger problem the moment we stop doing work at the boundary’s edge. We must view projects like this as one small vector of attack on the problem, and then go further, asking what other ways we can effect change.

Our intervention is a clean-up, and it’s good at being one, but it should never be called a solution, or anything close.

For example, for many in DAC this meant confronting people in positions of authority, making them confront uncomfortable truths, never passing an opportunity to apply pressure.

The MoodBoard System | Matter of Stuff

Matter of Stuff are a bespoke design agency specialising in connecting clients with their vast network of manufacturers and craftspeople.

Matter of Stuff found themselves in need of a way to streamline the involved process of developing proposals for large-scale projects with clients. They needed something collaborative and fast, offering flexibility in working methods and presentation structure.

The result was the MoodBoard system, an integrated curation and display system which displays products, materials and media from the MoS website and catalogue and allows rapid prototyping and iteration of anything from a full project proposal to a simple colour scheme.

The app is built on principles of feed-forward data, with immutable patterns held to where possible and DOM changes staged into render cycles which, in turn, attempt to avoid re-rendering unchanged sections.

The only JS library used was jQuery given its presence on the site pages by default due to the CMS. A jQuery plugin library called Gridstack was used to create the grid snap functionality.

The final version, showing a dummy proposal for a hypothetical Fora branch

The Problem

When Matter of Stuff work on larger projects, the core team may be tasked with designing the interior of an entire building: high-end venues where every detail must be carefully sourced, made to fit with the rest, and manufactured bespoke.

One aspect of their working method involved creating documents which were, in essence, rolling design proposals. One page may have material swatches for a type of chair, the next a series of images describing a desired ambience, another will detail the layout of a room or floor, another lists technical specifications, another is colour combinations, etc.

These documents not only describe the overall vision of a project, but are ‘rolling’ in the sense that multiple stakeholders would make changes before sending the document on to the next. The documents were pieces to have conversations over, more than just displays of specifications.

This was primarily done with Google Slides, which created a range of issues: content security policies, document permission control, and images becoming out of date as products changed on the main site, to name a few.

The Solution

an image being adjusted in situ
A demonstration of the custom built crop tool

A number of potential interventions were discussed; eventually we agreed that I would build a Google Slides-inspired system specifically tailored to their site’s ecosystem.

In addition to its functionality, the resulting site speaks to the bespoke, high-end image Matter of Stuff seeks to project. This is not just another collaboration tool, this is their collaboration tool.

Each MoodBoard presents one or more pages, which appear like slides in a slideshow program, the style heavily influenced by Google Slides, which the team relied on before. The content on the slides snaps to a grid, can be resized by clicking and dragging, and will ‘nudge’ smaller elements out of the way. This constrains the free-form flexibility of a slideshow program to allow for rapid layouts without worrying about item sizes or alignment.

Gone are the issues of resizing items to match each other, spending time lining them up, and getting frustrated when they inevitably still end up skewed. The curator is freed from the tyranny of infinite choice to focus on the actual content.

The MoodBoard can display just six types of content:

  • MoS Products
  • MoS Material Library Samples
  • Files on the CMS
  • Text Boxes
  • Images (internal or external)
  • Colour swatches

These were chosen based on what types of content were most often used in the old working process on Google Slides, with the addition of files which previously necessitated a hyperlink.

The text box input
a successful product search showing wall lamps
Live updates allow the user to find content quickly without typing verbose queries

For content search (product, material) and image URLs, live updating is used as the user types. To keep the strain on the server low, recent items are cached on page load and shown first, with a database query only performed once the user has stopped typing for a while.
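As a rough sketch, the cache-then-query pattern described above might look something like this (the names `filterCached` and `makeDebounced`, and the 300 ms delay, are illustrative assumptions, not taken from the MoodBoard source):

```javascript
// Delay before hitting the server once typing pauses (assumed value).
const DEBOUNCE_MS = 300;

// Recent items are cached on page load and filtered locally while the
// user types, so something relevant appears immediately.
function filterCached(cache, query) {
  const q = query.trim().toLowerCase();
  if (!q) return cache;
  return cache.filter(item => item.name.toLowerCase().includes(q));
}

// Wrap the real database query so it only fires after the user has
// stopped typing for `delay` milliseconds.
function makeDebounced(fn, delay) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}
```

In use, the input's keyup handler would render `filterCached` results instantly and call the debounced server query in the background.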

For images, I built a custom crop tool which allowed the user to move and resize images relative to their container.

Double-clicking on an item would display its meta-data in place of the Add menu, allowing users to, for example, choose which image on a product is the cover, or interact quickly with a colour wheel.

A history system recorded the last 10 changes made by the user, with standard shortcuts for ‘undo’, ‘redo’, ‘copy’, ‘cut’, and ‘paste’. For instance, a user on a Mac can press Cmd + Z to undo, or Ctrl + Z on Windows.
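A bounded history like the one described could be sketched as a pair of stacks with a capacity of 10 (the `History` class and its method names are illustrative, not from the actual codebase):

```javascript
// Minimal undo/redo history, capped at the last `limit` changes.
class History {
  constructor(limit = 10) {
    this.limit = limit;
    this.past = [];    // states that can be undone
    this.future = [];  // states that can be redone
  }
  record(state) {
    this.past.push(state);
    if (this.past.length > this.limit) this.past.shift(); // drop oldest
    this.future = []; // a fresh change invalidates any redo history
  }
  undo() {
    if (!this.past.length) return null;
    const state = this.past.pop();
    this.future.push(state);
    return state;
  }
  redo() {
    if (!this.future.length) return null;
    const state = this.future.pop();
    this.past.push(state);
    return state;
  }
}
```

The Cmd/Ctrl distinction from the text then reduces to checking `event.metaKey || event.ctrlKey` alongside the key itself.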

The slide navigation icons each show an SVG representing the data on the page which updates on page-away, using a custom built function to dynamically create the thumbnails.

Previous Directions

Across the duration of this project, a number of different versions were created and iterated upon until we were satisfied with a modality that met all of MoS’s needs.

An early prototype designed as a proof-of-concept for Gridstack
The next iteration added pages as opposed to one continuous masonry layout.
Version four featured a dynamically-sized board and was going to have a drag-and-drop interface. Drag and drop made it into the final version as an alternate way to add new items.
Version five saw much of the same functionality that would reach the live version, but used a minimalist interface where clicking on an area would show this menu above the cursor. The sub-menus would appear in place and functioned the same as the sidebar menu in the live version.


When I first started out in design, my only motivation was to contribute something, to ‘save the world’. For my final year project I forwent personal comfort to platform an issue that affects so many of my community on a deep personal level.

Mosaic Voice is a consumer-viable Electroglottograph (EGG) designed to help transgender people (specifically trans women) perform voice therapy training.

Comprising two parts, a wearable EGG and a supporting app, Mosaic was conceptualised as an extensible system, providing the basic software while region- and use-case-specific needs could be met via a library of plug-ins.

This was based on first hand experience and extensive research which indicated that any solution would be region-specific and require outside experience for each subsequent location / context it was to be applied to.

Voice Therapy Training

Voice therapy training is a sensitive and highly personal process which poses significant practical difficulty and emotional strain for trans people with vocal dysphoria. Resources are scarce and sparse, and embarrassment pervades due to the stigma around adults performing vocal training.

Democratised design can facilitate this process by creating assistive tools and means to alleviate the emotional difficulties incurred.

Voice therapy training (VTT) is a broad term for practices and training regimes designed to modify the voice to take on a different character. It can be utilised by singers and actors, but also by people recovering from injury and by trans people, in particular trans women such as myself, whose voices are not affected by hormone replacement therapy.

Issues with Specialist Help

Specialist tutors & therapists exist to help with this process, offering guidance, techniques, coaching and so on, usually over a long period of time. I personally make use of the excellent service by Christella Antoni, who takes a holistic approach to sessions, integrating social aspects and involving the client in the process. It should be noted that I am not affiliated with Christella Voice, my opinions are my own and genuine.

This works well to tackle the individualised nature of voice and offer professional guidance; the issues arise from the fact that so few people actually offer this service. People travel from across the country to attend sessions because there is nowhere closer, the cost of travel tickets exceeding that of the sessions. And even then, one person can only see so many clients.

There’s also the issue that many trans people are, for lack of a better term, broke.

Trans people are massively discriminated against in work, promotions, housing and healthcare, among other things. This leads to a significant majority living below the poverty line, which makes access to services like Christella’s difficult for some to manage. Additionally, you must consider the vicious cycle: one’s ability to “pass” is largely dependent on voice, which has a compounding effect on the discrimination faced, which leads to more poverty, and so on.

This leads to many of us (if not most at some point), turning to the idea of doing it yourself…

Issues with DIY

Many people turn to resources on the internet, such as YouTube videos, the few training apps which exist, and the occasional Reddit post which is cited by everyone you know and threatens to disappear one day. Also, you have to tolerate looking at Reddit.

The issue boils down to the fact that, with, say, a YouTube video, you are seeing a “patient’s” personal reflection on what they deem to be the most memorable aspects of a highly personal journey, as opposed to structured content.

Even when structured content is available, it is often nullified by the lack of tutoring context and practice structure. There is the issue of “hearing yourself”: gauging correctly where your progress stands, what you should still be working on, and knowing when to celebrate achievement.

Three Target Issues

This led me to define three topic areas to explore as the basis for my dissertation:

  • Visualisation: concerning the issue of self-reflection, issues around hearing yourself and your progress
  • Tool: A set of techniques and resources, not to supplant specialist support, but to aid in self-practice
  • Goals: Looking at the auto-therapeutic aspect of VTT, aiming to address the psychological distress and discomfort as well as help people define obtainable targets.

Electroglottographs and Self-‘Visualisation’

The ‘desktop’ version of the Laryngograph which I have had experience with

One of the most useful physical tools employed by voice therapists is an Electroglottograph, a device which provides in-depth data on the behaviour of the larynx and audio aspects which comprise the voice.

These devices are difficult to come by, large, clunky and expensive, hence few people who are not specialists are likely to own one.

Their function is effectively quite simple: an electrostatic field is created across the contact probes, which are pressed into the client’s neck. The various vibrations and noises interrupt the field across various channels; these are picked up by the device and processed into usable data by the software.

Rethinking the EGG

A Precedent

Initially, I assumed that designing a new EGG was impossible, so I pushed the idea away. In early December (just as everything was winding down for the winter break), I came across two articles concerning the creation of DIY EGG devices.

I am research- and engineering-driven in my approach; I was not willing to create a project based on the assumption that a new type of EGG could be made without first seeing something to indicate its validity, and then packaging this in some way into a proof-of-viability / proof-of-concept.

The first resource I came across was this project by Marek Materzok, which documented their process of making and refining an EGG device from scratch. There was not enough information to replicate the process, but it offered an initial insight into some of the challenges such a device would face (namely noise filtering and the best way to create the oscillation).

This led me to this tutorial on Instructables of all places, DIY EEG (and ECG) Circuit by user Cah6, which gave details and specifications for building an electroglottograph from simple components. This was all I needed to tell me that it could be done.

I made inroads into building my own version but decided to allocate my time elsewhere given how late on in the project I was.

I decided on a wearable typology given the ergonomic difficulty encountered with the strap-on probes. The device hooks around the user’s neck, designed to cradle on the shoulders. The hinge components on each arm flex to adjust for neck sizes while retaining points of tension on the probes.

Most of the circuitry and interface buttons were placed at the back, with the batteries closer to the shoulder blades. This was to achieve weight balance on the arms, but also to ensure that any imbalance would only serve to pull the probes against the neck rather than let the device slide off.

Development sketches for the wearable
a diagram of the wearable
The device stretching to fit a larger neck.

Chips on Both Sides

Using Cah6’s article as my template, I found an optimal size for the EGG control chip, which would be placed under another board handling interfacing with the ports and network. This second chip was designed around the Broadcom BCM 2835 controller used on the older Raspberry Pis, given the low cost, versatility and proven record it provided.

Other smaller components, such as the WiFi chip, were also taken from the Pi series. Cost was a primary motivator in most of the design decisions, given that this device had to be as low cost as possible.

render of the CAD model
Two images showing the interface PCB on top of the EGG

The chassis comprises simple injection-moulded nylon and is designed to be easy to disassemble, repair, hack, etc. Screws are standard sizes and not hidden, components are all accessible, and the batteries are lithium-ion AAAs, so they can be swapped out at any time.

Motivation and Repetition

For the software end, I laid out a feature map for a mobile app including:

  • A modular daily training system: Inspired by the Enki app, this would show ‘Pathways’ which would utilise the following tools to guide users through speaking exercises.
  • A set of quick practice tools: These would show a simple animation or instruction and allow the user, in bursts of 30 seconds or so, to practice some aspect of breathing or warm-up.
  • A pitch sample recorder: An area to record and sample the voice over a piece of sample text to view pitch over time.
  • A resonance estimator (using neural networks): While the EGG is needed for accurate resonance sampling, this would provide a middle ground for people without financial access. Using a pre-trained convolutional network, an ‘estimation’ of resonance levels could be pooled. This would record the samples in the same area as the pitch sample recorder.
  • A continuous listening sampler: Somewhat experimental, this functionality would note samples throughout the day of the user’s voice as they perform their daily activities. This could be used by the user to see how they remember their training in various, uncontrolled environments.
  • A voice pattern matcher: Would depend on finding the right region-specific data set. Another convolutional network would match the user’s voice with one that sounded similar in most respects but could be adjusted for vocal features. This could then be used to practice against and set goals for the user to aim for.
  • A voice creator (neural networks): Would depend on finding the right region-specific data set, a recursive generator neural network would modify the input voice to be adjusted for vocal features such as softness, tone, pitch, resonance, etc. This would allow the user to, for example “gender swap” their voice to try it out.

I built the frame of a progressive web app to demo these features, implementing what I could at the time and providing dummy data for items that would require live data.


At the time (late 2019), speculation was rising around the concept of “Neumorphism”, coined as a play on “skeuomorphism” by Devanta Ebison. I hadn’t made up my mind about the style, but I saw potential for a textured, soft, welcoming interface which would be great to try for the app.

The result was a pleasingly warm aesthetic. I especially liked items such as the progress indicators, which felt like little gems that you wanted to collect, with empty ‘slots’ for the unfilled sections.

I’d like to write extended thoughts about the topic at some point but, suffice to say, while I liked the unique aesthetic of this app, it only worked due to the colour contrasts and would have faltered if I had followed the pattern where a ‘raised’ section is the same colour as the background.

This is one of the critical flaws of neumorphism (and skeuomorphism, to a lesser degree): its smooth transitions and drop-shadow-facilitated layer/element separation are often incredibly low contrast. This is a problem on displays with lower contrast settings or fewer colour bands, for items viewed at any sort of distance, and of course for accessibility.

The advantage of more Material-UI-esque drop-shadow element separation is that you can still use other features, such as subtle borders, to add definition and get round this issue. Even skeuomorphism (which, for the record, I am not a fan of) relies on heavy gradients and colour mixing to get its textured effect.


The project concluded successfully: I got an A and the presentation was well received. It’s one of my favourite pieces of work and I’m proud to have it as my final year project.

But I still have hang ups.

This was, for all intents and purposes, The Big One™, the final year project, and more than that, it was something I so closely believed in. It wasn’t enough for it to be good or even great; it had to be a masterpiece.

I was afraid my work couldn’t speak for itself

I kept diversifying the system while not building on what was there. It’s true that the solution should take the form of an integrated system, but I remember over and over again not being satisfied with what I had, constantly striving for something that would truly step up to the next level.

In reality, I was having a crisis of design. I saw myself perched between two worlds, one of Product Design and the other of Engineering / Code. I told myself over and over that there was no divide, that we all have ranges of skillsets, but I couldn’t shake the feeling that I was a jack of all trades and a master of none.

So I kept adding ‘stuff’, imagining the basics of a complex system and then trying to work back from there (the engineering approach, when you actually know what that system is). I spent a month on a little breathing exerciser device before stepping back to ask “what the * am I doing? What is this?”

When I shifted hard into the EGG route, I was doing so over the break, working tirelessly while others were relaxing, just to catch up.

I used my specialisation as justification for not re-evaluating

Perhaps what is worse is that the warning signs were there: clear indicators that I should clear my head, define one or two things that the product had to do, and just work on those.

I let myself believe (not incorrectly) that I was uniquely positioned to pull off engineered solutions completely unlike anything my colleagues could do, due to my specific code-based skills. This is a mistake that would unfortunately not fully reveal itself until Tailored Nutrition.

In doing so, I chased these multiple vague threads instead of simply doubling down on the core that ended up being the final outcome. I ended up working twice as hard as some but still “only” producing what I could have under ordinary circumstances.

I assumed that I could just grind to finish

Perhaps the most egregious mistake I made whilst all this was going on was the assumption alluded to above: that I would miraculously pull off the feat at the last minute. As I said, I did do that, but for an outcome which could have happened regardless.

I made a habit of being in the studio from 10:00 until 23:00, calculating the most ‘efficient’ times to drink caffeinated drinks, pushing myself beyond physical limits. I could have halved that time and used the remainder to rest, research, and regain my humanity, but instead I saw time as a commodity to be collected and hoarded as much as possible.

We all have pressing deadlines from time to time. But if you begin to see time itself as an enemy, it’s overdue that you stop and reevaluate.

Page Block System | Matter of Stuff

Following on from the MoodBoard system, as part of the Matter of Stuff site redesign, I implemented new pages using a fully customisable page editing system to give the team full control over their content.

When the Matter of Stuff core team set out to design their site, they wanted an outcome that reflected the bespoke, high-end, and established company which they had built over the past season.

They needed something which not only perfectly reflected the company they are now, but could also be updated instantly and dynamically, given the rate of expansion, events, new flagship pieces, etc. that they engage in.

In other words, they needed a site that would keep up with their pace without having to wait for a developer to be available. Customisability and the agency to control every aspect of the design were imperative, with one word at the centre of everything: control.

Working with the visual design created by Daniel Stout, a freelancer at Matter of Stuff at the time, I suggested using the Gutenberg Block system on their WordPress instance, which had fairly recently been adopted by WordPress as the default editor.

The result was a custom-made series of 20 WordPress Blocks, created with the Block Lab library. These Blocks included a mixture of basic components (headings, paragraphs, page breaks, etc.) and custom layout sections.

The ‘About’ page showing how it was split into blocks. Some blocks like the ‘Icon Call To Action’ are self contained and use the built-in columns system to create layout.

This system would be reusable enough that creating new pages or radically rearranging a page layout could be done in minutes, without needing to worry about whether everything conformed to the Matter of Stuff design style.

Every Block had as many options as possible for customisation of not just all of the content, but also all aspects of layout and display.

Most Blocks had the same basic features to ensure consistency in editing with Block-specific features added on top.

For instance, most blocks featured a wrapping element which defined the element’s position in the document flow. Then there would be a container to provide the layout for the actual content (unless the content is full-sized). CSS flexbox was prioritised over grid, using patterns of cascading pairs. That is to say, if four elements are in a row, this would be two flex boxes nested inside another flex box. This meant that mobile layouts were usually intuitive, but also allowed MoS to swap the left-right and top-bottom alignment of each “level” of content, as shown below.

several iterations of a block component demonstrating different states
Shown here is the ‘What We Do’ block, which principally exists to place text next to an image in equal weight. Here you can see how changes made to an individual block in the editor can alter its layout.
Mobile view comparison for the ‘About’ page
Paragraph and Quote blocks used on the ‘Manifesto’ page.
The ‘Procurement’ page showing a custom image gallery where, in addition to the usual layout changes, the team had options to change the ratio of one image to another.

With the Dual Image Block, the team could enter a ratio using a set of standard formats (e.g. ‘2 : 1’, ‘3/5’, ‘2 |4’), where my code would sanitise the inputs and use them with flexbox to change the ratio of each image accordingly.
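A sanitiser for those ratio formats could be sketched as follows (the function name `parseRatio` and the exact validation rules are my assumptions; only the accepted formats come from the text):

```javascript
// Parse ratio strings like '2 : 1', '3/5', or '2 |4' into a pair of
// numbers suitable for use as flex-grow values; return null on bad input.
function parseRatio(input) {
  // Accept ':', '/', or '|' as the separator, with arbitrary spacing.
  const match = String(input).match(
    /^\s*(\d+(?:\.\d+)?)\s*[:/|]\s*(\d+(?:\.\d+)?)\s*$/
  );
  if (!match) return null; // reject anything malformed
  const left = parseFloat(match[1]);
  const right = parseFloat(match[2]);
  if (left <= 0 || right <= 0) return null; // ratios must be positive
  return [left, right]; // e.g. applied as flex-grow on each image wrapper
}
```

Returning `null` rather than throwing lets the block fall back to an even 1 : 1 split when the editor input is nonsense.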

This carousel showed various gif images of each of the MoodBoard features and was designed in the style of the MoodBoard itself.

While exploring options the team did discuss the option of building the site with a page-builder like Elementor.

I advised them against this option, given that I was able to replicate all the core aspects of customisation they would be looking for without their having to pay the annual fee for the editor.

In addition, page-builders are a great and accessible option for people who don’t mind a bit of compromise; there will always be design ideas which the framework does not support and, the majority of the time, there is a perceptible feel that the result is “off the shelf”.

For a company like Matter of Stuff, we agreed that, short of fundamental technological limitations, no compromise should be accepted.

Nothing should feel pre-packaged.

Rogue-Like React

The final project on the old freeCodeCamp curriculum: a seemingly gargantuan task to push the React framework to create a procedurally generated rogue-like dungeon crawler.

This was my second really big React project and also served as my first introduction to Redux.

The game is grid-based, displaying the user roughly in the middle of the map, rooms branching off in various directions. The player’s view is obstructed by default, showing only their immediate surroundings. The player can move around, pick up items and health packs, and fight enemies by ramming into them, each time calculating damage for the enemy and player.

game screen showing the obstructed view
The Main game screen with darkness enabled.
screenshot of a generated level

The game progresses through levels which are generated completely randomly each time, with a set (or slightly varying) number of health packs, enemies, etc. The player can upgrade their weapon to deal more damage on each hit and prepare for the final boss (a really tough enemy!).

The game was balanced to make it actually challenging to play; there is a genuine trade off between getting more XP, health and risk factors going into the next level. The collectables and diminished view port incentivise exploration of the auto-generated labyrinths.

Games like this and the Game of Life are brilliant for learning fundamental data structures and data-visualisation methods in an intuitive way. Years later I found myself learning about such data structures, not knee-deep in C or Java, but by relating what I was reading back to this project.

The core data structure is a matrix: a two-dimensional array representing the rows and columns, each entry holding cell data. A single Board.js component is tasked with iterating over this array and painting each cell a particular colour: off-white if the cell is a floor, blue if it is a wall, and so on.
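The matrix-to-colour pass can be sketched in a few lines (the `CELL_COLOURS` palette and function name are illustrative; only the wall/floor colours are mentioned in the text):

```javascript
// Map of cell types to display colours; 'blue' and 'offwhite' come from
// the description above, the rest are assumed for the sketch.
const CELL_COLOURS = {
  wall: 'blue',
  floor: 'offwhite',
  player: 'green',
  enemy: 'red',
};

// Walk the 2D board row by row, producing one colour per cell, much as
// a Board.js render pass would before emitting the grid of divs.
function paintBoard(matrix) {
  return matrix.map(row => row.map(cell => CELL_COLOURS[cell] || 'black'));
}
```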

React then handles updates to this board by re-rendering after mutations apply to the array. This was a great project to learn Redux with, given that it is an ideal use case for Redux’s feed-forward immutable patterns.

Given that nothing on the board moves independently, there was no need for a sequencer or ‘game tick’; the only actions come from the player moving. This is one of the realisations that helps to break down the problem early on: effectively, you only have to listen for four key presses.

On player move, the action creator looks at the cell the user wishes to move to and decides on an action based on what is found there; it then calculates the next state of the board and updates it all at once.
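A minimal, Redux-flavoured sketch of that move step might look like this (the state shape, cell names, and the +20 health value are assumptions for illustration, and a real reducer would also clear the consumed cell):

```javascript
// Row/column deltas for the four key presses the game listens for.
const DIRECTIONS = { up: [-1, 0], down: [1, 0], left: [0, -1], right: [0, 1] };

// Inspect the destination cell and return the whole next state at once,
// never mutating the previous state (the feed-forward pattern).
function movePlayer(state, direction) {
  const [dr, dc] = DIRECTIONS[direction];
  const [r, c] = state.player;
  const target = state.board[r + dr] && state.board[r + dr][c + dc];

  if (target === 'wall' || target === undefined) return state; // blocked
  if (target === 'health') {
    return { ...state, player: [r + dr, c + dc], health: state.health + 20 };
  }
  // Plain floor: just move.
  return { ...state, player: [r + dr, c + dc] };
}
```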

The biggest challenge with this project was dynamically creating each level; every level and every game had to be unique. A function was created that followed this basic process:

The centre of the board was found and a starting square of 9 x 9 was turned into ‘floor’ (the default cell being ‘wall’). This was designed to ensure there were no proximity collisions on level load, hoping that the user would not notice that the centre of each map was the same.

A function for generating a single ‘room’ took in a direction as an argument and decided on a size (width × height) within constraints. Then, in the dictated direction, using the previous centre as a starting point, it would move its pointer a random number of cells across (within constraints). In the perpendicular direction it would also shift randomly, but only by a maximum of two cells.

For instance, if ‘moving’ right, the pointer would shift right by a random amount between 2 and 7, then up or down by between 0 and 2.
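That pointer movement can be sketched as follows (the `nextCentre` helper and direction names are assumptions for illustration; the 2–7 and 0–2 ranges are the constraints quoted above):

```javascript
// Random integer in [min, max], inclusive.
function randInt(min, max) {
  return min + Math.floor(Math.random() * (max - min + 1));
}

// Find the next room's centre from the previous one.
// The major axis moves 2-7 cells in the chosen direction; the
// perpendicular axis wobbles by at most 2 cells either way.
function nextCentre(prev, direction) {
  const major = randInt(2, 7);   // along the chosen direction
  const minor = randInt(-2, 2);  // perpendicular wobble
  switch (direction) {
    case 'right': return { x: prev.x + major, y: prev.y + minor };
    case 'left':  return { x: prev.x - major, y: prev.y + minor };
    case 'down':  return { x: prev.x + minor, y: prev.y + major };
    case 'up':    return { x: prev.x + minor, y: prev.y - major };
  }
}
```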

a diagram of the room generator function
This diagram shows how the room generator function checks valid cells and shifts in the x and y directions to find the new centre.

This would then give us a new section of floor space which either overlaps with the previous one (thus extending the room) or, if separated by a wall, has a door cut at a random point to connect the two.

The last thing this function would do is note whether any value fell outside the board boundaries; this allowed the implementation of the last major function: the recursive path generator.

The recursive generator would, for each cardinal direction, keep generating rooms using the previous function, picking a new direction each time, allowing the path to loop back on itself. It continued until the returned data signalled that an outer edge had been hit, at which point the function stopped. This enabled truly unique levels every time.
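A simplified, self-contained sketch of that recursion (the helper names, the fixed step sizes and the 3 × 3 room are my assumptions; the real generator varied room sizes and carved doors):

```javascript
// Random integer in [min, max], inclusive.
function randInt(min, max) {
  return min + Math.floor(Math.random() * (max - min + 1));
}

// Carve a 3x3 'room' of floor around a centre, clipped to the board.
function carveRoom(board, { x, y }) {
  for (let r = y - 1; r <= y + 1; r++) {
    for (let c = x - 1; c <= x + 1; c++) {
      if (board[r] && board[r][c] !== undefined) board[r][c] = 'floor';
    }
  }
}

// Keep picking a random direction and carving rooms until a step lands
// outside the board; that out-of-bounds signal ends the recursion.
function generateLevel(board, centre) {
  carveRoom(board, centre);
  const step = randInt(2, 7);
  const [dx, dy] = [[step, 0], [-step, 0], [0, step], [0, -step]][randInt(0, 3)];
  const next = { x: centre.x + dx, y: centre.y + dy };
  if (
    next.y < 0 || next.y >= board.length ||
    next.x < 0 || next.x >= board[0].length
  ) {
    return board; // hit an outer edge: stop
  }
  return generateLevel(board, next);
}
```

Because each direction is re-picked on every call, the carved path can double back on itself, which is what makes each level's layout unique.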

All that remains is to randomly place a set number of entities per level: enemies, weapon upgrades, health packs and the exit point leading to the next level. A recursive function does this, randomly picking a cell and calling itself again if that cell is occupied.
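That placement step might look like this (cell and entity names are illustrative, not the project's real code):

```javascript
// Pick a random cell; if it is not empty floor (a wall, or already
// holding an entity), recurse and try again. Returns the chosen cell.
function placeEntity(board, entity) {
  const r = Math.floor(Math.random() * board.length);
  const c = Math.floor(Math.random() * board[0].length);
  if (board[r][c] !== 'floor') {
    return placeEntity(board, entity); // occupied: retry
  }
  board[r][c] = entity;
  return [r, c];
}
```

This retry-on-collision approach is fine here because entities are sparse relative to the floor area, so the recursion terminates quickly in practice.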

I also put some small quirks into the game to give it personality, such as the “you died” screen and weapon-progression entries like “Rock on a stick” and “The power of React itself”.

This being a project from summer 2017, it was built using older versions of both React and Redux, which is amusing in hindsight. Who remembers React.createClass({})? Or Redux store subscriptions wired up in lifecycle methods?

a lifecycle method from the old redux

Looking back, this project was a key moment in my development career. Sure, the code is highly inefficient and messy by my current standards, given that three years have passed at the time of writing. There were also some pretty bad mistakes and antipatterns which I learned from later on, such as impure functions called from within the reducer, and functions that dispatch actions from within the reducer.

But the biggest effect this project had was that it let me realise the size of challenge I could overcome. Before starting, I had taken a break from code because I had no idea where to even begin. I believed I would quickly burn out, or waste weeks only to eventually fail, thus validating my imposter syndrome.

I can’t remember exactly what let me get over this self-imposed block, but I do remember the thought process: bringing together all of my experience and knowledge of algorithms to that point, and a flash of inspiration about procedural generation which led me to start.


Polymat: Kitchen Grip Assistance is a silicone grip mat designed to tackle situations in the kitchen where the user cannot properly grip an item that would otherwise slide about.

For the first Unit 7 project we were asked to consider the concept of a multi-generational kitchen in the context of human-centred design. Minority groups, including disabled and elderly people, are often partly if not completely overlooked in design, with a great many products that promise to cater to their needs being ‘gadgety’ attempts to make a quick dollar.

demonstrates the progressive steps of cooking and washing with one arm
Alex demonstrating basic cooking and cleaning functions with the main use of one arm

Fewer than a quarter of people with physical disabilities are born with them. Working with Alex (in the images above), we explored how even a minor impairment (in his case, reduced ability on the left side of his body) can greatly affect the manipulation of kitchen equipment, in particular because being unable to stabilise an item leads to it sliding around.

I became interested in the idea of onset impairment, and of impairments that will eventually go away (e.g. a broken limb). The market is full of gadgets and products ranging from the genuinely innovative to the obstructive and absurd.

basin development models

I focused primarily on the kitchen sink, as washing dishes is a key part of food preparation, looking to create a responsive basin which provided variable resistance to pressure. The idea is that, where a sliding object becomes an issue, the user simply pushes harder and the basin grips the object.

modelling the silicone samples
Developing CAD models of the samples I wanted to test, I was able to use a two-stage moulding process at great speed

I needed a final appearance model to communicate the idea appropriately. The silicone samples served as proof of material properties but could not be cast at the right scale within time and budget constraints. I CNC-routed the model in one piece, with laser-cut polymethacrylate adding detail underneath, and applied a rubber spray to create the correct surface effect.

polymat model in situation
final appearance model shown in situ

This idea developed into a silicone mat with a variable-texture surface designed to provide a range of friction across as wide a range of items as possible. Many sample patterns were designed, 3D printed and cast in order to settle on the optimal pattern.

model topdown view
Second Final Presentation Board

A final ‘looks-like’ model was produced largely by CNC milling, laser cutting and painting with a silicone rubber; it was unfortunately deemed too costly to cast it in the target material.

Reflecting on the project, I wonder how imaginative the solution actually was. Visually striking, perhaps, but does it not simply add to the plethora of gadgets? Would users actually consider using it before or after impairment? Was there actually a change to the kitchen ‘system’? Did I play it safe?

This project especially was a chance to consider my position in relation to design; whilst I am more than happy with the outcome, I find it imperative to recognise what this product is not, and what future outcomes must try harder to incorporate.