Using a color scheme to signal security/privacy exposure.
In a quick impromptu meeting with @ned about branding, user education, UI/UX, etc., we came up with a potentially novel way of educating our users while simultaneously helping them understand who we are and what we care about, without forcing them to understand the nitty-gritty details of privacy and security. This started as we were looking at how other apps and companies convey this to their users, to see if we could learn something from them.
It turns out, we think we need to do something completely new, because what is currently done isn’t enough.
The basic idea is that we dynamically change the color scheme of the UI based on how much information the user is exposing to the world, shifting gradually from dark to light: dark means they're not exposing much, while light means the opposite.
How does an application convey its security/privacy stance to the end user? Most of them (arguably all of them) simply tell the user they're secure/private, then hopefully provide documentation justifying that claim with technical implementation details. The end user is thus required to either trust these assertions, defer to a technically savvy person that they're true, or become technically savvy enough to evaluate them on their own. Realistically, these groups shrink drastically in the order I've just listed them. Most people just trust the branding, and wait to hear of an exploit or a report that they've been duped.
It is quite difficult to quickly, easily, and thoroughly show a technically ignorant user the level of security/privacy of an app they are using.
As mentioned in the intro, we simply change the color scheme as a function of how much information the user exposes, from dark to light. This performs multiple functions simultaneously:
- First off, they immediately understand the weight of various options with respect to their overall exposure without having to understand why or how.
- It also provides immediate context for their current session with the app, so they can properly assess how they would like to behave with those they are interacting with.
- It clearly shows we care about their ability to think about these things, and act according to their personal judgement and not what we think they should be doing.
- The level of trust they put into us is extended, but as always, the amount of trust they’re required to give us is minimized. We don’t hold your data, but it’s our job to provide the tools you need to properly reason about holding it yourself.
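To make the mechanism concrete, here is a minimal sketch of the dark-to-light mapping. The function name, theme colors, and the idea of a single normalized "exposure score" are all illustrative assumptions, not a settled design:

```typescript
// Hypothetical sketch: map a normalized exposure score (0 = exposing
// nothing, 1 = fully exposed) to a background color interpolated between
// a dark and a light theme. Color values are illustrative placeholders.
function exposureToColor(score: number): string {
  const dark = { r: 0x1a, g: 0x1a, b: 0x2e };   // near-black "low exposure" theme
  const light = { r: 0xf5, g: 0xf5, b: 0xff };  // near-white "high exposure" theme
  const t = Math.min(1, Math.max(0, score));    // clamp score to [0, 1]
  // Linearly interpolate each RGB channel between the two endpoints.
  const mix = (a: number, b: number) => Math.round(a + (b - a) * t);
  const toHex = (n: number) => n.toString(16).padStart(2, "0");
  return (
    "#" +
    toHex(mix(dark.r, light.r)) +
    toHex(mix(dark.g, light.g)) +
    toHex(mix(dark.b, light.b))
  );
}
```

A real implementation would likely interpolate an entire theme (text, accents, contrast-checked pairs) rather than a single background color, but the principle is the same.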
Our app is complicated, this space is complicated, understanding how to navigate it is complicated. Implementing this will be complicated, but it is worth it, imo.
How We Do It
This can start with our website, which would immediately inform the user what we care about, while simultaneously teaching them how much information they're (probably) exposing about themselves when they visit websites. When you visit a website through your browser, you give up a tremendous amount of information about yourself automatically (see http://webkay.robinlinus.com/ for an example of your current session). Imagine if our website read this information, changed the color scheme from dark to light based on the amount of personal information it captured, and then suggested small changes the visitor could make to protect themselves. They could then watch the website dynamically change as they decrease their exposure.
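The website idea could be sketched as tallying a handful of session signals (the kind webkay demonstrates) into one exposure score that drives the theme. The signal names, weights, and scoring rule below are assumptions for illustration only:

```typescript
// Hypothetical sketch: combine browser-session signals into a single
// normalized exposure score. Which signals to include, and their weights,
// are illustrative assumptions.
interface SessionSignal {
  name: string;     // e.g. "cookies enabled", "geolocation granted"
  weight: number;   // how strongly this signal raises exposure
  exposed: boolean; // whether this session actually exposes it
}

// Weighted fraction of exposed signals, in [0, 1].
function exposureScore(signals: SessionSignal[]): number {
  const total = signals.reduce((sum, s) => sum + s.weight, 0);
  if (total === 0) return 0;
  const exposed = signals
    .filter((s) => s.exposed)
    .reduce((sum, s) => sum + s.weight, 0);
  return exposed / total;
}

// In the browser, signals would come from real APIs, for example:
//   { name: "cookies enabled", weight: 1, exposed: navigator.cookieEnabled }
// and the resulting score would drive the dark-to-light theme.
```

Keeping the scoring function pure makes it easy to test and to reuse between the website and the app, even if the signal sources differ.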
This becomes a teaching tool for anyone who wants to learn how to protect themselves on the internet, and it HAS NOTHING TO DO WITH OUR APP, BUT EVERYTHING TO DO WITH WHO WE ARE. I'm not aware of anything that uses a visualization this way as a teaching tool, and this experience teaches the broader audience that we care about security and privacy without just saying it, like every other secure app does.
If I were to personally come across a communication app that taught me how to better conduct myself through simple visualizations when I first visited their webpage, you'd be damn sure I'd be more interested in their product.
So we’ve convinced the user on the webpage that we care about their security/privacy, and shown them some things about browser sessions and the information they pass around. Now how does that transfer into the app? It transfers through the simple, visual experience of understanding that the settings you choose have a direct and serious consequence on how much someone can find out about you.
As you’ve seen in the previous townhall, we’re working a lot on the profile settings and the UX of personal identity within Status. An important feature to many who use Status is that it doesn’t require a phone number to get an account and start using it. As you add profile features to your account, you increase the surface area of information about yourself, which can be reflected in the color scheme: dark for the (hopefully soon built-in) anonymous user account, light for a fully fleshed-out profile with a picture, ENS username, and exposed wallet.
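Inside the app, the same idea might reduce to scoring how much of a profile is filled in. The field names below are illustrative assumptions, not the actual Status profile model:

```typescript
// Hypothetical sketch: derive an exposure level from which identifying
// profile features an account has filled in. Field names are assumptions,
// not the real Status data model.
interface Profile {
  profilePicture?: string;
  ensUsername?: string;
  walletExposed?: boolean;
}

// Fraction of identifying features present: an empty (anonymous) profile
// scores 0, a fully fleshed-out profile scores 1.
function profileExposure(p: Profile): number {
  const features = [
    Boolean(p.profilePicture),
    Boolean(p.ensUsername),
    Boolean(p.walletExposed),
  ];
  return features.filter(Boolean).length / features.length;
}
```

In practice the features would carry different weights (an exposed wallet reveals far more than a picture), but even this flat version is enough to drive the dark-to-light theme per account.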
As people join via various accounts they hold within Status, this dynamic color scheme will immediately inform them of the relative security/privacy profile they are currently using. This could help them realize they’re using a profile in the wrong context, and that they need to switch. There are countless “user stories” one can come up with here.
If we can pull this off, I feel as though we could set an entirely new standard for conveying to users the security/privacy tradeoffs their choices entail. It will be difficult, because quantifying and implementing all of this is non-trivial.
I’d love to hear your thoughts, opinions, dissents, problems, kudos, etc.