I was recently asked to help out on a new software project related to security. I was told the problem was that “users don’t understand.” I guessed that wasn’t the real issue.
One message the team heard consistently was, “Why is this a ‘risk event’ and not that?” When we dug into the feelings behind the users’ comments, we learned they wanted more confidence that the software was doing the right thing; initially, they simply didn’t trust it.
How do we instill trust in the end user? On this project, the deployment team gets the system up and running and then spends a large portion of their time explaining what is happening behind the scenes.
Why not incorporate the explanations directly into the software? Historically, the software was like the Wizard of Oz, “pay no attention to the man behind the curtain” – in other words, the team designed and delivered a user experience that was deliberately and impenetrably opaque.
The researchers and developers on the project are incredibly smart, and they have invested thousands of hours creating intelligent algorithms, so it’s no wonder they repeatedly say, “We want all of this to be automatic.”
Automation is good, but only once the user trusts it.
A historically similar situation is spam filtering. In the beginning, spam filters had a habit of false positives and frequent misses. They eventually got better – never perfect, but better. Two factors drove the improvement: better algorithms, and giving the user some control to mark messages as spam or not-spam.
For this project, my advice was to expose some of the software intelligence. Let the user understand why the algorithms identified a risk, or why they dismissed it. Let the user contribute their own assessments by promoting or demoting a risk event. Finally, let the user hide it all when they are not interested – say, once they start to trust the system!
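As a minimal sketch of what that might look like in code (all names here are hypothetical illustrations, not taken from the actual project), a risk event could carry the human-readable reasons that triggered it alongside its score, and a user's promote/demote actions could nudge a per-rule weight:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEvent:
    # Hypothetical risk event: carries the reasons that triggered it,
    # so the UI can show *why* something was flagged, not just *that* it was.
    description: str
    base_score: float
    reasons: list  # human-readable explanations from the rules that fired

@dataclass
class RiskAssessor:
    # Per-rule weights, nudged by user feedback (promote/demote),
    # analogous to marking messages as spam or not-spam.
    rule_weights: dict = field(default_factory=dict)

    def score(self, event: RiskEvent, rule: str) -> float:
        # Effective score = base score scaled by the user-adjusted rule weight.
        return event.base_score * self.rule_weights.get(rule, 1.0)

    def promote(self, rule: str, step: float = 0.1) -> None:
        # User says "this really is a risk" -> weight the rule up.
        self.rule_weights[rule] = self.rule_weights.get(rule, 1.0) + step

    def demote(self, rule: str, step: float = 0.1) -> None:
        # User says "this is not a risk" -> weight the rule down (floor at 0).
        self.rule_weights[rule] = max(0.0, self.rule_weights.get(rule, 1.0) - step)

# Usage: show the explanation, then let the user push back.
assessor = RiskAssessor()
event = RiskEvent("Login from new country", 0.8,
                  reasons=["IP geolocation changed", "new device fingerprint"])
print(event.reasons)                    # expose *why* it was flagged
assessor.demote("geo_change")           # user disagrees with this rule
print(assessor.score(event, "geo_change"))
```

The point of the sketch is the shape, not the arithmetic: explanations travel with the event, and the user's judgment feeds back into future scoring instead of disappearing into a support ticket.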