Understanding transparency and customer control in AI-driven systems
AI design efforts must provide richer contextual information to the customer regarding financial outcomes. In finance, ‘smart’ systems are increasingly applied to speed up workflows, improve fraud detection and increase the accuracy of decision making. Neural networks decide whether an individual is a risky prospect for a loan, or what interest rate or premium should be charged for a product. A customer does not get to decide whether they qualify, and a bare ‘approved’ or ‘declined’ answer, or a ‘magic’ score, is not enough. Such a user experience leaves someone feeling judged, and likely judged unfairly. Worse, it risks that customer never coming back, or taking their business elsewhere. More transparent intelligent systems are needed. Customers want to be in control, all the more so when sensitive decisions about credit or payments are being made by autonomous AI systems.
The video game industry encountered this problem long before explainable AI became a common goal. Designers realised early on that a game AI could be too good; ‘perfect’ came to mean perfectly challenging rather than a perfect player. But before long, even the most carefully calibrated systems, built to emulate human opponents or to be purposefully imprecise, were also being regarded as unfair. Indeed, many of the video game releases most notable for their negative market reception up until the mid-2000s had their failure attributed at least in part to poor AI. With so many predecessors to learn from, how were video game designers still getting it so wrong?
Over the years, the AI agents in games had become increasingly complex. It was no longer a simple matter of two-dimensional movement or responding to a handful of inputs. Designers eventually realised that these claims of unfairness arose because players no longer felt they knew how the AI worked: they did not know what it was going to do, so whatever it did felt unfair when it was not in their favour.
Efforts then turned to exposing these inner workings to the player: an alert indicator here, a hit counter there, some logs that print out agent actions. These methods let the player infer how the agent worked and what to expect when it responded to certain inputs. A black-box intelligent system could then be regarded with the same concrete behavioural expectations as the rule-based agents of yesteryear, and this became standard practice for building player trust in increasingly complex systems. Trust that would persist even when the AI did something not in the player’s favour.
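As a minimal sketch of what ‘exposing the inner workings’ can look like, consider a hypothetical guard agent that announces every state transition to the player through an on-screen alert and a readable action log. The agent, states and thresholds here are illustrative, not drawn from any particular game:

```python
from dataclasses import dataclass, field
from enum import Enum

class AgentState(Enum):
    PATROL = "patrolling"
    ALERT = "alerted"
    ATTACK = "attacking"

@dataclass
class GuardAgent:
    state: AgentState = AgentState.PATROL
    action_log: list = field(default_factory=list)

    def observe(self, player_visible: bool, player_distance: float) -> None:
        # Simple, inspectable rules decide the next state.
        if player_visible and player_distance < 5.0:
            self._transition(AgentState.ATTACK, "player spotted at close range")
        elif player_visible:
            self._transition(AgentState.ALERT, "movement seen in the distance")
        else:
            self._transition(AgentState.PATROL, "lost sight of the player")

    def _transition(self, new_state: AgentState, reason: str) -> None:
        if new_state is not self.state:
            # Surface the change to the player: a log entry plus an on-screen alert.
            self.action_log.append(f"{self.state.value} -> {new_state.value}: {reason}")
            print(f"[!] Guard is now {new_state.value} ({reason})")
            self.state = new_state
```

Because every transition is announced along with its reason, a player who loses to this agent can reconstruct why. Its behaviour stops feeling arbitrary even when it is not in their favour.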
These lessons can be used to address the feelings of unfairness users have experienced with systems elsewhere, including in finance. Design effort must go into having a system provide richer contextual information: letting a customer know which of their attributes multiplied their calculated risk score, or which answer they gave had them charged more or less than they otherwise would have been. In some cases this information matters more to them than the outcome itself, adding value to that customer’s interaction with a financial institution. What if, instead of a binary response, the system said something like “your age under N multiplied a medium-risk rating you obtained for having no existing credit score” or “your request for over N amount doubled your risk score”? A customer can work with that; they know what needs to be done. Whether or not they can change it, what happened came of something they did or are, and they can see it. A customer will feel far less indignant.
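One way to produce such messages is to have the scoring function accumulate human-readable reasons alongside the score itself. The sketch below assumes a toy multiplicative risk model; the attributes, multipliers and thresholds (BASE_SCORE, the age cut-off, AMOUNT_LIMIT) are invented for illustration and do not reflect any real institution’s model:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    has_credit_history: bool
    amount_requested: float

BASE_SCORE = 1.0       # illustrative starting score
AMOUNT_LIMIT = 10_000  # illustrative threshold

def score_with_reasons(applicant):
    """Return a risk score together with the reasons that shaped it."""
    score = BASE_SCORE
    reasons = []
    if not applicant.has_credit_history:
        score *= 1.5
        reasons.append("having no existing credit score gave you a medium-risk rating")
    if applicant.age < 25:
        score *= 1.3
        reasons.append("your age under 25 multiplied that medium-risk rating")
    if applicant.amount_requested > AMOUNT_LIMIT:
        score *= 2.0
        reasons.append(f"your request for over {AMOUNT_LIMIT:,} doubled your risk score")
    return score, reasons

score, reasons = score_with_reasons(
    Applicant(age=22, has_credit_history=False, amount_requested=15_000))
print(f"Risk score: {score:.2f}")
for reason in reasons:
    print(f"- {reason}")
```

The design change is small but decisive: the reasons travel with the decision, rather than being discarded once a bare approve/decline is produced.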
Financial institutions may find that the current plan serves the technical requirements but not the experiential ones. If so, it may be worth opting for a different approach: investing the time it takes to codify the factors already used in human decision-making into an expert system, rather than simply training a neural network. A layperson probably does not know how a neural network works, but many already know of their ability to be unfair. Their use in finance only builds on the inherent distrust many already harbour for the sector.
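To make the contrast concrete, here is one possible shape for such an expert system: decision factors transcribed into named, auditable rules. The rules and field names below are hypothetical, standing in for whatever a real underwriting checklist contains:

```python
from typing import Callable, NamedTuple

class Rule(NamedTuple):
    name: str
    condition: Callable[[dict], bool]
    conclusion: str

# Hypothetical rules transcribed from a human underwriter's checklist.
RULES = [
    Rule("thin_file",
         lambda f: not f["has_credit_history"],
         "refer to manual review: no credit history on file"),
    Rule("high_exposure",
         lambda f: f["amount_requested"] > 5 * f["monthly_income"],
         "decline: requested amount exceeds five months' income"),
    Rule("stable_employment",
         lambda f: f["years_employed"] >= 2,
         "approve: employment history meets the stability requirement"),
]

def evaluate(facts: dict) -> list:
    # Every conclusion traces back to a rule a human wrote and can read,
    # which is exactly what a trained network cannot offer.
    return [f"{rule.name}: {rule.conclusion}"
            for rule in RULES if rule.condition(facts)]

print(evaluate({"has_credit_history": False,
                "amount_requested": 20_000,
                "monthly_income": 3_000,
                "years_employed": 3}))
```

Such a system trades some predictive power for legibility: each outcome can be explained to the customer in the same terms a human decision-maker would have used.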
Executives, developers and designers need to ask: “what is it that a customer is trying to get out of this service [that we plan to make intelligent]?” This is always an important question (human-computer interaction was an active field long before AI resurfaced), but doubly so given the tendency among developers to view a smart system as a “train once, use forever” solution. You want to get it right from the start.

Mars Geldard is a computing student from Tasmania, Australia, who specialises in data science. She recently co-presented at an AI conference in San Francisco (US) on how the needs that drove AI advancement in game development map to similar problems in the real world. The opinions expressed in this article are strictly personal to the writer.