
How to build AI applications users can trust

To work effectively, algorithms need user data, typically on an ongoing basis, to help refine and improve the experience. To get user data, you need users. And to get users, especially lasting users who trust you with their data, you need to provide options that suit their comfort levels now while still allowing them to adjust those choices in the future. In essence, to get user buy-in, you need a two-step approach: Let users know what data you want to collect and why, and give them control over the collection.

Step 1: Providing continuous transparency

The first step in finding the balance is to equip your users with knowledge. Users need to know what data is being collected and how that data will be used before they decide to engage with an application. Mounting pressure on the industry is already steering things in this direction: Apple recently announced privacy labels for all App Store applications, which give users greater awareness of what data is collected when they use an app. Microsoft’s CaptionBot, below, is a good example of how to give users an easy-to-understand overview of what’s happening with their data behind the scenes.

Above: Microsoft’s CaptionBot offers clear information about data storage, publication and usage as well as an easy-to-understand overview of the kinds of systems working behind the scenes to make the AI captioning tool work.

Health app Ada, below, is an example of how to avert user confusion over data collection choices.

Above: Health app Ada explains the logic behind its input selections at the outset, so users can understand how their inputs affect the application and its ability to perform the desired actions.

Not only does sharing this information upfront give users a sense of empowerment and help build trust in your experience over time, it also gives you an opportunity to help them understand how sharing their data can improve their experience, and how that experience will be diminished without it. Arming users with the knowledge of what happens when they share their data also gives them the tools to see how the exchange benefits them, bolstering their enthusiasm for the app.

In addition to these upfront details, it’s important to present users with information as they use the application. Sharing information about algorithm effectiveness (how likely the algorithm is to succeed at the task) and algorithm confidence (how certain the algorithm is in the results it produced) can make a big difference when it comes to user comfort in engaging with these technologies. And as we know, comfort plays a major part in adoption and engagement.

Consider the confidence ratings Microsoft offers in some of its products, below.

Above: When an algorithm is making a “best guess”, displaying a confidence rating (in the first image using Microsoft’s Bing Image Search, a rating between 0 and 1, and in the second from Microsoft’s Celebs Like Me, a percentage rating) helps users understand how much trust they should place in the outcomes of the algorithm.
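In practice, surfacing a rating like this can be lightweight. Here is a minimal sketch, assuming a generic prediction object with a `label` and a 0-to-1 `confidence` score (the names are illustrative, not any specific vendor API), that also hedges the wording as certainty drops:

```typescript
// A generic prediction result; field names are illustrative,
// not tied to any particular vendor's API.
interface Prediction {
  label: string;      // what the model thinks it sees
  confidence: number; // model-reported certainty, 0..1
}

// Render a caption that makes the algorithm's uncertainty explicit,
// switching to hedged language when confidence is low.
function describePrediction(p: Prediction): string {
  const pct = Math.round(p.confidence * 100);
  if (p.confidence >= 0.9) {
    return `${p.label} (${pct}% confident)`;
  }
  if (p.confidence >= 0.5) {
    return `Probably ${p.label} (${pct}% confident)`;
  }
  return `Possibly ${p.label} (low confidence: ${pct}%)`;
}

console.log(describePrediction({ label: "golden retriever", confidence: 0.97 }));
// -> "golden retriever (97% confident)"
```

The exact thresholds matter less than the principle: the number and the wording should both tell the user how much weight to give the result.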

Users should be given insight into some of the operations and mechanics, too. It’s important to acknowledge when the mechanisms are at work or “thinking”, when there’s been a hand-off from the algorithm to a human, or when data is being shared with third-party systems or stored for potential later use. Continually offering opportunities to build awareness and understanding of your application will lead to higher levels of trust in it. And the more users trust it, the more likely they are to continue engaging with it over time.
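One way to keep those moments visible is to model them as explicit, user-facing states rather than silent internals. The state names below are hypothetical; the point is that every transition produces something the interface must surface:

```typescript
// Hypothetical set of user-visible processing states. Each state
// carries just enough context to explain itself to the user.
type AssistantState =
  | { kind: "thinking" }                       // the algorithm is working
  | { kind: "handedOff"; agentName: string }   // a human has taken over
  | { kind: "sharingData"; recipient: string } // data is leaving the app
  | { kind: "storingData"; purpose: string };  // data is kept for later use

// Every state maps to a plain-language message shown in the UI,
// so no hand-off or data movement happens invisibly.
function statusMessage(state: AssistantState): string {
  switch (state.kind) {
    case "thinking":
      return "Working on it…";
    case "handedOff":
      return `You're now chatting with ${state.agentName}, a human agent.`;
    case "sharingData":
      return `Sending this to ${state.recipient} to complete your request.`;
    case "storingData":
      return `Saving this for ${state.purpose}. You can delete it anytime.`;
  }
}
```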

Step 2: Handing over control

Even when the benefits of an application are compelling enough for users to opt in, they don’t necessarily want to use AI all the time. There may be circumstances in which they want to withdraw from or limit their engagement with the technology. How can we empower them to choose how much AI they interact with at the moments that matter most? Again, a combination of upfront choices and semi-regular check-ins works well here.

When informing users about what data you’re collecting and how it’s being used, give them the chance to opt out of sharing certain types of data if the use case doesn’t meet their needs. Where possible, present them with a graduated series of options — what you get when you enable all data sharing versus some versus none — to allow them to choose the option that makes the most sense for them.
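As a sketch of what graduated options could look like under the hood (the tiers and capability lists here are invented for illustration), each sharing level maps to the functionality it unlocks, so the trade-off can be shown at the moment of choice:

```typescript
// Hypothetical tiers of data sharing a user can pick from.
type SharingLevel = "none" | "essential" | "full";

// Map each tier to what the user gets in return, so the app can
// show the trade-off explicitly instead of burying it in a policy.
const capabilities: Record<SharingLevel, string[]> = {
  none: ["Browse the catalog"],
  essential: ["Browse the catalog", "Order history", "Reorder favorites"],
  full: [
    "Browse the catalog",
    "Order history",
    "Reorder favorites",
    "Personalized recommendations",
    "Nearby pickup suggestions",
  ],
};

function describeChoice(level: SharingLevel): string {
  return `With "${level}" sharing you get: ${capabilities[level].join(", ")}.`;
}
```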

Consider the example below from food-ordering app Ritual.

Above: Popular food-ordering app Ritual allows users to opt out of sharing certain data and informs them of how opting out will impact the application’s functionality.

Whenever you add a new product feature, or a user engages with a feature for the first time, prompt them to review or change their level of data sharing. What may not have seemed relevant before could become compelling once a new use case is presented. And if a new type of data is being collected, prompt them again.
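A simple way to implement this re-prompting is versioned consent: bump a version number whenever a feature changes what data is collected, and compare it against the version the user last reviewed. A minimal sketch, with hypothetical names:

```typescript
// Hypothetical versioned-consent check: bump CURRENT_CONSENT_VERSION
// whenever a feature starts collecting a new type of data, so
// existing users are re-prompted instead of silently opted in.
const CURRENT_CONSENT_VERSION = 3;

interface UserConsent {
  version: number;              // version the user last reviewed
  acceptedCategories: string[]; // e.g. ["location", "orderHistory"]
}

function needsReprompt(consent: UserConsent): boolean {
  return consent.version < CURRENT_CONSENT_VERSION;
}

// On app start or on first use of a new feature:
function maybePromptForReview(consent: UserConsent): void {
  if (needsReprompt(consent)) {
    // Show the data-sharing review screen rather than assuming consent.
    console.log("We've added new features. Review what you share with us.");
  }
}
```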

One final way to offer control: Give users the chance to direct the application. This can mean simply checking in with your users from time to time about which features they like, which they don’t, and what they want from your application. Or, more importantly, it can be part of the application itself. Can users adjust the level of certain inputs to produce different results (e.g., weighting one input over another for a recommendation algorithm)? Can they go back a step or override certain aspects manually? Handing over the controls in as literal a sense as possible helps users feel empowered by the application instead of intimidated by it.
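The parenthetical above, letting users weight one input over another, might look something like this in practice. This is a sketch with made-up feature names, not a production recommender:

```typescript
// Hypothetical recommendation scorer whose input weights the user
// can adjust directly, e.g. via sliders in a settings screen.
interface Item {
  name: string;
  price: number;     // normalized 0..1, higher = cheaper
  proximity: number; // normalized 0..1, higher = closer
  rating: number;    // normalized 0..1, higher = better reviewed
}

type Weights = { price: number; proximity: number; rating: number };

function score(item: Item, w: Weights): number {
  return item.price * w.price + item.proximity * w.proximity + item.rating * w.rating;
}

function recommend(items: Item[], w: Weights): Item[] {
  // Sort best-first; the user controls what "best" means via the weights.
  return [...items].sort((a, b) => score(b, w) - score(a, w));
}

// A user who cares most about distance can say so explicitly:
const nearbyFirst: Weights = { price: 0.2, proximity: 0.6, rating: 0.2 };
```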

Youper’s AI therapy app provides a good example of how to offer users control.

Above: Youper’s AI therapy app doesn’t require users to set all parameters at the outset but instead offers up regular opportunities for them to refine their experience as they continue to engage with the application (and explains why it may help them to do so).
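That pattern of deferred, incremental refinement can be sketched simply: start from sensible defaults and periodically surface one adjustment prompt with a reason attached. The settings and cadence below are hypothetical, not Youper’s actual design:

```typescript
// Hypothetical progressive-refinement pattern: begin with defaults,
// then periodically invite the user to tune one setting at a time.
interface Preferences {
  checkInFrequency: "daily" | "weekly";
  tone: "gentle" | "direct";
}

const defaults: Preferences = { checkInFrequency: "daily", tone: "gentle" };

// Every fifth session, surface a single refinement prompt with a
// reason, rather than a wall of settings at signup.
function refinementPrompt(sessionCount: number): string | null {
  if (sessionCount > 0 && sessionCount % 5 === 0) {
    return "You've checked in 5 times. Want to adjust how often we check in? Matching your pace can make this more useful.";
  }
  return null;
}
```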

Every application is different, and every approach to empowering users will be a little different as a result. But when you offer transparency into how and why your system takes in the information it does, and you give consumers the chance to opt out of sharing certain pieces of information, you create space for trust. And when your users trust you, they’ll be more inclined to share the data you need to make your products and services come alive.

Jason Cottrell is Founder and CEO at Myplanet.

Erik von Stackelburg is CDO at Myplanet.
