
Building trust in the age of AI

Last updated on January 3, 2020


The jury is still out on whether AI is humanity’s friend or foe. It’s improving society in countless ways – making court cases fairer and saving lives by predicting when floods will happen – but the idea of using AI to supplement human judgement and decision-making still makes us feel uncomfortable. So what will it take for us to make peace with a technology with so much promise to make our lives better?

Why we distrust AI

Discrimination, opaque logic and dystopian narratives lead us to think sceptically about current uses of AI. But without addressing our trust issues, we are limiting the true potential of humanity’s most promising technology. Here are some of the main reasons we mistrust AI and what we need to do to overcome them.


Algorithmic discrimination

Sensationalist headlines about discriminatory algorithms make it hard to forget that AI is not neutral. Despite being run on machines, AI doesn’t provide a wholly objective truth – algorithms only represent “truths” based on the data they are given to analyze. And that data can be seriously flawed.

The representativeness of an algorithm’s training dataset poses a particularly large problem here. Clear problems emerge when algorithms base decision-making on data which doesn’t completely reflect the real world, as demonstrated by a famous MIT project testing the effectiveness of facial recognition technology on people of different ethnicities. Every tool they tested was found to be far less effective at recognizing black women than white men, because the algorithms behind them were trained using predominantly white faces. Racial discrimination was embedded in the AI from the very start.
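The kind of audit that surfaces this gap can be sketched in a few lines: score a model’s predictions separately for each demographic group and compare. The group names and numbers below are purely illustrative, not the MIT study’s actual figures.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute recognition accuracy separately for each demographic group.

    `records` is a list of (group, predicted_label, true_label) tuples –
    a stand-in for the per-image results of a face-recognition benchmark.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy results illustrating the kind of disparity an audit can reveal.
results = (
    [("lighter-skinned men", "match", "match")] * 99
    + [("lighter-skinned men", "no-match", "match")] * 1
    + [("darker-skinned women", "match", "match")] * 65
    + [("darker-skinned women", "no-match", "match")] * 35
)

print(accuracy_by_group(results))
# {'lighter-skinned men': 0.99, 'darker-skinned women': 0.65}
```

A model that looks accurate on aggregate can still fail badly for one group, which is why per-group evaluation, not overall accuracy, is the relevant test.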

But even if the training data is representative, historical biases present within society can be trained into an algorithm. Take Amazon as an example: the company spent four years developing a recruitment algorithm to facilitate selecting the best candidates. But because the algorithm was trained using past hires, it undermined the recruitment process by encoding a bias in favour of men for technical roles.

A well-designed AI system will minimize potential biases by carefully considering what data is used to train the algorithm. But failing to design algorithms which adequately account for these sources of bias will only perpetuate mistrust – particularly within communities who are more frequently affected by them.

Lack of transparency

The reason why this bias is so hard to detect stems from a second problem with AI: transparency. Due to the complexity of algorithmic calculations and the difficulty in explaining their logic, AI algorithms are often described as ‘black boxes’. In relation to discrimination, this proves especially problematic, as it is hard to determine whether an algorithm is fair until something has gone wrong.

Algorithms can consider far more factors than humans and come up with answers which seem counter-intuitive, but are in fact more effective. This was evident in the matchup between AlphaGo, an AI created by DeepMind for playing the game Go, and the world’s top-ranked Go player. Though many of the moves chosen by the AI were unexpected, they ultimately proved more effective, with AlphaGo winning every match.

This cuts to the heart of the matter: even if we have confidence in the fairness of an algorithm’s output, implementing decisions that we don’t understand ourselves makes us uncomfortable. Without an indication of how and why a decision was made, it is difficult to trust outputs which seem counter-intuitive.

Future dystopias

Perhaps the greatest fear surrounding AI is the threat it could pose when it begins to outsmart humans. Current AI has narrow intelligence, in the sense that it is extremely good at performing one assigned task. Looking to the future, there is fear over the creation of ‘artificial general intelligence’ – AI which will be able to outperform humans in any number of tasks.

The fear of hyper-intelligent AI stems from the alignment problem – how can we ensure that these intelligent systems will still do what we want them to? Cinematic dystopias of human-like machines wielding guns seem an unlikely future. Instead, the risk stems from AI finding the most effective solution to a problem, even if this disregards human life. It is not hard to imagine an AI programmed to solve global warming wiping out humanity as a step toward achieving it.

Learning to trust AI

Though such a future seems distant, the existential risk around AI leaves us cautious to embrace further developments. So how can we overcome this mistrust of AI? What can be done to turn these black boxes into fair, understandable and safe systems? And will we ever fully accept the technology?


Tackling discrimination

The first step is to overcome the algorithmic discrimination guiding AI. There is no simple solution for alleviating discrimination, but a number of strategies can help to mitigate the problem. Establishing standards and practices surrounding AI can help ensure that representative data which controls for historical biases is being used. In parallel, having ethicists work with programmers and provide oversight can help prevent simple mistakes from being encoded. In turn, taking these steps will help reduce negative publicity and hopefully mitigate societal worries over unfair AI.


Making AI explainable

We also need to provide explanations for AI’s decisions. This presents a larger problem, as the workings of an algorithm are highly opaque. Even if the exact logic were provided to a person, it is highly unlikely that they would be able to make sense of it.

But to build trust, we need to make the decision-making process understandable, not just transparent. ‘Counterfactual explanation’ offers one option for making AI explainable: instead of showing users the logic behind an algorithm, it outlines the minimum feature(s) that would have to change to receive a different response. Though this solution is far from perfect, providing a rationale, and at least part of the reasoning behind a decision, makes AI less mysterious.
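The idea can be sketched with a toy model. Here a hypothetical loan-approval rule stands in for an opaque algorithm, and a brute-force search finds the smallest set of feature changes that would flip its decision – the features, thresholds and values are all invented for illustration.

```python
from itertools import combinations

def decide(applicant):
    """A toy loan-approval rule standing in for an opaque model."""
    return applicant["income"] >= 40000 and applicant["debts"] <= 10000

def counterfactual(applicant, candidate_changes):
    """Find the smallest set of feature changes that flips the decision.

    `candidate_changes` maps a feature name to an alternative value.
    Tries single-feature changes first, then pairs, and so on.
    """
    features = list(candidate_changes)
    for size in range(1, len(features) + 1):
        for subset in combinations(features, size):
            modified = dict(applicant)
            for f in subset:
                modified[f] = candidate_changes[f]
            if decide(modified) != decide(applicant):
                return {f: candidate_changes[f] for f in subset}
    return None  # no combination of offered changes flips the outcome

rejected = {"income": 35000, "debts": 12000}
print(counterfactual(rejected, {"income": 42000, "debts": 8000}))
# {'income': 42000, 'debts': 8000}
```

The returned dictionary is the explanation: “had your income been 42,000 and your debts 8,000, you would have been approved.” The user learns what mattered without ever seeing the model’s internals.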

Time and direct experience

Often, mistrust of AI does not stem from a rational concern over potential harm, with even the most innocuous forms of AI being met with hesitation. Automating timesheets, for example, can bring huge benefits to an organization and presents little risk, yet there can still be resistance to adopting the unknown. Building trust in AI requires a change of mindset – and this won’t happen overnight.

Time is perhaps the only factor which will truly lead to trust in the age of AI. As adoption of AI spreads and becomes more present in our daily lives, dystopian predictions will be outweighed by a lived reality of positive experiences. Adopting fair and understandable AI will pave the way for a future made easier by technology – and hopefully that change won’t be too long in the making.


