The use of artificial intelligence (AI) in tech is set to become ubiquitous over the next decade, but getting consumers to embrace products that use it will be a major issue for startups, with nearly three-quarters of Australians suspicious of AI.
The only thing working in favour of startups using AI is that most people don’t realise it’s already embedded in the software they use.
A University of Queensland study found that 72% of people don’t trust it, with Australians leading the pack.
Trust experts from UQ Business School led the study in partnership with KPMG, surveying more than 6000 people in Australia, the US, Canada, Germany and the UK to reveal attitudes about AI.
The level of trust is influenced by the specific AI application.
Distrust is higher for AI used in human resources than for AI used in healthcare.
Overall, most people say they are unwilling or ambivalent about trusting AI in healthcare (63%) and HR (77%). While there are no significant differences between countries in willingness to trust AI systems in general, Australians are less trusting of HR AI than US citizens, and less trusting of healthcare AI than Canadians.
People are more willing to rely on Healthcare AI (35%) and AI systems in general (30%) than HR AI (25%).
Study co-author Professor Nicole Gillespie said trust in AI was low across the five countries, with one nation particularly concerned about its effect on employment.
“Australians are especially mistrusting of AI when it comes to its impact on jobs, with 61% believing AI will eliminate more jobs than it creates, versus 47% overall,” she said.
The research identified key areas to build trust and acceptance of AI, including strengthening current regulations and laws, and increasing understanding of AI.
The survey found that people believe most organisations use AI for financial reasons, to cut labour costs rather than to benefit society. Only one in five believe it will create more jobs than it eliminates.
But the good news is that there’s confidence in universities and research institutions to develop, use and govern AI in the public’s best interests.
An overwhelming 95% of respondents expected organisations to uphold ethical principles of AI. Prof Gillespie said that for people to embrace AI, organisations must build trust through ethical AI practices, including increased data privacy, human oversight, transparency, fairness and accountability.
The research showed that distrust came from low awareness and understanding of when and how AI technology was used.
“For example, our study found while 76% of people report using social media, 59% were unaware that social media uses AI,” Prof Gillespie said.
“Putting in place mechanisms that reassure the community that AI is being developed and used responsibly, such as AI ethical review boards, and openly discussing how AI technologies impact the community, is vital in building trust.”
The study surveyed 1200 Australians, around two-thirds (66%) of whom were aged between 18 and 55.
More people support than oppose the development and use of AI, but support varies by application: 47% overall, led by healthcare AI at 46% and falling to 34% for HR AI. Notably, a significant proportion of people are ambivalent about AI development and use (34-36%), or oppose it (18-30%), regardless of the application.
Prof Gillespie is the KPMG Chair of Organisational Trust and is currently integrating the study’s findings on building trustworthy AI into the new UQ Master of Business Analytics program.
The full research report is available online.