Is AI really about trust? 

In less than 12 months, the discourse surrounding artificial intelligence (AI) has shifted from speculative predictions about its ascent to a palpable and urgent concern among industry leaders.  

OpenAI’s CEO, Sam Altman, has recently been more vocal about his unease regarding the remarkable capabilities and implications of his company’s own technology. This heightened awareness and concern within Silicon Valley reflects the accelerating development of AI and its revolutionary potential. 

A pivotal hearing before Congress on May 16, 2023, focused on the pressing issue of regulating AI systems. Over the course of four hours, key stakeholders from government, industry, and academia engaged in a discussion, shedding light on the potential risks and benefits of AI technology and its impact on society. The hearing emphasised the need to strike a balance between innovation and accountability, highlighting the importance of ethical guidelines, industry self-regulation, and government oversight to safeguard against potential harm caused by unchecked AI systems. 

Interestingly, Mira Murati, OpenAI’s CTO and the mastermind behind the rapidly adopted ChatGPT, was not called to testify. As an observer of AI development since 2017, and particularly attentive since the launch of ChatGPT in November 2022, I’ve come to ponder the question of trust when it comes to AI. Traditionally, trust in the tech context has been framed as the belief that a system will perform as expected without causing harm or acting in an unpredictable or uncontrolled manner.  

The concept of trust in technology has been extensively studied. One such study, “Trust in Technology: A Meta-Analysis of Empirical Findings”, conducted by Rainer K. Silbereisen and Jochen Peter in 2009, analysed the factors that influence trust in technology, drawing on 81 empirical studies that examined trust in various types of technology, including mobile phones, internet-based systems, and automated systems. 

The study revealed that perceived usefulness and perceived ease of use were the two most significant factors influencing trust in technology. Other factors, such as privacy and security concerns and social influence, were also found to be significant.  

Notably, none of these studies explored the impact of trust between humans in a technological context. 

Trust in a technological context is built when a system’s actions consistently and transparently align with what its algorithms are designed to do. During the congressional hearing, Altman proposed the establishment of a new independent agency tasked with licensing “powerful” AI models. If “trust is the glue of life,” as Stephen Covey posited in his 1989 book “The 7 Habits of Highly Effective People”, is this enough glue? 

Technology does not create itself; it is designed and operated by humans. Consequently, issues like bias, fairness, transparency, and accountability in AI systems are reflections of our humanity, with all its flaws, enriching and complicating the human experience. 

Kai-Fu Lee, former Google executive and co-author of “AI 2041: Ten Visions for Our Future,” eloquently summarises the essence of artificial intelligence: “Artificial intelligence is the elucidation of the human learning process, the quantification of the human thinking process, the explication of human behaviour, and the understanding of what makes intelligence possible. It is humankind’s final step to understand themselves.” 

Thus, we must be prepared to introspect and accept all aspects of our human selves as we navigate the legal and governance frameworks surrounding AI. Earlier this year, Murati acknowledged that numerous challenging problems remain to be addressed, such as ensuring the model performs as intended and aligns with human intentions, ultimately serving humanity. 

I asked ChatGPT to provide a thoughtful closing line for this article, and this is what it said: “As we embark on this intricate journey of AI, grappling with the complexities of trust and human implications, one thing becomes clear: The true measure of AI’s success lies not only in its technological prowess but in its ability to reflect and augment the best of our human nature, fostering a symbiotic relationship between humans and machines.” 

And what a mammoth exercise in human trust that is. 


Anna believes that legal innovation is invigorating, change is energising and efficiency will never go out of fashion. Starting out at a major Australian law firm, she has spent the majority of her legal career in-house, working in the banking, automotive and cosmetics industries, before starting her own multi-dimensional consultancy, Anna Lozynski Advisory. She also sits on various advisory boards and is a co-founder of The Mindful Lawyers and inCite LegalTech. An award-winning transformer, recognised as one of APAC’s Top 10 Innovative Lawyers by the Financial Times, Anna is a sought-after commentator, speaker & brand ambassador both domestically and internationally.
