Data bias, deep fakes, copyright issues – can ChatGPT be responsibly used in business? 

Cutting through the hype, think through the ethical implications and apply established best practices before using generative AI in the workplace, writes Frank Buytendijk. 

ChatGPT is one of the fastest-growing technologies in history, and the ethical issues surrounding it are multiplying just as quickly, from bias in training data and copyright disputes to undesirable and offensive responses. Can it actually be used ethically in the workplace?

As we go through this journey of discovery, don’t stop asking that question: we need to learn through experience while regulation and best practices play catch-up.

We’re used to interacting with others and with the world through technology. You’re reading this article on a device. We talk to each other over Teams or Zoom calls. But digital interactive technologies, often powered by AI, introduce a fundamentally new dynamic between people and technology: interacting with the technology itself.

Socially, legally and practically, ChatGPT is new territory that raises all kinds of questions. What do we believe is good and bad about technology becoming an active participant in our businesses and in society? The answer may vary greatly by country, culture or company. We’ve already seen many organisations move to ban it.

In the short time ChatGPT has been on the market, several issues have come to light. ChatGPT has threatened people, “confessed” to spying on others through webcams and tried to convince people it was the year 2022 instead of 2023. The technology has had an almost immediate impact on daily life. Many school students, for example, now have to write essays by hand again to prevent its misuse.

Despite all their shortcomings, ChatGPT and other generative AI applications are an amazing set of innovations, and amazement quickly leads to hype and getting carried away. It’s easy to forget that well-established digital ethics best practices already apply, so take note if you’re considering using the technology in your organisation today.

Be clear what comes from a machine vs a human 

DJ and music producer David Guetta used generative AI to create lyrics in the style of Eminem. He then used other forms of the technology to generate a rapped version of those lyrics in the voice of Eminem — who was not involved in any part of the process.  

In business, the step from generating speeches to generating a video of a fake version of the CEO delivering that speech isn’t far away. Doing so in a personalised style could even be seen as a plus, starting with birthday wishes for each employee.

From there it’s a slippery slope, taking already all-too-present misinformation to yet another level. After all, similar creations by other generative AI systems have already been used to defraud and extort people at scale. And these are only one-directional examples; they don’t even involve interactivity.

Until now, it has been easy to detect a chatbot at work, but generative AI brings another level of sophistication to chatbots, creating an ethical obligation to make sure people know they are dealing with technology.

Make sure your staff, customers and citizens know they’re interacting with a machine by clearly labelling the conversation using notifications in text, watermarks in images or identification through voice. It’s important to prevent any appearance of deception. 
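
In practice, that labelling can be as simple as attaching a standing disclosure to every machine-generated reply. The sketch below is a minimal illustration in Python; the function name and message format are hypothetical, not any vendor’s API.

```python
# Minimal sketch: attach an explicit AI disclosure to every chatbot reply.
# The message format and field names here are illustrative, not a vendor API.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses are generated automatically and may contain errors."
)

def label_reply(reply_text: str) -> dict:
    """Wrap a generated reply in a message that carries a machine-origin label."""
    return {
        "role": "assistant",
        "ai_generated": True,          # machine-readable flag for logs and audits
        "disclosure": AI_DISCLOSURE,   # human-readable notice shown in the UI
        "content": reply_text,
    }

message = label_reply("Happy birthday! Here is a summary of your leave balance ...")
print(message["disclosure"])
print(message["content"])
```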

Be aware of bias and trustworthiness issues 

Bias is a problem in training any type of algorithm on data sets, and ChatGPT is no exception. The technology was politicised almost immediately: accused of being left-wing or right-wing, “woke” or “anti-woke,” or biased on any other social issue. Gender and racial biases, for example, are very common.

ChatGPT acts on patterns in the data it was fed. It doesn’t truly fathom linguistic content, but ingests what essentially are parameters, attributes and signals, and responds in calculated ways. Left uncontrolled or unchecked, such responses may contain biases that offend different audiences in different ways.

Do your due diligence, despite the lack of mature tooling in the market: put processes and guardrails in place to track uncontrolled bias and other trustworthiness issues.
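
One lightweight guardrail, assuming no better tooling is available, is a counterfactual probe: send the model prompts that differ only in a demographic detail and flag responses whose tone diverges. The sketch below is illustrative only; the prompt template, names, word lists and `generate` stub are placeholders for your own model call and review process.

```python
# Illustrative counterfactual bias probe: issue prompts that differ only in a
# demographic detail, score the tone of each response with a crude keyword
# count, and flag large gaps for human review. All names, word lists and the
# `generate` stub are placeholders, not a real bias-testing framework.

POSITIVE = {"excellent", "strong", "reliable", "skilled", "outstanding"}
NEGATIVE = {"weak", "unreliable", "poor", "difficult", "underperforming"}

def generate(prompt: str) -> str:
    # Placeholder: replace with your actual model or API call.
    return "A strong, reliable engineer who delivers outstanding work."

def tone_score(text: str) -> int:
    words = {w.strip(".,!").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

def probe(template: str, variants: list[str]) -> dict[str, int]:
    """Score one response per variant; divergent scores suggest uncontrolled bias."""
    return {v: tone_score(generate(template.format(name=v))) for v in variants}

scores = probe("Write a one-line performance review for {name}, an engineer.",
               ["James", "Aisha", "Wei", "Maria"])
if max(scores.values()) - min(scores.values()) > 1:   # arbitrary review threshold
    print("Flag for human review:", scores)
```

A real probe would use proper similarity or sentiment scoring and far larger prompt sets, but even a crude check like this makes uncontrolled bias something you look for rather than something you discover in production.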

Address privacy and security concerns 

ChatGPT is trained before deployment on information whose legitimacy of use is often doubtful at best. After deployment, the model generates output based on customer data and proprietary or other sensitive information it has previously been exposed to, and it may inadvertently reveal that information. This must be avoided.

It could also generate material protected by copyright or other intellectual property rights, exposing your organisation to liability and regulatory compliance risk. For this reason, limit ChatGPT use to prompts that don’t deliberately aim to uncover such sensitive data, and disallow unchecked use of output whose legitimacy is even remotely in doubt.

Not only does ChatGPT generate information that can carry risk, it also continues to ingest such information through the prompts it is fed. Until better solutions come to market, don’t allow any cut-and-paste of enterprise content, such as emails, reports, chat logs or customer data, into ChatGPT prompts.
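
A stopgap many teams can implement today is a pre-submission filter that screens prompts for obviously sensitive patterns before they leave the organisation. The sketch below assumes a hypothetical customer-ID format and is no substitute for proper data-loss-prevention tooling, but it illustrates the idea.

```python
# Minimal sketch of a pre-submission filter that blocks prompts containing
# likely enterprise data before they reach an external model. The patterns
# are examples only; real deployments should rely on proper DLP tooling.

import re

BLOCK_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "customer ID":   re.compile(r"\bCUST-\d{6}\b"),        # hypothetical ID format
    "card number":   re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns detected in the prompt."""
    return [name for name, pattern in BLOCK_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarise this complaint from jane.doe@example.com about CUST-104233."
violations = check_prompt(prompt)
if violations:
    print("Blocked: prompt appears to contain", ", ".join(violations))
else:
    print("Prompt cleared for submission.")
```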

Promote tolerance 

ChatGPT has neither good nor bad intentions. Yet concerns remain about how its responses to prompts may be received by users, and, just as important, how those responses are subsequently used.

Ethical discussions will continue to evolve through experience and through making mistakes. Regulation will emerge over time, and we will learn from the laws that are put in place. Technologies will evolve, and we will learn through best practices.

For now, promote tolerance for inaccuracy by keeping ChatGPT functionality in beta for an extended period, so that people don’t expect perfect results. Until regulation and best practices catch up, push your teams to think through the ethical implications of using ChatGPT and to apply established best practices.


Frank Buytendijk is a distinguished VP analyst in Gartner’s Innovation and Disruption team, focused on the future of digital ethics and society. He will be presenting at the Gartner Data & Analytics Summit in Sydney, 31 July-1 August. 
