Business Leaders Can’t Agree on Who’s to Blame for AI Mistakes

A Tech.co survey has found that not everyone agrees on when it's okay to use AI at work, and who's to blame for its mistakes.

With ChatGPT, Bard, and other AI tools now a common fixture of many employees’ day-to-day lives, it’s only natural that ethical questions concerning their application and usage in workplaces are starting to arise.

A Tech.co survey conducted in July 2023 found that business leaders are divided on who should take responsibility for mistakes made by AI in the workplace. Almost a third think that employees completing important tasks with AI tools are solely to blame for errors, while a marginally higher percentage reckon that the employee and their manager are both partly responsible and should share the blame.

The same survey also revealed that 82% of business leaders think it’s okay to use AI tools like ChatGPT to write responses to colleagues.

Tech.co’s Survey of Business Leaders and Decision Makers

Recently, Tech.co asked a group of 86 business leaders and decision-makers several ethical questions about the use of AI tools like ChatGPT in their place of work.

Using AI in day-to-day business operations is a relatively new phenomenon for a lot of companies, so we thought it would be interesting to find out if decision-makers are aligned on what they consider to be proper, ethical practices in this context.

Those we spoke to were not required to respond to every question to be included in our survey, so we’ve highlighted how many people responded to each question in the sections below.


Businesses Divided on Who Should Take Responsibility for AI Mistakes

69 business leaders and decision-makers responded to our question: “If a manager gives permission to an employee to use an AI tool such as ChatGPT to complete an important task, and the AI tool makes a consequential error, who is responsible – the AI tool, the employee, or the manager?”

Almost a third of respondents (31.9%) lay the blame solely at the feet of the employee or “user” operating the AI tool used to complete the important task.

Many pointed out that an AI tool is simply one of many tools the average employee now uses at work, and that the employee is therefore responsible for ensuring it’s used appropriately.

A slightly higher proportion of respondents – 33.3% – say that the blame should be shared between the employee who used the AI tool and the manager responsible for them.

Just over a quarter (26.1%), on the other hand, believe that all three parties – the AI tool, the employee, and the manager – share some sort of responsibility for the mistake.

For some respondents, holding the AI tool accountable for failing to fulfill its purpose is important – others say it isn’t possible to hold a software application to account.

Only 5.8% of respondents say the manager is solely to blame, while 2.9% suggest that the employee and the AI tool are both partly responsible.

82% of Business Leaders Think It’s Okay to Use AI to Write Responses to Colleagues

68 business leaders and decision-makers responded to our question: “Do you think it is ethical to use an AI tool such as ChatGPT to write a response to a message from a colleague?”

The vast majority (82.4%) say that it is ethical to use AI to help write a response to a colleague. By contrast, only 8.8% consider this unacceptable practice.

The same percentage (8.8%) argued that it really depends on the message. Longer, more personal messages – ones that require a human touch – were highlighted as instances where the use of ChatGPT wouldn’t be appropriate.

Out of the 52 business leaders who responded to our follow-up question about whether AI usage should be disclosed in such responses, the vast majority (80.8%) believe it should be, while almost a fifth (19.2%) say there is no need to reveal it.

68% of Business Leaders Think Employees Shouldn’t Use AI Tools Without Permission

73 business leaders and decision-makers responded to our question: “Do you think it is ethical for an employee to use AI tools such as ChatGPT without their employer’s permission?”

68.5% think that employees shouldn’t be using AI tools like ChatGPT without express permission from an employer, manager, or supervisor. However, 12.3% of respondents to this question believe using AI tools without express permission is permissible.

The remaining 19.2% specified that it depends entirely on what the employee in question is planning to do. These respondents centered their context-dependent answers on the sensitivity of the data in use, the nature of the task at hand, and the company in question’s existing policies.

AI in the Workplace: Clear Guidelines Are Key

New ethical questions relating to AI’s usage in the workplace are being asked every day. The more inventive uses businesses find for ChatGPT and other AI tools, the more questions will arise about how to use them responsibly, ethically, and fairly.

If members of your team use AI tools like ChatGPT, providing clear guidelines on precisely how and when they can use them is key to avoiding the negative consequences of misuse.

Who should be involved in these conversations is up to you – every business will do things slightly differently. But maintaining an open dialogue with employees about how using AI tools can benefit your business – and also cost it dearly – is the easiest way to ensure no one slips up.


Written by:
Aaron Drapkin is Tech.co's Content Manager. He has been researching and writing about technology, politics, and society in print and online publications since graduating with a Philosophy degree from the University of Bristol six years ago. Aaron's focus areas include VPNs, cybersecurity, AI and project management software. He has been quoted in the Daily Mirror, Daily Express, The Daily Mail, Computer Weekly, Cybernews, Lifewire, HR News and the Silicon Republic speaking on various privacy and cybersecurity issues, and has articles published in Wired, Vice, Metro, ProPrivacy, The Week, and Politics.co.uk covering a wide range of topics.