Companies’ fears grow about AI assistants
Businesses are increasingly wary of using generative AI assistants at work, with large rises in those concerned about errors and security, a survey of 79,000 companies in five countries suggests.
Only one in ten of the small and medium-sized businesses polled in Britain, Ireland, New Zealand, Australia and Canada regularly made use of artificial intelligence assistants such as ChatGPT and Gemini.
Of those that did, more reported concerns about the quality of the responses and about where the data they entered was stored than in the same survey conducted last year by Peninsula, the HR and employment law specialist.
Forty per cent of the businesses from the UK cited inaccuracies in the information provided as a key concern, up from 14 per cent last year. There were also increases in those limiting their use because of the risk of reputational damage and the risk of breaking laws, such as those governing data protection.
It has resulted in a fall of six percentage points in the proportion of businesses crediting generative AI assistants with having a transformational impact at work. Many more see them as useful tools that need to be used carefully and selectively. For the second year running, Canadian businesses were the most cautious, with only 23 per cent believing that generative AI would be transformative in the workplace.
Alan Price, Peninsula’s chief operations officer, said: “The legal risks around compliance, reputational damage and loss of intellectual property are believed to be of great concern for small businesses. While AI can certainly speed up processes, there is still a very long way to go before employers could rely on AI alone.”
Among the comments from companies taking part in the survey were those highlighting how generative AI services were being misused: “We currently receive many job applications with CVs written by AI — they are usually badly written, confused and often not representative of the applicant,” one business owner said.
A charity was concerned about where the data it entered might be stored. “[The] harvesting of sensitive information from emails and performance reports and its uncontrolled reuse is concerning for us as a charity whose reputation is based on confidentiality,” it said.