Why banning generative AI at work won’t work
IT leader Jamie Moles advises companies to embrace AI by developing tools and processes to use it securely, because this tech phenomenon isn’t going anywhere.
Those who have been in the IT industry for any length of time know the truth: banning tools in the workplace doesn’t work. Back in the day, we monitored how long users spent online or blocked internet access altogether. Now businesses run entirely on internet-based tools and restrictions are minimal. Generative AI is on the same trajectory.
The explosion of generative AI tools – and their subsequent use in the workplace – has many IT leaders concerned about the security risks to their businesses. But if history tells us anything, the mass proliferation of this technology means that bans won’t work. In fact, a recent survey from ExtraHop found that 32pc of organisations have banned generative AI tools, yet only 5pc say employees never use these tools at work.
So, what should IT leaders do?
The reality of generative AI
Organisations might be able to block certain generative AI services at the domain level, but there are so many tools out there and many more coming: the maintenance effort needed to keep blocking emerging tools becomes unmanageable very quickly. And though ChatGPT no longer lets users generate obvious security threats, it is far from the only generative AI or large language model (LLM) tool out there.
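To see why that maintenance burden grows so fast, consider a minimal sketch of what domain-level blocking amounts to. The domain names and the `is_blocked` helper below are illustrative assumptions, not any particular product’s API; a real deployment would use a secure web gateway or DNS filter, but the core weakness is the same: someone has to keep the list current as new services launch.

```python
# Minimal sketch of domain-level blocking for generative AI services.
# The blocklist is illustrative: new tools appear weekly, so any static
# list like this starts decaying the day it is deployed.

from urllib.parse import urlparse

GENAI_BLOCKLIST = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    # ...every new service needs a manual entry, forever.
}

def is_blocked(url: str) -> bool:
    """Return True if the request targets a known generative AI domain."""
    host = (urlparse(url).hostname or "").lower()
    # Match the domain itself or any subdomain of it.
    return any(host == d or host.endswith("." + d) for d in GENAI_BLOCKLIST)

print(is_blocked("https://claude.ai/chat"))        # True
print(is_blocked("https://brand-new-llm.example")) # False: not yet on the list
```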
The only option is to accept that these tools are a new, permanent extension of the modern workplace, and recognise the real value they offer employees in boosting productivity. That potential to change the workplace and enhance the employee experience is reason enough that an outright ban is the wrong path. As with any new technology, generative AI still needs guardrails, and that means leaders need to focus on people, processes and tooling.
Improving company processes
Setting up good policies around generative AI is a critical first step to governing its use in the workplace. The good news is that there are already several solid models for safely storing and transmitting sensitive data – think GDPR, HIPAA and PCI-DSS. Applying general data security best practices when developing policies is a good starting point.
ExtraHop’s research found that 90pc of respondents want the government involved in regulating AI. While we have seen some initial movement, from the US AI executive order to the EU AI Act, it’s essential to remember that governments are slow-moving beasts. They will provide recommendations and guidance when they can, but the ultimate responsibility for securing corporate data lies with the corporation itself.
Training people
All company AI policies should include strong security training, which requires cooperation between non-technical leadership and the more technical leaders who oversee security. It’s equally important to involve the employees who actually use these tools: leaders can then understand the benefits of specific use cases and how restrictions might limit workflows, while making sure employees understand the risks of leaking sensitive corporate data.
AI training should be clear and straightforward. For example, users shouldn’t share anything with these tools that is marked as sensitive or internal information. The ultimate goal of training is to teach users to assess risk intuitively themselves – and if they don’t know, to ask. A good rule of thumb: anything you would not send in an email to an external organisation should never be shared with or uploaded to a generative AI service.
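For organisations that want to back that rule of thumb with a lightweight technical check, here is a minimal sketch of a pre-submission screen for classification markings. The marker strings are assumptions for illustration; a real control would live in a DLP product or browser plugin rather than a standalone script.

```python
# Sketch: refuse to forward text containing classification markings to a
# generative AI service. The marker strings are illustrative assumptions;
# a real control would be enforced by a DLP tool, not a script.

SENSITIVE_MARKERS = ("CONFIDENTIAL", "INTERNAL ONLY", "DO NOT DISTRIBUTE")

def safe_to_share(text: str) -> bool:
    """Apply the email rule of thumb: block anything marked sensitive."""
    upper = text.upper()
    return not any(marker in upper for marker in SENSITIVE_MARKERS)

prompt = "INTERNAL ONLY: Q3 revenue figures attached below..."
if not safe_to_share(prompt):
    print("Blocked: remove sensitive markings or ask security before sharing.")
```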
Investing in the right tools
The right security tools are a necessary backstop to security training. Though more than 80pc of IT and security leaders said they were confident their organisation’s current security stack could protect them from generative AI threats, 74pc still plan to invest in generative AI security measures.
The most valuable way to protect against data leakage through generative AI is to audit how the tools are used, so visibility measures that help monitor data transfers are key.
Most organisations already have the basic tools for seeing who is accessing which sites and how often: firewalls and proxies. Both can monitor every site users connect to on the internet.
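As a rough illustration of that kind of auditing, the sketch below tallies per-user visits to generative AI domains from a proxy access log. The log path, its assumed space-separated layout (timestamp, user, URL) and the domain list are placeholders for this example; real proxy formats differ, but the counting logic carries over.

```python
# Sketch: tally per-user visits to generative AI domains from a proxy log.
# Assumed log format (one request per line): "<timestamp> <user> <url>"
# Real proxy logs (Squid, Zscaler, etc.) have different fields; adjust parsing.

from collections import Counter
from urllib.parse import urlparse

GENAI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}

def genai_visits(log_path: str) -> Counter:
    visits = Counter()
    with open(log_path) as log:
        for line in log:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip malformed lines
            user, url = parts[1], parts[2]
            host = (urlparse(url).hostname or "").lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                visits[user] += 1
    return visits

for user, count in genai_visits("proxy_access.log").most_common(10):
    print(f"{user}: {count} generative AI requests")
```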
Certain security appliances can offer even more insight by measuring bytes in and bytes out. If IT leaders have broad network visibility and see users sending out more bytes than they should – in the form of data uploaded to LLMs – they can assess the risk quickly and address the problem easily.
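A simple way to act on that byte-level visibility is to flag clients whose outbound volume to LLM endpoints exceeds a policy threshold. The flow records, domain list and 10MB threshold below are all assumptions for illustration; a network detection and response platform would do this with far richer context.

```python
# Sketch: flag clients whose bytes-out to LLM domains exceed a policy threshold.
# Flow records and the 10 MB/day threshold are illustrative assumptions; in
# practice the figures would come from a flow collector or NDR sensor.

from collections import defaultdict

LLM_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai"}
BYTES_OUT_THRESHOLD = 10 * 1024 * 1024  # assumed policy: 10 MB/day per client

def flag_heavy_uploaders(flows):
    """Sum bytes-out per client to LLM domains; return clients over threshold."""
    totals = defaultdict(int)
    for client, host, bytes_out in flows:
        if host in LLM_DOMAINS:
            totals[client] += bytes_out
    return sorted(c for c, total in totals.items() if total > BYTES_OUT_THRESHOLD)

flows = [
    ("10.0.0.5", "chat.openai.com", 4_200),   # a few prompts: normal
    ("10.0.0.9", "chatgpt.com", 52_000_000),  # ~50 MB out: likely a document dump
]
print(flag_heavy_uploaders(flows))  # ['10.0.0.9'] under these assumptions
```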
Leaning into innovation
In the year and more since ChatGPT made its debut, generative AI technology has made massive strides across the industry. As it is used more on personal devices for messages, searches and more, we’ll quickly hit a stage where users expect to use these tools on their corporate devices – and they won’t necessarily wait for approval, nor will they know they’re doing something wrong without the right guardrails.
Fortunately, we live in a technology-first society, and many have been quick to raise concerns and create awareness around the risks of generative AI.
To a large extent, this loud and early alarm will encourage AI companies to improve safety in their generative AI products and to self-regulate to meet market expectations.
That being said, relying on AI companies’ self-regulation or on government guidance is not an answer, nor is implementing a ban and hoping users will adhere to it. Generative AI is already bringing massive benefits to the workplace, and it’s poised to deliver much more. To take advantage of these benefits as safely as possible, leaders need to integrate generative AI planning into their people, processes and tooling. Generative AI is here to stay.
Jamie Moles is senior technical manager at ExtraHop, a cloud-native cybersecurity solutions provider. He brings more than 30 years of cybersecurity experience helping customers understand and mitigate the risks that contemporary threats pose to their business.