Is your cloud network ready to embrace it?
Paul Gampe, Chief Technology Officer at Console Connect, discusses how to deploy generative AI safely and securely into cloud networks.
Generative Artificial Intelligence is inserting itself into nearly every sector of the global economy, as well as many aspects of our lives. People are already using this groundbreaking technology to query their bank statements, request medical prescriptions, and even write poems and university essays.
In the process, generative AI has the potential to unlock trillions of dollars in value for businesses and radically transform the way we work. In fact, current predictions suggest generative AI could automate activities that absorb up to 70% of employees’ time today.
But regardless of the application or industry, the impact of generative AI is felt most keenly in the cloud computing ecosystem.
As companies rush to leverage this technology in their cloud operations, it is essential to first understand the network connectivity requirements – and the risks – before deploying generative AI models safely, securely, and responsibly.
Data processing
One of the primary connectivity requirements for training generative AI models in public cloud environments is affordable access to datasets at scale. By their very definition, large language models (LLMs) are extremely large. Training them requires vast amounts of data and hyper-fast computing, and the larger the dataset, the greater the demand for computing power.
The enormous processing power required to train these LLMs is only one piece of the jigsaw. You also need to manage the sovereignty, security, and privacy requirements of the data transiting your public cloud. Given that 39% of businesses experienced a data breach in their cloud environment in 2022, it makes sense to explore the private connectivity products on the market that have been designed specifically for high-performance and AI workloads.
Regulatory trends emerging in the generative AI landscape
Companies should pay close attention to the key public policies and regulatory trends that are rapidly emerging around the AI landscape. Think of a large multinational bank in New York with 50 mainframes on its premises holding its primary computing capacity: it wants to run AI analysis on that data, but it cannot use the public internet to connect to these cloud environments because many of its workloads have regulatory constraints. Private connectivity, by contrast, lets the bank reach the places where generative AI capability exists while staying within the regulatory frameworks of the financial industry.
Even so, the maze of regulatory frameworks globally is very complex and subject to change. The developing mandates of the General Data Protection Regulation (GDPR) in Europe, as well as new GDPR-inspired data privacy laws in the United States, have taken a privacy-by-design approach whereby companies must implement techniques such as data mapping and data loss prevention to make sure they know where all personal data is at all times and protect it accordingly.
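To make the data-mapping idea concrete, here is a minimal, illustrative sketch of the kind of check a privacy-by-design pipeline might run before records leave a controlled environment. The field names and detection patterns are examples only, not a complete DLP solution.

```python
import re

# Illustrative patterns for spotting personal data in records.
# Real data-mapping tools use far richer classifiers; these regexes
# are examples only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\+44\s?\d{4}\s?\d{6}"),
}

def map_personal_data(record: dict) -> dict:
    """Return a map of field name -> detected personal-data types for one record."""
    findings = {}
    for field, value in record.items():
        hits = [name for name, pattern in PII_PATTERNS.items()
                if isinstance(value, str) and pattern.search(value)]
        if hits:
            findings[field] = hits
    return findings

record = {
    "name": "A. Customer",
    "contact": "a.customer@example.com",
    "note": "called on +44 7700 900123",
}
print(map_personal_data(record))  # fields that contain detectable personal data
```

A check like this, run at the boundary where data enters a cloud pipeline, gives you the "know where all personal data is at all times" visibility the regulations call for.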
Sovereign borders
As the world becomes more digitally interconnected, the widespread adoption of generative AI technology will likely create long-lasting challenges around data sovereignty. This has already prompted nations to enact their own legislation regarding where data can be stored and where the LLMs processing that data can be housed.
Some national laws require certain data to remain within the country’s borders, but this does not necessarily make it more secure. For instance, if your company uses the public internet to transfer customer data to and from a public cloud service in London, then even though the data may be travelling within London, somebody can still intercept it and route it elsewhere around the world.
As AI legislation continues to expand, the only way your company can be assured of staying within its sovereign border may be to use some form of private connectivity while the data is in transit. The same applies to training AI models on the public cloud: companies will need connectivity from their private cloud to the public cloud where training takes place, and can then use that same private connectivity to bring their inference models back.
Latency and network congestion
Latency is a critical factor in any interaction with people. We have all become latency-sensitive, especially given the volume of voice and video calls we experience daily. Yet the massive datasets used for training AI models can lead to serious latency issues on the public cloud.
For instance, if you’re chatting with an AI bot that’s providing you with customer service and latency begins to exceed ten seconds, the dropout rate accelerates. Using the public internet to connect your customer-facing infrastructure with your inference models therefore puts a seamless online experience at risk, and a change in response time could impact your ability to provide meaningful results.
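One practical response to that ten-second threshold is to enforce an explicit latency budget around every inference call. The sketch below is illustrative: `call_inference` is a hypothetical stand-in for a network call to your model, and the budget value simply mirrors the figure cited above.

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

LATENCY_BUDGET_S = 10.0  # beyond roughly this point, chatbot dropout accelerates

def call_inference(prompt: str) -> str:
    """Hypothetical stand-in for a network round trip to an inference model."""
    time.sleep(0.05)  # simulated network + model latency
    return f"answer to: {prompt}"

def answer_within_budget(prompt: str, budget_s: float = LATENCY_BUDGET_S) -> str:
    """Return the model's answer, or a holding message if the budget is blown.

    Note: the timed-out call keeps running in its worker thread; a production
    system would also cancel or abandon it.
    """
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(call_inference, prompt)
        try:
            return future.result(timeout=budget_s)
        except TimeoutError:
            return "Sorry, this is taking longer than usual. Please hold on."

print(answer_within_budget("When does my card expire?"))
```

Instrumenting calls this way also gives you the response-time data you need to notice when a congested path is degrading the customer experience.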
Network congestion, meanwhile, could impact your ability to build models on time. If you have significant congestion in getting fresh data into your LLMs, it will start to backlog, and you won’t achieve the learning outcomes you’re hoping for. The way to overcome this is to provision large pipes, so that you don’t encounter congestion when moving your primary datasets to where you’re training your language model.
Responsible governance
One thing everybody is talking about right now is governance. In other words, who gets access to the data, and where is the audit trail showing that access to that data was approved?
Without proper AI governance, companies could face severe consequences, including commercial and reputational damage. A lack of supervision when implementing generative AI models on the cloud could easily lead to errors and violations, not to mention the potential exposure of customer data and other proprietary information. Simply put, the trustworthiness of generative AI depends on how companies use it.
Examine your cloud architecture before deploying generative AI
Generative AI is a transformative field with untold opportunities for countless businesses, but IT leaders cannot afford to get their network connectivity wrong before deploying its applications.
Remember, data accessibility is everything when it comes to generative AI, so it is essential to define your business needs in relation to your existing cloud architecture. Rather than navigating the risks of the public cloud, forward-thinking companies can gain a first-mover advantage from the high-performance flexibility of a Network-as-a-Service (NaaS) platform.
The agility of NaaS connectivity makes it simpler and safer to adopt AI systems by interconnecting your clouds with a global network infrastructure that delivers fully automated switching and routing on demand. What’s more, a NaaS solution also incorporates emerging network technology that supports the governance requirements of generative AI, both for your broader business and for safeguarding your customers.