Mind the gap: AI leaders pulling ahead as LLMs take off
Advice on how to implement large language models safely within an enterprise IT environment
By Oliver King-Smith
Published: 05 Jan 2024
When ChatGPT burst onto the scene, it reached 100 million users within two months, the fastest adoption of any consumer application in history, and demonstrated the power and potential of large language models (LLMs). Google, Facebook, and Anthropic quickly followed with models of their own.
Companies that have fully embraced AI strategies, technologies, and processes are accelerating ahead, while laggards risk being left behind. The most powerful AI engines to date are LLMs, and forward-thinking enterprises are developing strategies for applying this revolutionary tool.
But are large language models safe? This is the most common question raised by their prospective users, and the fear and confusion are understandable.
Did you know that you don’t need to share data to leak information?
Simply asking ChatGPT a question can reveal internal knowledge about your organisation’s future plans, because every prompt is sent to an external service. Microsoft has advised its employees to avoid using ChatGPT because of security risks, despite being the largest shareholder in OpenAI.
How to take advantage of LLMs safely and responsibly
Private LLMs are models run inside an organisation’s internal IT infrastructure without relying on any outside connections. By keeping these models within its own secure infrastructure, an enterprise can protect its knowledge and data.
Private models need buy-in from all stakeholders in the organisation, and a risk assessment should be carried out prior to implementation. Once deployed, companies should have well-defined policies for their use. As with any critical IT resource, access controls need to be implemented for key employees, especially when the models handle sensitive information.
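Such access controls can sit in a thin policy layer in front of the model. The sketch below is illustrative only: the role names, document tags, and the placeholder response are all hypothetical, standing in for whatever permission scheme an organisation already runs.

```python
# A minimal sketch of role-based access control in front of a private LLM.
# Roles, tags, and the stubbed model call are hypothetical examples, not
# the API of any particular product.

SENSITIVE_TAGS = {"finance", "legal", "hr"}

ROLE_PERMISSIONS = {
    "analyst": {"general"},
    "counsel": {"general", "legal"},
    "cfo": {"general", "finance", "legal", "hr"},
}

def can_query(role: str, document_tags: set[str]) -> bool:
    """Allow a query only if the role is cleared for every tag it touches."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return document_tags <= allowed  # subset check: all tags must be cleared

def guarded_query(role: str, prompt: str, document_tags: set[str]) -> str:
    if not can_query(role, document_tags):
        return "Access denied: insufficient clearance for requested data."
    # In a real deployment this would forward the prompt to the private model.
    return f"[model response to: {prompt!r}]"
```

The point of the design is that the permission check happens before any sensitive document ever reaches the model’s context, so a denied query leaks nothing.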
Organisations required to comply with standards such as ITAR (International Traffic in Arms Regulations), GDPR (General Data Protection Regulation) and HIPAA (Health Insurance Portability and Accountability Act) need to consider whether their LLMs are compliant. For example, lawyers have been caught preparing cases with ChatGPT, putting attorney-client privilege at clear risk by sharing client information with an outside service.
With private models, the enterprise controls the model’s training, ensuring the training dataset is appropriate and that the resulting model is standards compliant. Sensitive data handled at run time can be confined to the model’s short-term memory, known as the context, and discarded rather than retained. This ability to split knowledge between permanent storage (the model’s weights) and short-term storage (the context) provides great flexibility in designing standards-compliant systems.
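The weights-versus-context split described above can be sketched in a few lines. The `Session` class below is a hypothetical illustration, not a real library API: sensitive material lives only in a per-session buffer and is discarded when the session closes, while the model’s permanent knowledge is never updated with it.

```python
# A minimal sketch of an ephemeral context. Sensitive data exists only in
# short-term memory for the life of a session; closing the session discards
# it, so nothing is written back into permanent storage.

class Session:
    def __init__(self) -> None:
        self.context: list[str] = []   # short-term memory (the "context")

    def add(self, message: str) -> None:
        self.context.append(message)

    def close(self) -> None:
        self.context.clear()           # sensitive data is not persisted

session = Session()
session.add("Patient record 1234: diagnosis details...")
session.close()
# After close(), the context is empty: the sensitive record was never retained.
```

In practice the same principle applies whether the context is an in-memory list, a conversation buffer, or a retrieval cache: compliance is easier to argue when sensitive inputs demonstrably never outlive the session.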
Another huge advantage private models have over ChatGPT is that they can learn the “tribal knowledge” within an organisation, which is often locked away in emails, internal documents, project management systems and other data sources. Capturing this rich storehouse in a private model enhances its ability to operate inside your enterprise.
A divide is growing between the “AI haves” and “have-nots”. But as with any new technology, it’s important to understand the risks and rewards across the organisation before jumping to a solution. With good project management and involvement of all stakeholders, enterprises can implement AI securely and effectively through private LLMs, providing the safest way to deploy responsible AI agents.
Oliver King-Smith is CEO of smartR AI, a company which develops applications based on the evolution of interactions, behaviour changes, and emotion detection.