Since ChatGPT came out last year, Large Language Models (LLMs) have been on the tip of every enterprise leader's tongue. These AI-powered tools promise to dramatically increase productivity by automating or assisting with the creation of marketing content, sales materials, regulatory documents, legal contracts and more, while transforming customer service with more responsive, human-like chatbots.
By default, ChatGPT saves users' chat history and may repurpose it to further train the underlying models. It is possible this data could then be exposed to other users of the tool. If you use an external model provider, be sure to find out how prompts and replies can be used, whether they are used for training, and how and where they are stored.
To avoid these risks entirely, enterprises should consider training and running their AI chatbot tools within their own secure environment, whether that is a private cloud, an on-premises deployment or whatever else the enterprise considers secure. This approach not only ensures LLM applications abide by the same security policies as the rest of the enterprise's IT stack, but also gives enterprises more control over the cost and performance of their models.
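One practical shape this takes is exposing the in-house model behind an internal, OpenAI-compatible HTTP endpoint (as self-hosted inference servers such as vLLM commonly do), so application code never sends prompts outside the enterprise network. The sketch below assumes a hypothetical internal URL and model name; both are placeholders, not real services.

```python
import json
import urllib.request

# Hypothetical in-house inference endpoint (e.g. a self-hosted,
# OpenAI-compatible server running inside the corporate network).
# The URL and model name are illustrative placeholders.
INTERNAL_LLM_URL = "http://llm.internal.example.com/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "in-house-llm") -> urllib.request.Request:
    """Build a chat-completion request aimed at the internal endpoint,
    so prompts and replies never leave the enterprise's own environment."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        INTERNAL_LLM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Summarise this contract clause.")
print(req.full_url)
```

Because the client only ever targets the internal hostname, the same network controls and security policies that govern the rest of the IT stack automatically apply to every LLM call.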
Model security is thus as important as, or more important than, data security. Keeping your models in-house gives you more control over the security measures that protect them. You may also want to consider model obfuscation: in other words, making your model weights unintelligible without a separate decoding key. Think of it like encryption, but for LLMs.
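The idea can be sketched as encrypting the serialized model weights at rest with an enterprise-held key. The snippet below is a toy keystream cipher built from SHA-256, purely to illustrate the round-trip; a real deployment should use a vetted library (for example, AES-GCM via the `cryptography` package) rather than any homemade construction.

```python
import hashlib

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """XOR `data` with a SHA-256-derived keystream. Applying the same
    key twice round-trips, so one function both obfuscates and restores.
    Illustrative only; not a production-grade cipher."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        stream.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# Pretend these bytes are serialized model weights (placeholder data).
weights = b"\x00\x10\x7f model weight tensor bytes ..."
key = b"enterprise-held secret key"

protected = keystream_xor(weights, key)   # what sits on disk
restored = keystream_xor(protected, key)  # what the runtime loads
```

Without the key, the on-disk file is unintelligible; with it, the runtime recovers the original weights exactly, which is the "encryption but for LLMs" idea in miniature.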