How To Mitigate The Enterprise Security Risks Of LLMs

Christopher Savoie, PhD, is the CEO & founder of Zapata AI. He is a published scholar in medicine, biochemistry and computer science. Read Christopher Savoie's full executive profile here.

Since ChatGPT came out last year, large language models (LLMs) have been on the tip of every enterprise leader’s tongue. These AI-powered tools promise to dramatically increase productivity by automating or assisting with the creation of marketing content, sales materials, regulatory documents, legal contracts and more, while transforming customer service with more responsive, human-like chatbots.

By default, ChatGPT saves users’ chat history and repurposes it to further train its models. It’s possible this data could then be exposed to other users of the tool. If you use an external model provider, find out how prompts and replies can be used, whether they are used for training, and how and where they are stored.
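As an illustration of one common safeguard, the Python sketch below redacts obviously sensitive strings from a prompt before it leaves the enterprise boundary and reaches an external provider. The regex patterns and the sample prompt are placeholders, not a substitute for a vetted data-loss-prevention tool.

```python
import re

# Illustrative patterns only; a real deployment would rely on a vetted PII/DLP library.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt is sent to an external model."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

# Usage: sanitize the prompt, then hand it to whatever provider client you use.
safe_prompt = redact("Contact jane.doe@example.com about card 4111 1111 1111 1111.")
print(safe_prompt)
```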

To avoid these risks entirely, enterprises should consider training and running their AI chatbot tools within their own secure environment: private cloud, on-premises—whatever the enterprise considers secure. This approach not only ensures LLM applications abide by the same security policies as the rest of the enterprise’s IT stack, but it also gives enterprises more control over the cost and performance of their models.
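As a rough sketch of what running a model inside your own environment can look like, the snippet below loads an open-weight model with the Hugging Face transformers library. The model name is only an example; in practice it would be whichever approved model the enterprise hosts in its own infrastructure.

```python
# Minimal sketch of serving an LLM locally with the Hugging Face `transformers` library.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example only; substitute your approved model
    device_map="auto",                           # place weights on available local hardware
)

response = generator(
    "Draft a one-paragraph summary of our Q3 security policy changes.",
    max_new_tokens=200,
)
print(response[0]["generated_text"])
```

Because the prompt and the generated text never leave the enterprise environment, they fall under the same security policies and monitoring as the rest of the IT stack.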

A model trained or fine-tuned on proprietary data effectively encodes that data, so model security is as important as, or more important than, data security. Keeping your models in-house gives you more control over the security measures that protect them. You may also want to consider model obfuscation: in other words, making your models unintelligible without a separate decoding key. Think of it like encryption, but for LLMs.
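As a simplified illustration of that idea, the snippet below encrypts a serialized model file with a symmetric key using the cryptography package. The file names are hypothetical, and a production obfuscation scheme would likely go further than encrypting weights at rest, but the principle is the same: without the key, the model is unusable.

```python
# Simplified illustration: keep model artifacts unintelligible without a key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store this in your enterprise secrets manager
cipher = Fernet(key)

with open("model_weights.bin", "rb") as f:      # hypothetical serialized model file
    ciphertext = cipher.encrypt(f.read())

with open("model_weights.bin.enc", "wb") as f:  # only the encrypted copy is stored or shipped
    f.write(ciphertext)

# At load time, only holders of the key can recover usable weights.
weights = cipher.decrypt(ciphertext)
```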
