NVIDIA made an open source tool for creating safer and more secure AI models

NeMo Guardrails is available through the company's AI Foundations service.

Since March, NVIDIA has offered AI Foundations, a service that allows businesses to train large language models (LLMs) on their own proprietary data. Today the company is introducing NeMo Guardrails, a tool designed to help developers ensure their generative AI apps are accurate, appropriate and safe.

NeMo Guardrails allows software engineers to enforce three different kinds of limits on their in-house LLMs. Specifically, firms can set “topical guardrails” that will prevent their apps from addressing subjects they weren’t trained to tackle. For instance, NVIDIA suggests a customer service chatbot would, with the help of its software, decline to answer a question about the weather. Companies can also set safety and security limits that are designed to ensure their LLMs pull accurate information and connect to apps that are known to be safe.

According to NVIDIA, NeMo Guardrails works with all LLMs, including ChatGPT. What’s more, the company claims nearly any software developer can use the software. “No need to be a machine learning expert or data scientist,” it says. Since NeMo Guardrails is open source, NVIDIA notes it will also work with all the tools enterprise developers already use.

NVIDIA is incorporating NeMo Guardrails into its existing NeMo framework for building generative AI models. Business customers can gain access to NeMo through the company’s AI Enterprise software platform. NVIDIA also offers the framework through its AI Foundations service. The release of NeMo Guardrails comes after some of the most high-profile generative AIs, including Microsoft Bing and Google Bard, have come under the microscope for their tendency to “hallucinate” information. In fact, Google’s chatbot made a factual error during its first public demo.

“NVIDIA made NeMo Guardrails — the product of several years’ research — open source to contribute to the developer community’s tremendous energy and work on AI safety,” NVIDIA said. “Together, our efforts on guardrails will help companies keep their smart services aligned with safety, privacy and security requirements so these engines of innovation stay on track.”

If you want to read a deep dive into how NeMo Guardrails works, NVIDIA has published a blog post on the subject that also shares information on how to get started with the software.
