Kong Beats Chest To Ward Off Bad AI Connections

Kong Inc. is expanding its API gateway capabilities to tackle the growing security and governance challenges of AI systems. The company’s updated Kong AI Gateway introduces new features to prevent hallucinations in large language models and protect personal information across multiple languages and AI providers. These enhancements reflect how critical API management has become in creating secure, reliable connections between AI models and the data resources they depend on.
The big picture: Kong’s gateway technology now provides specialized AI security and management features designed to regulate how AI applications connect to various data resources and services (a minimal client-side sketch of this pattern follows the bullets below).
- The updated Kong AI Gateway offers automated retrieval-augmented generation (RAG) pipelines to prevent hallucinations in large language models.
- The system now includes personally identifiable information (PII) sanitization across 12 languages and multiple AI providers, enhancing privacy protection.
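To make the connection pattern concrete, here is a minimal, hypothetical sketch in Python of an application calling an LLM through a gateway route instead of a provider's API directly. The gateway URL, route, and OpenAI-style response shape are assumptions made for illustration; none of these names come from Kong's documentation.

```python
import requests

# Hypothetical gateway route; a real deployment would expose whatever route
# the operations team configures in front of the chosen LLM provider(s).
GATEWAY_URL = "https://ai-gateway.example.com/llm/chat"


def ask_via_gateway(prompt: str) -> str:
    """Send a chat prompt through the gateway rather than to a provider directly.

    The application only knows the gateway endpoint; provider selection,
    credentials, PII scrubbing, and RAG enrichment would happen behind it.
    An OpenAI-style response shape is assumed here purely for illustration.
    """
    response = requests.post(
        GATEWAY_URL,
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_via_gateway("Summarize our refund policy in two sentences."))
```

The point of the pattern is that security and governance controls live at the gateway, so every application inherits them without embedding provider keys or policy logic in its own code.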
Key technical features: The gateway includes several specialized components to address AI-specific connectivity challenges.
- Kong’s PII plugin sanitizes and protects personal data, passwords, and codes across multiple languages and AI platforms (a rough sketch of the sanitization idea follows this list).
- The system supports RAG pipelines built on bounded endpoints that define the specific purpose and limits of each AI model connection.
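As an illustration of the sanitization idea only, not Kong's actual plugin logic, the sketch below redacts a few common PII patterns from a prompt before it would be forwarded upstream. The regular expressions and placeholder tokens are invented for this example; a production sanitizer would use trained recognizers covering many more entity types and all of the advertised languages.

```python
import re

# Minimal, illustrative patterns only; a real sanitizer would cover far more
# entity types (names, addresses, national ID formats) and rely on trained
# recognizers rather than simple regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def sanitize(text: str) -> str:
    """Replace recognizable PII with placeholder tokens before the prompt
    is forwarded to an upstream model provider."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text


print(sanitize("Contact Jane at jane.doe@example.com or +1 415 555 0100."))
# -> "Contact Jane at <EMAIL> or <PHONE>."
```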
Why this matters: As AI applications proliferate, managing the connections between models, data resources, and services has become a critical security and governance issue.
- Unmanaged or poorly secured connections between AI systems and data resources can lead to security vulnerabilities and compliance failures.
- Kong’s approach treats AI connectivity as a specialized form of API management that requires dedicated tools and guardrails.
How it works: Kong’s technology acts as a gatekeeper that determines what should connect with what across AI architectures.
- The gateway functions as a management layer for API connections, providing governance guardrails for AI systems.
- It helps prevent LLM hallucinations by managing how models access and incorporate additional domain-specific knowledge through RAG functions (a toy retrieval sketch follows this list).
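For a sense of how RAG grounding curbs hallucination, here is a toy sketch of the basic mechanism: retrieve domain snippets relevant to a question and instruct the model to answer only from them. The documents, keyword scoring, and prompt wording are illustrative assumptions, not Kong's automated pipeline, which would typically use embeddings and a vector store.

```python
# Illustrative only: a toy retrieval step standing in for an automated RAG
# pipeline. Simple keyword overlap selects the domain snippets that get
# injected into the prompt.
DOMAIN_DOCS = [
    "Refunds are issued within 14 days of a returned item being received.",
    "Gateway rate limits default to 60 requests per minute per consumer.",
    "PII sanitization supports 12 languages across all configured providers.",
]


def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = sorted(DOMAIN_DOCS, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]


def build_grounded_prompt(question: str) -> str:
    """Constrain the model to answer only from retrieved context, which is the
    basic mechanism RAG uses to curb hallucination."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer using only the context below. If the answer is not in the "
        f"context, say you do not know.\n\nContext:\n{context}\n\nQuestion: {question}"
    )


print(build_grounded_prompt("How long do refunds take?"))
```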