How Portkey is Helping Enterprises Safely Deploy LLMs

AI Gateway advances with integrated guardrails: Portkey, an AI infrastructure company, has introduced guardrails to its Gateway framework, addressing a critical challenge in deploying Large Language Models (LLMs) in production environments.

  • Portkey’s AI Gateway, which processes billions of LLM tokens daily, now incorporates guardrails to enhance control over LLM outputs and mitigate unpredictable behaviors.
  • This integration aims to solve issues such as hallucinations, factual inaccuracies, biases, and potential privacy violations in LLM responses.

The evolution of Portkey’s AI Gateway: The company’s journey began with addressing operational challenges in deploying LLM applications, which led to the development of its open-source AI Gateway.

  • Initially, Portkey focused on solving “ops” challenges like debugging LLM requests, monitoring costs, and streamlining prompt iterations.
  • The Gateway has since expanded to handle request/response transformations across more than 200 LLMs, making those integrations more robust (see the sketch below).
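
To make the idea concrete, here is a minimal sketch of what calling many providers through a single OpenAI-compatible gateway endpoint can look like. The endpoint URL, the routing header, and the model identifiers are illustrative assumptions, not Portkey’s documented API.

```python
# Minimal sketch: one request shape for many LLM providers, sent through a
# gateway that performs the provider-specific request/response transformation.
# The endpoint URL and the "x-provider" header are hypothetical.
import requests

GATEWAY_URL = "http://localhost:8787/v1/chat/completions"  # assumed local gateway

def chat(provider: str, model: str, prompt: str) -> str:
    response = requests.post(
        GATEWAY_URL,
        headers={
            "x-provider": provider,                      # hypothetical routing header
            "Authorization": "Bearer <provider-api-key>",
        },
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

# The call shape stays the same regardless of the backend provider.
print(chat("openai", "gpt-4o-mini", "What does an AI gateway do?"))
print(chat("anthropic", "claude-3-haiku-20240307", "What does an AI gateway do?"))
```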

Addressing core LLM behavior: Despite the Gateway’s success in operational aspects, the unpredictability of core LLM behavior remained a significant concern for production deployments.

  • LLMs can produce outputs that are completely fabricated or factually incorrect.
  • These models may also exhibit biases, breach privacy norms, or potentially cause harm to organizations using them.

Industry recognition of the challenge: The need for better control over LLM outputs has been highlighted by industry experts as a crucial component for building generative AI platforms.

  • Chip Huyen, a prominent voice in the AI community, emphasized the importance of guardrails in her guide on “Building a Gen AI Platform.”

Integration of guardrails into the Gateway: Portkey’s solution involves incorporating guardrail systems directly into their Gateway framework.

  • This integration lets the Gateway orchestrate LLM requests based on a guardrail’s verdict, providing precise control over LLM behavior (a sketch of the idea follows this list).
  • The combination brings together interoperability, routing, and guardrails within a single Gateway solution.
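
The sketch below illustrates what verdict-based orchestration can mean in practice: checks run on an LLM’s output, and the result decides whether the response is returned or the request is rerouted to a fallback backend. The check functions, the Verdict type, and the fallback policy here are all hypothetical stand-ins, not Portkey’s actual guardrails interface.

```python
# Illustrative sketch of orchestrating LLM calls on a guardrail verdict:
# run checks on the output and reroute to a fallback backend if any fail.
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    passed: bool
    reason: str = ""

def no_email_addresses(text: str) -> Verdict:
    """Toy output check: flag anything that looks like an email address."""
    if re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text):
        return Verdict(False, "possible email address in output")
    return Verdict(True)

def max_length(limit: int) -> Callable[[str], Verdict]:
    """Build a check that fails when the output exceeds a character limit."""
    def check(text: str) -> Verdict:
        return Verdict(len(text) <= limit, f"output longer than {limit} characters")
    return check

def guarded_call(backends, checks, prompt: str) -> str:
    """Try each backend in order; return the first output that passes every check."""
    for i, backend in enumerate(backends):
        output = backend(prompt)
        failures = [v.reason for v in (check(output) for check in checks) if not v.passed]
        if not failures:
            return output
        print(f"backend {i}: guardrail checks failed ({failures}); rerouting")
    raise RuntimeError("no backend produced an output that passed the guardrails")

# Stubbed backends stand in for real LLM calls.
primary = lambda prompt: "Contact me at jane.doe@example.com for details."
fallback = lambda prompt: "Here is a short, policy-compliant answer."

print(guarded_call([primary, fallback], [no_email_addresses, max_length(200)], "Say hi"))
```

In a gateway setting, the same pattern can also be applied to inputs, where a failed check blocks the request before any model is called.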

Collaboration with guardrails experts: Recognizing the specialized expertise required for steering and evaluating LLM behavior, Portkey is partnering with leading AI guardrails platforms.

  • These partnerships aim to make advanced guardrail capabilities available through the Portkey Gateway.

Availability and implementation: The guardrails feature is now accessible through multiple channels to encourage adoption and experimentation.

  • Users can access guardrails through Portkey’s open-source repository and its hosted application.
  • Detailed documentation and a dedicated plugins folder are available for developers to explore the full potential of the guardrails integration.
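
As an illustration of the plugin idea mentioned above, a guardrail check can be as simple as a function that receives text plus parameters and returns a verdict for the gateway to act on. The function name, signature, and return fields below are assumptions for illustration, not Portkey’s actual plugin interface.

```python
# Hypothetical shape of a guardrail "plugin": text and parameters in,
# a verdict (plus supporting data) out.
from typing import Any, Dict

def word_count_check(text: str, parameters: Dict[str, Any]) -> Dict[str, Any]:
    """Pass only when the text stays under a configured word limit."""
    limit = parameters.get("max_words", 300)
    words = len(text.split())
    return {
        "verdict": words <= limit,                 # the signal the gateway routes on
        "data": {"word_count": words, "max_words": limit},
    }

# A gateway could discover such checks in a plugins folder, run them on each
# request or response, and combine their verdicts to decide what happens next.
print(word_count_check("A short example response from an LLM.", {"max_words": 10}))
```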

Implications for AI development: This advancement represents a significant step in addressing a crucial production gap faced by many companies implementing AI solutions.

  • The integration of guardrails into the Gateway framework could accelerate the adoption of LLMs in production environments by mitigating the risks of unpredictable outputs.
  • It also highlights the importance of collaborative efforts in solving complex challenges in AI development and deployment.

Future outlook: Portkey’s guardrails integration marks an important milestone in the evolution of AI infrastructure, but it’s clear that this is just the beginning of a longer journey.

  • The AI community will need to continue learning, adapting, and collaborating to address emerging challenges in LLM deployment and management.
  • As these technologies evolve, we can expect to see further innovations in controlling and refining AI outputs to meet the diverse needs of production environments.
