‘Arch-Function’ AI models are purpose-built for lightning-fast agentic AI

Revolutionizing enterprise AI with Arch-Function LLMs: Katanemo’s open-source release of Arch-Function large language models (LLMs) promises to significantly accelerate agentic AI applications for complex enterprise workflows.

The big picture: Arch-Function LLMs, built on Qwen 2.5 in 3B- and 7B-parameter versions, offer ultra-fast speeds on the function-calling tasks critical to agentic workflows, reportedly outperforming industry leaders such as OpenAI’s GPT-4 and Anthropic’s models.

  • Katanemo claims Arch-Function models are nearly 12 times faster than GPT-4 while delivering significant cost savings.
  • The open-source release aims to enable super-responsive agents capable of handling domain-specific use cases without excessive costs for businesses.
  • Gartner predicts that by 2028, 33% of enterprise software tools will use agentic AI, up from less than 1% currently, enabling 15% of day-to-day work decisions to be made autonomously.

Key features and capabilities: Arch-Function LLMs are designed to excel at function calls, allowing them to interact with external tools and systems for performing digital tasks and accessing up-to-date information.

  • The models can understand complex function signatures, identify required parameters, and produce accurate function call outputs.
  • This capability enables the execution of various tasks, from API interactions to automated backend workflows, facilitating the development of agentic applications.
  • Arch-Function analyzes prompts, extracts critical information, engages in lightweight conversations to gather missing parameters, and makes API calls, allowing developers to focus on writing business logic.
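The loop described above — parse the model's structured output, check for missing required parameters, then dispatch to business logic — can be sketched in plain Python. Everything here is illustrative: the `get_order_status` tool, its schema, and the model output string are hypothetical stand-ins, not Katanemo's actual API.

```python
import json

# Hypothetical tool registry in the common "function calling" style
# (tool name and schema are invented for illustration).
TOOLS = {
    "get_order_status": {
        "description": "Look up the status of a customer order.",
        "required": ["order_id"],
    }
}

def validate_call(call: dict) -> list:
    """Return any required parameters missing from a model-emitted call."""
    spec = TOOLS[call["name"]]
    return [p for p in spec["required"] if p not in call.get("arguments", {})]

def dispatch(call: dict) -> str:
    """Execute a validated call against stub business logic."""
    if call["name"] == "get_order_status":
        return json.dumps({"order_id": call["arguments"]["order_id"],
                           "status": "shipped"})
    raise ValueError(f"unknown tool: {call['name']}")

# A function-calling model emits structured JSON rather than free text:
model_output = '{"name": "get_order_status", "arguments": {"order_id": "A-1001"}}'
call = json.loads(model_output)

missing = validate_call(call)  # a real agent would ask a follow-up question here
if not missing:
    print(dispatch(call))
```

In this pattern the LLM's only job is producing the structured call; parameter checking and the follow-up conversation for missing arguments sit in thin glue code, which is why the article notes developers can focus on the business logic itself.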

Performance and cost advantages: Katanemo’s Arch-Function LLMs demonstrate significant improvements in both speed and cost-effectiveness compared to leading models in the market.

  • Arch-Function-3B reportedly delivers approximately 12x throughput improvement and 44x cost savings compared to GPT-4.
  • Similar performance gains were observed against GPT-4o and Claude 3.5 Sonnet.
  • These benchmarks were run with the 3B-parameter model hosted on a single Nvidia L40S GPU, a more cost-effective option than the V100 or A100 GPUs typically used for LLM benchmarking.

Potential applications and market impact: The high-throughput performance and low costs of Arch-Function LLMs make them suitable for real-time, production use cases in various industries.

  • Potential applications include processing incoming data for campaign optimization and sending automated emails to clients.
  • The global market for AI agents is expected to grow at a CAGR of nearly 45% to become a $47 billion opportunity by 2030, according to Markets and Markets.
  • Enterprises can leverage these models to build fast, secure, and personalized generative AI applications at scale.

Broader context: Arch-Function is part of Katanemo’s larger ecosystem of AI infrastructure tools, building upon their previous release of the Arch intelligent prompt gateway.

  • Arch, open-sourced a week prior, uses specialized sub-billion parameter LLMs to handle critical tasks related to prompt processing and management.
  • The combination of Arch and Arch-Function aims to provide a comprehensive solution for developers looking to build efficient and secure AI-native applications.

Looking ahead: While Arch-Function LLMs show promise in benchmarks, real-world adoption and case studies will be crucial in determining their impact on the enterprise AI landscape.

  • The open-source nature of these models may accelerate adoption and foster innovation in the agentic AI space.
  • As enterprises increasingly seek to integrate AI into their workflows, solutions like Arch-Function could play a pivotal role in making advanced AI capabilities more accessible and cost-effective.
