
What does it do?

  • Real-Time Information Access
  • Fine-Tuning for Freshness and Factuality
  • Time-Sensitive Queries
  • Detailed Data Requests
  • Technical Explanations

How is it used?

  • Access via REST API: input queries, receive real-time text
  • Access with pplx-api
  • Fine-tuned on web snippets
  • Fast inference with GPUs

Who is it good for?

  • Researchers
  • Data Scientists
  • AI Enthusiasts
  • Business Analysts
  • Software Developers

What does it cost?

  • Pricing model: Unknown

Details & Features

  • Made By: Perplexity AI
  • Released On: 2022-08-27

Perplexity's PPLX Online LLMs are advanced language models that provide accurate, up-to-date responses by leveraging real-time internet data. These models, launched on November 29, 2023, are accessible via the pplx-api and Perplexity Labs.

Key features:
- Real-time access to current internet data, enabling accurate answers to time-sensitive queries
- Fine-tuned for effective use of web snippets, ensuring responses are current and factual
- Fast inference speeds using NVIDIA H100 GPUs, suitable for high-demand applications

How it works:
Users interact with the models through the pplx-api, which supports a RESTful interface for easy integration into various applications. Perplexity Labs provides a playground for experimenting with the models and refining queries. Users generate an API key through their Perplexity account settings to authenticate requests.
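For concreteness, the sketch below shows what such a request might look like from Python using the `requests` library. The `https://api.perplexity.ai/chat/completions` endpoint and the `pplx-7b-online` model name are assumptions drawn from Perplexity's public documentation and may need to be adjusted.

```python
import os

import requests

# Minimal sketch of a pplx-api chat-completions request.
# Endpoint and model name are assumptions and may differ for your account.
API_KEY = os.environ["PPLX_API_KEY"]  # generated in Perplexity account settings

response = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "pplx-7b-online",  # assumed online-model identifier
        "messages": [
            {"role": "user", "content": "What were today's top headlines?"}
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```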

Integrations:
The pplx-api also serves popular open-source models such as Mistral and Llama, giving developers flexibility in model choice. It is designed to be OpenAI client-compatible, so applications already built against OpenAI's client libraries can integrate with minimal changes.
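Assuming that compatibility, an existing OpenAI Python client can typically be pointed at the pplx-api simply by overriding the base URL; the snippet below is a sketch under that assumption.

```python
from openai import OpenAI  # pip install openai

# Sketch: reuse the OpenAI client against the pplx-api
# (base URL and model name are assumptions).
client = OpenAI(
    api_key="YOUR_PPLX_API_KEY",           # key from Perplexity account settings
    base_url="https://api.perplexity.ai",  # assumed OpenAI-compatible base URL
)

completion = client.chat.completions.create(
    model="pplx-70b-online",  # assumed online-model identifier
    messages=[{"role": "user", "content": "Summarize this week's AI news."}],
)
print(completion.choices[0].message.content)
```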

Use of AI:
The PPLX models leverage the robust capabilities of `mistral-7b` and `llama2-70b` base models while enhancing them with real-time data access. Perplexity's proprietary search, indexing, and crawling infrastructure ensures the models have access to the most relevant and up-to-date information.

AI foundation model:
The PPLX models are built on top of `mistral-7b` and `llama2-70b` open-source base models.

How to access:
The PPLX Online LLMs are available as a REST API, making them accessible for web and mobile applications. Perplexity Labs provides a web-based interface for interacting with the models. The models are ideal for developers looking to integrate advanced LLM capabilities into their applications, businesses needing accurate real-time information, and researchers requiring precise and current data.
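For web or mobile front ends that need to display answers as they are generated, responses can likely be streamed in the OpenAI style; the following is a sketch assuming the `stream` parameter behaves as it does in OpenAI's API.

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PPLX_API_KEY",
    base_url="https://api.perplexity.ai",  # assumed OpenAI-compatible base URL
)

# Stream tokens as they arrive, e.g. to push incremental updates to a UI.
stream = client.chat.completions.create(
    model="pplx-7b-online",  # assumed online-model identifier
    messages=[{"role": "user", "content": "What is the current weather in Paris?"}],
    stream=True,  # assumed to work as in OpenAI's streaming API
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```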

  • Supported ecosystems
    Perplexity AI, Apple, iOS, Google, Android, Microsoft

PRICING

Pricing model: Unknown

Alternatives

Sourcely.net simplifies academic research by providing reliable sources based on user input.
Harvey is a generative AI platform that enhances legal workflows with domain-specific models and tools.
WizardLM-13B-V1.2 is an open-source language model that follows complex instructions to provide detailed responses.
AgentGPT is a web-based platform that uses AI to create autonomous agents for tasks like web scraping and trip planning.
Starling-LM-7B-alpha is an open-source language model that provides helpful, harmless conversational AI.
Vicuna-7B-v1.5 is a research-focused chat assistant model fine-tuned from Llama 2 for NLP and AI researchers.
Lumina is an AI research assistant that streamlines finding and digesting scientific literature.