pplx-7b-online
What does it do?
- Real-Time Information Access
- Fine-Tuning for Freshness and Factuality
- Time-Sensitive Queries
- Detailed Data Requests
- Technical Explanations
How is it used?
- Access via REST API: send queries, receive real-time text responses
- 1. Access w/ pplx-api
- 2. Fine-tuned on web snippets
- 3. Fast inference w/ GPUs
Who is it good for?
- Researchers
- Data Scientists
- AI Enthusiasts
- Business Analysts
- Software Developers
What does it cost?
- Pricing model: Unknown
Details & Features
Made By
Perplexity AI
Released On
2022-10-24
Perplexity's PPLX Online LLMs are advanced language models that provide accurate and up-to-date responses by leveraging real-time internet data. These models are designed to overcome common limitations of traditional language models, such as outdated information and inaccuracies, by accessing current online sources.
Key features:
- Real-Time Information Access: PPLX models, including pplx-7b-online and pplx-70b-online, can access and utilize current internet data for time-sensitive queries.
- Fine-Tuning for Freshness and Factuality: Models are fine-tuned to use web snippets effectively, ensuring current and accurate responses.
- High-Performance Infrastructure: Utilizes NVIDIA H100 GPUs for fast inference speeds, suitable for high-demand applications.
- Time-Sensitive Query Handling: Capable of answering questions about recent events with up-to-date information.
- Detailed Data Compilation: Creates tables of statistics and comprehensive lists on specific topics.
- Technical Explanations: Provides precise information on niche and specialized subjects.
How it works:
1. Users access the models via the pplx-api, which supports a RESTful interface.
2. Perplexity Labs provides a playground for experimenting with the models.
3. Users generate an API key through their Perplexity account settings for authentication.
4. The API supports various models, including Mistral 7B, Llama 13B, Code Llama 34B, and Llama 70B.
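The access flow above can be sketched as a minimal REST call. This is a hypothetical example, assuming the pplx-api exposes an OpenAI-style chat completions endpoint at `https://api.perplexity.ai/chat/completions` and reads the API key from a `PPLX_API_KEY` environment variable (both names are illustrative conventions, not confirmed by this listing):

```python
import json
import os
import urllib.request

# Assumed endpoint; verify against the current pplx-api documentation.
API_URL = "https://api.perplexity.ai/chat/completions"


def build_request(question: str, model: str = "pplx-7b-online") -> dict:
    """Build an OpenAI-style chat completion payload for the pplx-api."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Be precise and concise."},
            {"role": "user", "content": question},
        ],
    }


payload = build_request("What happened in the news today?")

# The API key is generated in Perplexity account settings (step 3 above);
# here it is read from an environment variable so the sketch stays runnable.
api_key = os.environ.get("PPLX_API_KEY")
if api_key:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
        print(body["choices"][0]["message"]["content"])
```

The same payload shape works for the other hosted models (e.g. swap `model="pplx-7b-online"` for one of the open-source models listed above).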
Integrations:
Open-source models (Mistral, Llama), OpenAI client-compatible
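Because the API is OpenAI client-compatible, the official `openai` Python package can be pointed at it by overriding the base URL. A minimal sketch, assuming the package is installed (`pip install openai`), that the base URL is `https://api.perplexity.ai`, and that the key lives in a `PPLX_API_KEY` environment variable (names are assumptions, not confirmed by this listing):

```python
import os


def make_pplx_client():
    """Return an OpenAI client wired to the pplx-api (assumed base URL)."""
    from openai import OpenAI  # third-party; deferred so the sketch loads without it

    return OpenAI(
        api_key=os.environ["PPLX_API_KEY"],
        base_url="https://api.perplexity.ai",  # assumed pplx-api base URL
    )


# Only attempt a live call when a key is actually configured.
if os.environ.get("PPLX_API_KEY"):
    client = make_pplx_client()
    resp = client.chat.completions.create(
        model="pplx-7b-online",
        messages=[{"role": "user", "content": "What changed in AI this week?"}],
    )
    print(resp.choices[0].message.content)
```

This is the main practical benefit of OpenAI compatibility: existing code built on the OpenAI client can switch providers by changing only the base URL and key.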
Use of AI:
The PPLX models are built on top of mistral-7b and llama2-70b, enhancing their capabilities with real-time data access. Perplexity's proprietary search, indexing, and crawling infrastructure ensures access to relevant and up-to-date information.
AI foundation model:
The base models for PPLX are mistral-7b and llama2-70b, which are enhanced with Perplexity's in-house search technology.
Target users:
- Developers integrating advanced LLM capabilities into applications
- Businesses requiring accurate, real-time information for various purposes
- Researchers needing precise and current data
How to access:
The models are available via a REST API for web and mobile applications. Perplexity Labs provides a web-based interface for interacting with the models.
Supported ecosystems
Perplexity AI, Apple, iOS, Google, Android, Microsoft