
What does it do?

  • Natural Language Processing
  • Machine Learning
  • Chatbots
  • Research
  • Conversational AI

How is it used?

  • Access via CLI or APIs to generate text from prompts.
    1. Use the FastChat CLI
    2. Access the OpenAI-compatible and Hugging Face APIs
    3. Train with supervised instruction fine-tuning
    4. Evaluate with standard benchmarks

Who is it good for?

  • AI Researchers
  • Natural Language Processing Researchers
  • Chatbot Developers
  • Machine Learning Researchers
  • Language Model Enthusiasts

What does it cost?

  • Pricing model: Open Source

Details & Features

  • Made by: LMSYS
  • Released on: 2023-05-16

Vicuna-13B-v1.5-16k is an advanced chat assistant developed by LMSYS, designed for research in large language models and chatbots. It is fine-tuned from the Llama 2 model using user-shared conversations from ShareGPT.

Key features:
- Auto-regressive language model based on the transformer architecture
- Fine-tuned using supervised instruction fine-tuning and linear RoPE scaling
- Trained on approximately 125,000 conversations collected from ShareGPT.com
- Evaluated using standard benchmarks, human preference, and LLM-as-a-judge methodologies

How it works:
Vicuna-13B-v1.5-16k is an auto-regressive language model that generates text based on the input provided. It leverages the transformer architecture to process and understand the context of the input, allowing it to generate coherent and relevant responses.
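As a concrete illustration of what "auto-regressive" means here, the toy sketch below greedily extends a sequence one token at a time: at each step the model scores every candidate next token given the context so far, appends the best one, and repeats. The hand-written bigram score table is purely hypothetical and stands in for Vicuna's transformer over a real vocabulary.

```python
# Hypothetical bigram "model": next-token scores conditioned on the
# most recent token. A real transformer would condition on the whole context.
BIGRAM_SCORES = {
    "<s>": {"the": 0.9, "a": 0.1},
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"</s>": 1.0},
}

def generate(max_tokens: int = 10) -> list[str]:
    """Greedy auto-regressive decoding until </s> or the token budget."""
    tokens = ["<s>"]
    for _ in range(max_tokens):
        scores = BIGRAM_SCORES.get(tokens[-1], {})
        if not scores:
            break
        # Greedy step: pick the highest-scoring next token.
        next_token = max(scores, key=scores.get)
        if next_token == "</s>":
            break
        tokens.append(next_token)
    return tokens[1:]  # drop the <s> start marker

print(generate())  # -> ['the', 'cat', 'sat']
```

Real decoders usually sample from the score distribution (with temperature, top-p, etc.) rather than always taking the argmax, but the loop structure is the same.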

Integrations:
- Command line interface: FastChat CLI
- APIs: OpenAI-compatible API, Hugging Face API

Use of AI:
Vicuna-13B-v1.5-16k is primarily intended for research on large language models and chatbots. It is designed for researchers and hobbyists in the fields of natural language processing, machine learning, and artificial intelligence.

AI foundation model:
Vicuna-13B-v1.5-16k is fine-tuned from the Llama 2 model, which serves as its foundation.

How to access:
- Command line interface: FastChat CLI (https://github.com/lm-sys/FastChat)
- APIs: OpenAI-compatible API, Hugging Face API (https://github.com/lm-sys/FastChat/tree/main)
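The access paths above can be sketched as shell commands. These follow FastChat's documented entry points (the `fschat` package, its serve modules, and the `lmsys/vicuna-13b-v1.5-16k` Hugging Face model ID); hosts and ports are illustrative, and the hardware requirements of a 13B model still apply.

```shell
# Install FastChat, then chat interactively in the terminal.
# The model weights are downloaded from Hugging Face on first run.
pip3 install "fschat[model_worker]"
python3 -m fastchat.serve.cli --model-path lmsys/vicuna-13b-v1.5-16k

# Alternatively, serve the model behind an OpenAI-compatible REST API
# (three processes: controller, model worker, API server).
python3 -m fastchat.serve.controller &
python3 -m fastchat.serve.model_worker --model-path lmsys/vicuna-13b-v1.5-16k &
python3 -m fastchat.serve.openai_api_server --host localhost --port 8000
```

Once the API server is up, any OpenAI-compatible client can point its base URL at `http://localhost:8000/v1` and query the model like a hosted endpoint.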

The model is available under the Llama 2 Community License Agreement. Detailed training information and evaluation results can be found in the appendix of the research paper (https://arxiv.org/abs/2306.05685).

  • Supported ecosystems
    GitHub, Hugging Face, OpenAI
  • What does it do?
    Natural Language Processing, Machine Learning, Chatbots, Research, Conversational AI
  • Who is it good for?
    AI Researchers, Natural Language Processing Researchers, Chatbot Developers, Machine Learning Researchers, Language Model Enthusiasts

PRICING

Pricing model: Open Source

Alternatives

Sourcely.net simplifies academic research by providing reliable sources based on user input.
Harvey is a generative AI platform that enhances legal workflows with domain-specific models and tools.
WizardLM-13B-V1.2 is an open-source language model that follows complex instructions to provide detailed responses.
AgentGPT is a web-based platform that uses AI to create autonomous agents for tasks like web scraping and trip planning.
Starling-LM-7B-alpha is an open-source language model that provides helpful, harmless conversational AI.
Vicuna-7B-v1.5 is a research-focused chat assistant model fine-tuned from Llama 2 for NLP and AI researchers.
Lumina is an AI research assistant that streamlines finding and digesting scientific literature.