
What does it do?

  • Language Understanding
  • Coding
  • Mathematical Problem-Solving
  • Text-to-Code
  • Step-by-Step Math Solutions

How is it used?

  • Access via APIs or SDKs for text-based input and output.
  • Deploy via web apps.

Who is it good for?

  • Researchers
  • Students
  • Data Scientists
  • Software Developers
  • Mathematicians

What does it cost?

  • Pricing model: Unknown

Details & Features

  • Made By

    Microsoft
  • Released On

    2022-08-27

WizardLM is a suite of large pre-trained language models designed to follow complex instructions across various domains, including general language understanding, coding, and mathematical problem-solving. These models are developed by the WizardLM Team to enhance the capabilities of large language models (LLMs).

Key features:
- WizardLM Models: WizardLM-70B-V1.0 (70 billion parameters) and WizardLM-13B-V1.2 (13 billion parameters) for general language understanding and instruction following.

- WizardCoder Models: WizardCoder-Python-34B-V1.0 (34 billion parameters) and WizardCoder-15B-V1.0 (15 billion parameters), specialized in coding tasks and reported to outperform models such as GPT-4, ChatGPT-3.5, and Claude 2 on coding benchmarks.

- WizardMath Models: WizardMath-70B-V1.0 (70 billion parameters) excels in mathematical problem-solving, surpassing models such as ChatGPT-3.5 and PaLM 2 540B on the GSM8K and MATH benchmarks.

How it works:
Users interact with WizardLM models through text-based inputs. The models handle multi-turn conversations, making them suitable for applications requiring detailed and context-aware responses. WizardCoder models generate code from natural language descriptions, while WizardMath models provide step-by-step solutions to complex mathematical problems.
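The text-in/text-out interaction described above can be sketched with the Hugging Face `transformers` library. This is a minimal sketch, not an official client: the model id (`WizardLM/WizardLM-13B-V1.2`) and the Vicuna-style prompt template are assumptions taken from the public model card, so verify both against the card before relying on them.

```python
# Sketch: prompting a WizardLM checkpoint via Hugging Face transformers.
# The model id and prompt template below are assumptions from the
# WizardLM-13B-V1.2 model card, not guaranteed by this listing.

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Vicuna-style chat template
    that WizardLM-13B-V1.2 is reported to expect."""
    return (
        "A chat between a curious user and an artificial intelligence "
        "assistant. The assistant gives helpful, detailed, and polite "
        "answers to the user's questions. "
        f"USER: {instruction} ASSISTANT:"
    )


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    # Heavy imports kept inside the function so the prompt helper
    # above stays usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "WizardLM/WizardLM-13B-V1.2"  # assumed Hugging Face repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" requires the accelerate package; a 13B model
    # needs a GPU with substantial memory (or quantized weights).
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt")
    inputs = inputs.to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

The same pattern applies to the WizardCoder and WizardMath checkpoints by swapping the model id; only the instruction content changes (a natural-language code request or a math problem).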

Integrations:
- Hugging Face: Models are available on the Hugging Face platform for easy access and deployment.
- GitHub: Source code and model weights are available on GitHub for developers to integrate into their applications.
- Discord: A community on Discord for support and collaboration.

Use of AI:
WizardLM models leverage the Llama 2 architecture, a state-of-the-art foundation model known for its efficiency and performance in various NLP tasks. The models are fine-tuned to follow complex instructions and generate high-quality outputs in their respective domains.

AI foundation model:
WizardLM models are built on the Llama 2 architecture.

How to access:
The models are accessible via APIs and SDKs, and can be deployed in web apps. They are released under different licenses, including OpenRAIL-M and the Llama 2 License, depending on the specific model and use case.

  • Supported ecosystems
    Hugging Face, GitHub, Microsoft
  • What does it do?
    Language Understanding, Coding, Mathematical Problem-Solving, Text-to-Code, Step-by-Step Math Solutions
  • Who is it good for?
    Researchers, Students, Data Scientists, Software Developers, Mathematicians

PRICING

Pricing model: Unknown

Alternatives

Sourcely.net simplifies academic research by providing reliable sources based on user input.
Harvey is a generative AI platform that enhances legal workflows with domain-specific models and tools.
WizardLM-13B-V1.2 is an open-source language model that follows complex instructions to provide detailed responses.
AgentGPT is a web-based platform that uses AI to create autonomous agents for tasks like web scraping and trip planning.
Starling-LM-7B-alpha is an open-source language model that provides helpful, harmless conversational AI.
Vicuna-7B-v1.5 is a research-focused chat assistant model fine-tuned from Llama 2 for NLP and AI researchers.
Lumina is an AI research assistant that streamlines finding and digesting scientific literature.