OpenAI brings fine-tuning to GPT-4o with 1M free tokens per day through Sept. 23

OpenAI introduces fine-tuning for GPT-4o: OpenAI has announced that third-party developers can now fine-tune custom versions of its latest large multimodal model, GPT-4o, tailoring it to specific applications or organizational needs.

Key features and benefits:

  • Fine-tuning lets developers adjust the model’s tone, have it follow specific instructions, and improve its accuracy on technical tasks, even with small training datasets.
  • Developers can access this feature through OpenAI’s fine-tuning dashboard by selecting the gpt-4o-2024-08-06 base model.
  • The company claims strong results can be achieved with as few as a dozen examples in the training data; a minimal sketch of the equivalent API workflow follows this list.
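
For developers working from the API rather than the dashboard, the workflow amounts to uploading a JSONL file of chat-formatted examples and then starting a job against the gpt-4o-2024-08-06 snapshot. The sketch below uses the OpenAI Python SDK; the file name, example contents, and reliance on default hyperparameters are illustrative assumptions rather than details from the announcement.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Training data is a JSONL file of chat-formatted examples, one JSON object per line, e.g.:
# {"messages": [{"role": "system", "content": "You are a terse SQL assistant."},
#               {"role": "user", "content": "Which customers ordered at least twice?"},
#               {"role": "assistant", "content": "SELECT customer_id FROM orders GROUP BY customer_id HAVING COUNT(*) >= 2;"}]}
training_file = client.files.create(
    file=open("train_examples.jsonl", "rb"),  # hypothetical file name
    purpose="fine-tune",
)

# Start a fine-tuning job against the GPT-4o snapshot named in the announcement.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)
print(job.id, job.status)

# After the job completes, job.fine_tuned_model can be passed as the `model`
# argument to the regular chat completions endpoint in place of the base model.
```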

Promotional offer and pricing:

  • OpenAI is offering any third-party organization up to 1 million free training tokens per day for GPT-4o fine-tuning until September 23, 2024.
  • Regular pricing for GPT-4o fine-tuning is $25 per million training tokens, with inference priced at $3.75 per million input tokens and $15 per million output tokens; a rough cost estimate follows this list.
  • For the smaller GPT-4o mini model, 2 million free training tokens are available daily until the same date.
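
To put those rates in perspective, here is a back-of-the-envelope cost calculation. The dataset size, epoch count, and monthly traffic volumes below are assumptions for illustration; only the per-million-token prices come from the announcement.

```python
# Rough cost estimate at the quoted GPT-4o fine-tuning and inference rates.
TRAIN_PER_M = 25.00    # $ per 1M training tokens
INPUT_PER_M = 3.75     # $ per 1M input tokens at inference
OUTPUT_PER_M = 15.00   # $ per 1M output tokens at inference

training_tokens = 500_000 * 3  # assumed: a 500k-token dataset trained for 3 epochs
training_cost = training_tokens / 1_000_000 * TRAIN_PER_M  # $37.50 one-off

monthly_input_m, monthly_output_m = 10, 2  # assumed monthly traffic, in millions of tokens
inference_cost = monthly_input_m * INPUT_PER_M + monthly_output_m * OUTPUT_PER_M  # $67.50 per month

print(f"one-off training cost: ${training_cost:.2f}")
print(f"monthly inference cost: ${inference_cost:.2f}")
```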

Competitive landscape: This move comes as OpenAI faces increased competition from both proprietary and open-source model providers.

  • Google and Anthropic offer competitive pricing for their proprietary models.
  • Open-source models like Nous Research’s Hermes 3, based on Meta’s Llama 3.1, are also entering the market.
  • OpenAI’s advantage lies in its hosted infrastructure, eliminating the need for developers to manage model inference or training on their own servers.

Success stories and benchmarks:

  • Cosine, an AI software engineering firm, achieved a state-of-the-art score of 43.8% on the SWE-bench benchmark with Genie, its fine-tuned GPT-4o-based agent.
  • Distyl, an AI solutions partner, ranked first on the BIRD-SQL benchmark with 71.83% execution accuracy using its fine-tuned GPT-4o model.

Safety and privacy considerations:

  • OpenAI emphasizes that fine-tuned models allow full control over business data, with no risk of inputs or outputs being used to train other models.
  • The company has implemented layered safety mitigations, including automated evaluations and usage monitoring.
  • However, research has shown that fine-tuning can cause models to drift from their original safeguards and degrade overall performance.

OpenAI’s vision and future developments:

  • The company believes that most organizations will eventually develop customized AI models tailored to their specific needs.
  • OpenAI continues to invest in expanding model customization options for developers, describing this release as just the beginning of its efforts in that direction.

Analyzing the implications: While OpenAI’s move to allow fine-tuning of GPT-4o presents exciting opportunities for developers and organizations to create more specialized AI solutions, it also raises questions about the balance between customization and maintaining safety standards. The success of this initiative will likely depend on how well OpenAI can support developers in creating effective fine-tuned models while ensuring they remain aligned with intended use cases and ethical guidelines.
