OpenAI Announces GPT-4o Fine-Tuning for Developers

OpenAI introduces fine-tuning for GPT-4o: The company has announced that third-party developers can now fine-tune custom versions of its latest large multimodal model, GPT-4o, tailoring it to specific applications or organizational needs.

Key features and benefits:

  • Fine-tuning lets developers adjust the model’s tone, make it follow domain-specific instructions, and improve its accuracy on technical tasks, even with small datasets.
  • Developers can access this feature through OpenAI’s fine-tuning dashboard by selecting the gpt-4o-2024-08-06 base model.
  • The company claims strong results can be achieved with as few as a dozen examples in the training data.
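The workflow described above starts with a small training file in OpenAI’s chat-format JSONL, which is then uploaded with gpt-4o-2024-08-06 selected as the base model. A minimal sketch of preparing such a file (the dataset contents, system prompt, and file name here are illustrative assumptions, not details from the announcement):

```python
import json

# Hypothetical examples teaching a terse support-bot tone; per the
# announcement, a dozen or so examples can already produce strong results.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a terse support assistant."},
            {"role": "user", "content": f"How do I reset device model {i}?"},
            {"role": "assistant", "content": f"Hold the reset button on model {i} for 10 seconds."},
        ]
    }
    for i in range(12)
]

# The fine-tuning API expects one JSON object per line (JSONL).
with open("training.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# This file can then be uploaded via the fine-tuning dashboard (or the API),
# with "gpt-4o-2024-08-06" chosen as the base model.
```

Once the job completes, the resulting custom model is called through the same chat completions interface as the base model.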

Promotional offer and pricing:

  • OpenAI is offering up to 1 million free tokens per day for GPT-4o fine-tuning until September 23, 2024, for any third-party organization.
  • Regular pricing for GPT-4o fine-tuning is $25 per million tokens, with inference costs at $3.75 per million input tokens and $15 per million output tokens.
  • For the smaller GPT-4o mini model, 2 million free training tokens are available daily until the same date.
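Given the list prices quoted above, per-job economics are easy to estimate. A small calculator using those rates (the token counts in the example calls are illustrative assumptions):

```python
# GPT-4o fine-tuning rates quoted in the announcement (USD per million tokens).
TRAINING_PER_M = 25.00   # fine-tuning (training) tokens
INPUT_PER_M = 3.75       # inference, input tokens
OUTPUT_PER_M = 15.00     # inference, output tokens

def training_cost(tokens: int) -> float:
    """List-price cost of a fine-tuning run over `tokens` training tokens."""
    return tokens / 1_000_000 * TRAINING_PER_M

def inference_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of serving a fine-tuned GPT-4o model for one workload."""
    return (input_tokens / 1_000_000 * INPUT_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PER_M)

# A 1M-token training run costs $25 at list price -- exactly the amount
# covered by one day of the free-token promotion.
print(training_cost(1_000_000))       # 25.0
print(inference_cost(10_000, 2_000))  # roughly $0.07
```

Note the promotion covers training tokens only; inference on the resulting model is billed at the usual per-token rates.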

Competitive landscape: This move comes as OpenAI faces increased competition from both proprietary and open-source model providers.

  • Google and Anthropic offer competitive pricing for their proprietary models.
  • Open-source models like Nous Research’s Hermes 3, based on Meta’s Llama 3.1, are also entering the market.
  • OpenAI’s advantage lies in its hosted infrastructure, eliminating the need for developers to manage model inference or training on their own servers.

Success stories and benchmarks:

  • Cosine, an AI software engineering firm, achieved state-of-the-art results of 43.8% on the SWE-bench benchmark using their fine-tuned GPT-4o-based agent, Genie.
  • Distyl, an AI solutions partner, ranked first on the BIRD-SQL benchmark with a 71.83% execution accuracy using their fine-tuned GPT-4o model.

Safety and privacy considerations:

  • OpenAI emphasizes that fine-tuned models allow full control over business data, with no risk of inputs or outputs being used to train other models.
  • The company has implemented layered safety mitigations, including automated evaluations and usage monitoring.
  • However, research has shown that fine-tuning can potentially cause models to deviate from their original safeguards and reduce overall performance.

OpenAI’s vision and future developments:

  • The company believes that most organizations will eventually develop customized AI models tailored to their specific needs.
  • OpenAI continues to invest in expanding model customization options for developers, describing this release as just the beginning of its efforts in that direction.

Analyzing the implications: While OpenAI’s move to allow fine-tuning of GPT-4o presents exciting opportunities for developers and organizations to create more specialized AI solutions, it also raises questions about the balance between customization and maintaining safety standards. The success of this initiative will likely depend on how well OpenAI can support developers in creating effective fine-tuned models while ensuring they remain aligned with intended use cases and ethical guidelines.
