Microsoft has made OpenAI’s new open-weight GPT model available on Windows through its AI Foundry platform, marking the first time an OpenAI model can be run locally on Windows. The lightweight gpt-oss-20b model requires at least 16GB of VRAM and is optimized for code execution and tool use, with macOS support coming soon.
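In practice, running a model locally usually means pointing a standard OpenAI-style client at a loopback address instead of the cloud API. The sketch below assumes the local runtime exposes an OpenAI-compatible endpoint on localhost and registers the model under the alias "gpt-oss-20b"; the port, base URL, and alias are illustrative assumptions, not documented specifics of AI Foundry.

    # Hedged sketch: querying a locally hosted gpt-oss-20b.
    # Assumptions (not from the article): the local runtime serves an
    # OpenAI-compatible API at http://localhost:8000/v1 and exposes the
    # model under the alias "gpt-oss-20b".
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # assumed local endpoint; no cloud round-trip
        api_key="not-needed-locally",         # local servers typically ignore the key
    )

    response = client.chat.completions.create(
        model="gpt-oss-20b",  # assumed local model alias
        messages=[
            {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
    )
    print(response.choices[0].message.content)

Because the request never leaves the machine, the same client code works offline once the model weights are downloaded.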
What you should know: The gpt-oss-20b model represents a significant shift from OpenAI’s cloud-only approach, offering a free and open alternative to its hosted models that can run entirely on local hardware.
The big picture: This development adds a new dynamic to the complex relationship between Microsoft and OpenAI, as competitors like Amazon have also moved quickly to offer the open-weight models on their cloud services.
Why this matters: Local AI inference eliminates dependency on internet connectivity and cloud services, enabling businesses to deploy AI solutions in environments with limited bandwidth or strict data privacy requirements.
What’s next: Microsoft’s quick integration suggests the company is positioning Windows as a preferred platform for local AI deployment, building on its recent efforts to add various local AI models to the operating system.