1. Launch of GPT-OSS-120B and GPT-OSS-20B
- OpenAI has launched two new open-weight models, GPT-OSS-120B and GPT-OSS-20B, released under the Apache 2.0 license. Both are highly capable at tasks such as instruction following, tool use, and coding.
2. Power of GPT-OSS-120B
- The GPT-OSS-120B model is on par with more advanced OpenAI models such as o4-mini and can run on a single 80 GB GPU, making it suitable for large-scale tasks.
3. GPT-OSS-20B: Low-cost and fast
- GPT-OSS-20B delivers strong results with as little as 16 GB of memory, making it a good fit for local inference and low-infrastructure or on-device setups (a minimal local-inference sketch follows this item).
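For readers who want to try local inference right away, here is a minimal sketch using the Hugging Face Transformers library. The checkpoint name `openai/gpt-oss-20b` and the generation settings are assumptions about the public release, not details from the announcement itself.

```python
# Minimal local-inference sketch (assumes the Hugging Face Transformers
# library and the "openai/gpt-oss-20b" checkpoint; adjust for your setup).
from transformers import pipeline

# device_map="auto" lets Transformers spread the weights across whatever
# GPU/CPU memory is available on the machine.
generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain what an open-weight model is in two sentences."},
]

# Chat-style input: the pipeline applies the model's chat template for us.
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"])
```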
4. Improved tool use and reasoning
- These models also perform well at tool use, code generation, web search, and chain-of-thought (CoT) reasoning, and notably outperform GPT-4o on benchmarks such as HealthBench.
5. Fully customizable
- These models support advanced features such as structured outputs, few-shot learning, and function calling. Because the weights are open, users can fine-tune and adapt them to their own needs (a function-calling sketch follows this item).
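As a rough illustration of what function calling looks like with these models, the sketch below sends a tool definition through an OpenAI-compatible chat completions endpoint. The base URL, API key, model id, and the `get_weather` tool are all placeholder assumptions; they depend on wherever you serve the model (for example, the Foundry Local setup described in point 8).

```python
# Hedged sketch: function calling via an OpenAI-compatible endpoint.
# base_url, api_key, the model id, and the tool itself are illustrative
# placeholders, not values from the announcement.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-oss-20b",  # assumed model id on the local server
    messages=[{"role": "user", "content": "What is the weather in Delhi right now?"}],
    tools=tools,
)

# If the model chooses to call the tool, the structured call shows up here.
print(response.choices[0].message.tool_calls)
```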
6. Safety is also a priority
- OpenAI has put these models through thorough safety testing. GPT-OSS-120B was released under the company's Preparedness Framework after adversarial testing.
7. GPT-OSS is now on Windows
- Microsoft has announced GPU-optimized builds of GPT-OSS-20B for Windows devices. Developers can now use the model via Foundry Local and the AI Toolkit for VS Code.
8. How to use Foundry Local?
- Install Foundry Local with the WinGet command:
- winget install Microsoft.FoundryLocal
- Run this command in the terminal:
- foundry model run gpt-oss-20b
- Once the model is running, you can start sending your own prompts from the terminal (see the API sketch below for calling it from code).
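If you would rather call the model from code than chat in the terminal, Foundry Local also exposes a local OpenAI-compatible endpoint. The port and model id below are assumptions; check the output of the Foundry Local CLI on your machine for the actual values.

```python
# Hedged sketch: sending a prompt to Foundry Local's OpenAI-compatible API.
# The endpoint URL and model id are assumptions; use the values reported by
# the Foundry Local CLI on your machine.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5273/v1",  # assumed local endpoint
    api_key="not-needed-locally",         # the local service does not need a real key
)

response = client.chat.completions.create(
    model="gpt-oss-20b",  # assumed id as registered in Foundry Local
    messages=[{"role": "user", "content": "Summarize the GPT-OSS launch in one sentence."}],
)

print(response.choices[0].message.content)
```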
9. How to use the AI Toolkit (VS Code)?
- Install Visual Studio Code and the AI Toolkit extension.
- Download GPT-OSS-20B from the Model Catalog.
- Open the Model Playground and start sending prompts from there.
10. What does this mean for users?
- Developers and advanced users can now run powerful AI models locally, without heavy server infrastructure, in a way that is fast, secure, and fully customizable.