HyperAI

OpenAI's New Open Models Mark Strategic Shift Toward Transparency

16 hours ago

OpenAI has released two new open-weight language models, gpt-oss-120b and gpt-oss-20b, its first such release since GPT-2 in 2019. The models, available for free under the Apache 2.0 license, can be downloaded, customized, and run locally, on anything from laptops to cloud systems, making them accessible to developers, researchers, and small companies. The move represents a significant shift for OpenAI, which had long maintained a closed approach to its AI technology, citing safety concerns. CEO Sam Altman acknowledged earlier this year that the company had been “on the wrong side of history” for not releasing open models, signaling a strategic pivot to remain competitive amid rising pressure from open-source rivals.

The 120-billion-parameter version runs on a single high-end Nvidia GPU and performs comparably to OpenAI’s closed o4-mini model, while the 20-billion-parameter version requires only 16GB of memory, matches the performance of o3-mini, and runs on consumer hardware. Both models are text-only and designed for advanced tasks such as reasoning, code generation, web browsing, and agent-based workflows via OpenAI’s existing APIs. They are available on platforms including Hugging Face, GitHub, Azure, AWS, and Databricks, and can be deployed with tools such as LM Studio and Ollama.

OpenAI emphasized that the models underwent rigorous safety testing, including filtering harmful content during training and simulating malicious fine-tuning attempts. The company worked with three external safety firms to evaluate risks related to cybersecurity, biological weapons, and other high-stakes threats, and implemented a “chain of thought” visibility feature to monitor for deception or misuse. Despite being open-weight, meaning the models’ parameters are public, OpenAI has not disclosed its training data, striking a balance between transparency and security. The release follows growing industry momentum toward open AI.
Competitors like Meta (Llama series), Mistral AI, and Chinese startup DeepSeek have gained traction with open-weight models that offer lower costs and greater customization. OpenAI’s new models aim to close that gap, enabling broader innovation while still adhering to its mission of ensuring artificial general intelligence benefits humanity. The company is collaborating with major tech partners, including Nvidia, AMD, Cerebras, Groq, Orange, and Snowflake, to optimize performance across diverse hardware and real-world applications. Nvidia CEO Jensen Huang praised the release as a step forward in open-source AI innovation.

OpenAI’s shift also reflects a broader structural change. After facing criticism over plans to convert into a for-profit entity, the company revised its model: the profit-making arm will remain under the oversight of the nonprofit board, preserving its founding mission. The change addresses concerns raised by co-founder Elon Musk and AI safety advocates.

While OpenAI has not released benchmarks comparing gpt-oss-120b and gpt-oss-20b to competitors like Llama, DeepSeek, or Google’s Gemma, early testing shows strong performance on coding and reasoning tasks, including Humanity’s Last Exam. The company has not committed to a regular release schedule for future open models but hopes the current ones will lower barriers to entry and spark unexpected innovations. The release signals OpenAI’s renewed commitment to open collaboration, balancing accessibility with safety while positioning itself at the forefront of the evolving AI ecosystem.
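For readers curious what the local-deployment path mentioned above looks like in practice: tools like Ollama and LM Studio expose an OpenAI-compatible HTTP chat endpoint for locally hosted models. The sketch below only constructs such a request payload (it does not contact a server); the model tag `gpt-oss:20b` and the localhost endpoint noted in the comments are assumptions about a typical Ollama setup, not details from the article.

```python
import json

def build_chat_request(model: str, prompt: str) -> str:
    """Build an OpenAI-style chat-completion payload as a JSON string.

    A local Ollama server typically accepts this payload at an
    OpenAI-compatible endpoint such as
    http://localhost:11434/v1/chat/completions (assumed setup).
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,  # sampling temperature; tune per task
    }
    return json.dumps(payload)

# Construct (but do not send) a request for the smaller 20B model.
body = build_chat_request(
    "gpt-oss:20b",  # assumed model tag for a local pull of gpt-oss-20b
    "Summarize the Apache 2.0 license in one sentence.",
)
print(json.loads(body)["model"])  # → gpt-oss:20b
```

Because the wire format mirrors OpenAI’s hosted API, code written against this payload shape can usually switch between a local open-weight model and a cloud model by changing only the model name and base URL.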
