Opinion: Open Source Success Drives Openness in the Global AI Race
DeepSeek underscores how open-source models are rapidly catching up to — and in some cases outperforming — their closed-source counterparts
The release of DeepSeek R1, an open-source large language model from Chinese firm DeepSeek, has become the talk of the artificial intelligence (AI) world.
According to DeepSeek, its model’s benchmark performance rivals that of OpenAI’s o1 model, which launched in December, and even surpasses it in certain areas. At the same time, its inference pricing is one-twenty-fifth to one-fiftieth of OpenAI’s rates. This follows the buzz generated by DeepSeek V3, another open-source LLM lauded for its exceptionally low training costs. Such achievements are remarkable for a relatively unknown Chinese AI firm, earning it praise from international peers and leading to a surge of national pride during the recent Lunar New Year holiday.