Chinese lab unveils 'reasoning' AI model to compete with OpenAI's o1
DeepSeek plans to open-source DeepSeek-R1 and release an API.
(Web Desk) - A Chinese laboratory has introduced what seems to be one of the first AI models capable of "reasoning," designed to compete with OpenAI’s o1.
On Wednesday, DeepSeek, an AI research firm backed by quantitative traders, revealed a preview of DeepSeek-R1, which it claims is a reasoning model that competes with o1.
Unlike typical AI models, reasoning models spend more time considering a question or query, effectively fact-checking themselves, which helps them avoid common errors. Like o1, DeepSeek-R1 approaches tasks by reasoning, planning and performing a series of actions to arrive at an answer, sometimes taking several seconds to process complex questions.
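To make the idea concrete, here is a minimal, purely illustrative sketch of a "think before answering" loop. The `generate` helper, the scratchpad, and the loop structure are assumptions made for illustration only; they are not DeepSeek's or OpenAI's actual implementation.

```python
# Conceptual sketch of a generic "reason before answering" loop.
# `generate()` is a hypothetical stand-in for any language-model call.

def generate(prompt: str) -> str:
    """Placeholder for a language-model completion call (hypothetical)."""
    raise NotImplementedError

def answer_with_reasoning(question: str, max_steps: int = 8) -> str:
    """Draft intermediate reasoning steps, then produce a final answer.

    The model spends extra tokens, and therefore extra time, working
    through the problem before committing to an answer, which is the
    basic idea behind the "reasoning" models described above.
    """
    scratchpad = []
    for _ in range(max_steps):
        thought = generate(
            f"Question: {question}\n"
            f"Reasoning so far: {' '.join(scratchpad)}\n"
            "Write the next reasoning step, or 'DONE' if finished:"
        )
        if thought.strip() == "DONE":
            break
        scratchpad.append(thought)
    # Only after the deliberation loop does the model commit to an answer.
    return generate(
        f"Question: {question}\n"
        f"Reasoning: {' '.join(scratchpad)}\n"
        "Final answer:"
    )
```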
DeepSeek asserts that DeepSeek-R1 (specifically the DeepSeek-R1-Lite-Preview version) performs similarly to OpenAI’s o1-preview on two well-known AI benchmarks, AIME and MATH.
AIME evaluates a model’s performance using other AI models, while MATH consists of word problems. However, the model isn’t flawless, with some users on X pointing out that DeepSeek-R1 struggles with games like tic-tac-toe and other logic puzzles, similar to o1.
The model is also vulnerable to jailbreaking, meaning it can be prompted to bypass safeguards. One user managed to make DeepSeek-R1 provide a detailed meth recipe.
On the other hand, DeepSeek-R1 blocks queries on politically sensitive topics, such as questions about Chinese leader Xi Jinping, Tiananmen Square and the implications of a potential Chinese invasion of Taiwan.
This behavior is likely due to government pressure on Chinese AI projects, requiring them to undergo regulator-mandated benchmarking to ensure compliance with "core socialist values."
The rise of reasoning models comes as the theory of "scaling laws," the idea that ever more data and computing power can endlessly improve a model's capabilities, is being questioned.
Reports suggest that AI models from major companies like OpenAI, Google and Anthropic are no longer seeing the same dramatic improvements they once did. This has prompted a search for new approaches to AI, including test-time computation, which gives models extra processing time to complete tasks, a technique seen in both o1 and DeepSeek-R1.
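One simple, widely discussed form of test-time computation is sampling several candidate answers and returning the most common one. The sketch below illustrates that generic idea only; the `sample_answer` helper is hypothetical, and nothing here describes how o1 or DeepSeek-R1 actually allocate extra compute.

```python
# Illustrative sketch: spend more inference-time compute by drawing
# multiple samples and taking a majority vote over the answers.
from collections import Counter

def sample_answer(question: str) -> str:
    """Placeholder for one stochastic model call (hypothetical helper)."""
    raise NotImplementedError

def answer_with_extra_compute(question: str, samples: int = 16) -> str:
    """Draw several candidate answers, then return the one the model
    converges on most often. More samples = more compute = (often)
    better answers, at the cost of longer response times."""
    candidates = [sample_answer(question) for _ in range(samples)]
    most_common, _count = Counter(candidates).most_common(1)[0]
    return most_common
```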
Microsoft CEO Satya Nadella recently mentioned this emerging “scaling law” at the Ignite conference.
DeepSeek plans to open-source DeepSeek-R1 and release an API. The company, supported by High-Flyer Capital Management, a Chinese hedge fund using AI for trading, is an intriguing player in the field.
Its earlier model, DeepSeek-V2, which analyzes text and images, pressured competitors like ByteDance, Baidu, and Alibaba to lower prices on some of their models and make others free.
High-Flyer, which builds its own server clusters for model training, has reportedly invested 1 billion yuan (~$138 million) in a new cluster featuring 10,000 Nvidia A100 GPUs.
Founded by computer science graduate Liang Wenfeng, High-Flyer aims to develop "superintelligent" AI through its DeepSeek division.