With DeepSeek dominating the tech conversation, it's no surprise that its open-source R1 AI model has taken the crown in reasoning tasks, beating OpenAI's o1 in math, science, and coding benchmarks. That success catapulted its app to the top of the free charts in the U.S., displacing ChatGPT as the most downloaded free app. The excitement has also rattled markets: shares of major players like Microsoft, Meta, and NVIDIA took a notable dip in response.
Yann LeCun, Meta's chief AI scientist, attributes DeepSeek's standout performance to its model's open-source framework. The acclaim has come with growing security concerns, however: the company recently limited new user registrations, citing "large-scale malicious attacks." Existing users can still access the service without interruption.
Industry leaders have praised DeepSeek's AI for surpassing proprietary models. Some critics downplay the achievement, however, noting that the software's open-source nature lets anyone access and modify it for free. The app is powered by DeepSeek's open-source V3 model, which reportedly cost around $6 million to train—modest compared with the enormous sums poured into flagship models, whose development has been hampered by a shortage of quality training data.
The wave of enthusiasm for DeepSeek follows the announcement of the $500 billion Stargate Project by OpenAI and SoftBank, which aims to build out AI infrastructure across the U.S. President Donald J. Trump touted the initiative as the largest of its kind, intended to maintain America's technological edge.
While DeepSeek appears aligned with OpenAI's original vision of broad, society-wide benefits from AI, its recent security troubles underscore the risks facing the Chinese startup. They also lend weight to OpenAI CEO Sam Altman's argument that closed-source advanced AI models may offer safety advantages.