This article reflects the author’s insights and is not a formal review.
The artificial intelligence sector is experiencing a rapid surge in investment, much like past technological bubbles, such as Web3, the dot-com boom, and the cryptocurrency craze of 2017.
While the term “bubble” is sometimes associated with market crashes, it does not inherently carry a negative connotation. Instead, it reflects a period of intense growth and speculation, where interest and capital rapidly flow into an emerging technology.
History has shown that when excitement outpaces actual utility, market corrections are natural, but they also leave behind lasting innovations. Amid this wave of AI enthusiasm, DeepSeek has emerged as a noteworthy contender, positioning itself as a direct challenger to OpenAI's ChatGPT. However, the conversation extends beyond chatbots; the evolution of AI agents is already shaping the next stage of development, with DeepSeek pushing boundaries in this domain.
AI boom: a modern gold rush?
AI, particularly in the generative space, has captivated investors, drawing in vast amounts of funding. While the potential is undeniable, there are also cautionary tales from previous technological trends that promised revolution but ultimately fell short due to speculative overreach.
Just as Web3 struggled to move beyond conceptualisation and achieve meaningful adoption, AI might be heading down a similar road. The warning signs are already visible: overinflated company valuations, monopolistic control, and an industry driven more by fear of missing out than by sustainable innovation.
DeepSeek vs. ChatGPT: a new AI face-off
DeepSeek’s rapid rise has not been without controversy. Unlike ChatGPT, which refines responses through OpenAI’s reinforcement learning framework, DeepSeek operates on a system that integrates specialised “agents.” These agents enable structured, adaptable interactions, offering a distinctive approach to AI-generated content.
However, as soon as DeepSeek started gaining traction, an unexpected wave of criticism emerged. Social media was quickly flooded with reviews that followed a suspiciously similar script: beyond introducing DeepSeek as an alternative, they swiftly shifted to casting doubt on its reliability and security. The coordinated nature of these critiques invites speculation about a deliberate effort to curb DeepSeek's momentum.
Additionally, just as DeepSeek made headlines, it was hit with a cyber-attack. While security breaches are not uncommon for high-profile tech platforms, the timing of this incident conveniently reinforced the narrative that DeepSeek was inexperienced in cybersecurity. Critics quickly framed it as proof of DeepSeek's vulnerability, further undermining public confidence.
Who holds the reins? Nvidia, OpenAI, and market control
To understand the scepticism surrounding DeepSeek's success, one must consider the dominant players in AI. Nvidia, whose hardware is essential for training and running AI models, has seen its market position strengthen as AI adoption has surged. Meanwhile, OpenAI, backed by Microsoft, has solidified its status as a leader in generative AI. These companies thrive in a controlled AI landscape, where barriers to entry remain high.
DeepSeek's emergence challenged this arrangement by showing that capable AI models can run locally on modest hardware while maintaining competitive quality. This raised an uncomfortable question: Have leading AI firms exaggerated the need for high-end computing to stifle competition? If effective AI models can run on standard consumer hardware, the industry's reliance on expensive, centralised GPU clusters might be artificially sustained.
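To make the "runs locally" claim concrete, here is a minimal sketch of loading a small, distilled open-weight model with the Hugging Face transformers library. The model identifier is illustrative (it assumes a distilled checkpoint small enough for consumer hardware) and is not a description of any vendor's official deployment path.

```python
# Minimal local-inference sketch using Hugging Face transformers.
# The model name below is an assumed, illustrative identifier for a small
# distilled checkpoint; swap in whatever open-weight model is actually published.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps memory within consumer-GPU limits
    device_map="auto",          # falls back to CPU if no GPU is available
)

prompt = "Explain the difference between training and inference in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In half precision, a model of around 1.5 billion parameters needs roughly 3 GB for its weights, and quantised variants shrink that further, which is what makes inference on ordinary consumer machines plausible.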
That said, while DeepSeek's efficiency in inference is commendable, it is crucial to differentiate between running a model and training one. The computational demands for training an AI system are significantly greater than those for inference. DeepSeek has not fully disclosed its training methodologies, leaving open questions about how its models were developed and the scale of resources required.
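A rough back-of-envelope comparison illustrates the gap. Using the common approximations of about 6 FLOPs per parameter per training token and about 2 FLOPs per parameter per generated token at inference, the figures below are illustrative orders of magnitude only, not DeepSeek's actual numbers.

```python
# Back-of-envelope compute comparison (orders of magnitude only).
# Common approximations: training ~ 6 * params * training_tokens FLOPs,
# inference ~ 2 * params * generated_tokens FLOPs per request.
params = 7e9              # assumed 7B-parameter model
training_tokens = 2e12    # assumed 2T-token training corpus
tokens_per_query = 500    # assumed average response length

training_flops = 6 * params * training_tokens
inference_flops_per_query = 2 * params * tokens_per_query

print(f"Training:  ~{training_flops:.1e} FLOPs (one-off)")
print(f"Inference: ~{inference_flops_per_query:.1e} FLOPs per query")
print(f"Ratio:     ~{training_flops / inference_flops_per_query:.1e}x")
```

Even under these generous assumptions, a single training run costs on the order of ten billion times the compute of serving one query, which is why efficient inference alone says little about how a model was trained.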
Data controversy
DeepSeek has also faced accusations regarding the unauthorised use of external training data, particularly allegations that it leveraged OpenAI's proprietary models without permission. If true, these allegations would raise serious ethical and legal concerns.
Training an AI model, even when utilising external sources, is an immense technical challenge. Simply acquiring data is not enough. It must be structured, preprocessed, and aligned with the model’s learning objectives. Fine-tuning a large-scale model involves optimising billions of parameters, balancing efficiency with response accuracy, and ensuring contextual relevance. This process requires significant engineering expertise and computational power.
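As an illustration of what structuring data and optimising parameters involves in practice, here is a minimal parameter-efficient fine-tuning sketch using the Hugging Face transformers, datasets, and peft libraries. The base model, toy dataset, and hyperparameters are placeholders; this is a generic sketch of the technique, not a reconstruction of DeepSeek's actual pipeline.

```python
# Parameter-efficient fine-tuning sketch (LoRA). Placeholders throughout;
# not a description of any vendor's actual training setup.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "gpt2"  # stand-in base model; any causal LM works the same way

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Step 1: structure and preprocess raw data into text the model can learn from.
raw = Dataset.from_dict({"text": [
    "Q: What is inference?\nA: Running a trained model to produce outputs.",
    "Q: What is training?\nA: Adjusting model parameters to fit data.",
]})
tokenized = raw.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
                    remove_columns=["text"])

# Step 2: attach low-rank adapters so only a small fraction of weights is updated.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Step 3: run a standard causal-LM fine-tuning loop.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Even this toy version hides the hard parts: curating and deduplicating data at scale, choosing objectives, and evaluating the result, which is where most of the engineering effort described above actually goes.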
Even if DeepSeek had leveraged OpenAI’s proprietary data, the fact that it successfully built a functional and competitive model around it is remarkable. It highlights the technical proficiency required to construct an independent AI system, rather than simply mirroring or copying existing models. Whether DeepSeek’s approach was entirely ethical or not, its ability to develop a model capable of competing with ChatGPT suggests a high level of AI research and engineering competence.
DeepSeek has not provided a comprehensive breakdown of its data sources. While AI development remains a grey area regarding dataset use, it is essential to distinguish between utilising publicly accessible data and outright intellectual property infringement. Many AI leaders have built models on large-scale web-scraped datasets, making it difficult to single out one entity as uniquely responsible for such practices.
The API: pragmatism vs. accusations
Another contentious topic is DeepSeek's adoption of an OpenAI-compatible API format. Some have accused the platform of "stealing" by piggybacking on OpenAI's interface. However, this perspective overlooks a critical reality: adopting an existing API format is not theft; rather, it is a strategic decision. Rather than designing an entirely new interface from the ground up, DeepSeek builds on a well-established convention, focusing its innovation on the areas that differentiate its product.
Employing a widely recognised API format is advantageous from a user experience standpoint. Familiarity with request and response patterns enhances adoption, while the open-source model allows for community-driven improvements, audits, and forks. Instead of diminishing credibility, this approach fosters greater transparency and collaboration.
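For a sense of why this compatibility matters, the sketch below points the standard openai Python client at a DeepSeek-style endpoint simply by overriding the base URL. The endpoint, environment variable, and model name are illustrative assumptions and should be checked against the provider's current documentation.

```python
# Illustrative use of an OpenAI-compatible endpoint: only the base URL,
# API key, and model name change; the client code stays the same.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",     # assumed endpoint; verify against current docs
    api_key=os.environ["DEEPSEEK_API_KEY"],  # hypothetical environment variable
)

response = client.chat.completions.create(
    model="deepseek-chat",  # illustrative model name
    messages=[{"role": "user",
               "content": "Summarise the AI bubble debate in two sentences."}],
)
print(response.choices[0].message.content)
```

Because the request and response shapes match what developers already use with ChatGPT, switching providers becomes a configuration change rather than a rewrite, which is exactly the adoption advantage described above.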
AI bubble: a matter of time?
If history has shown us anything, it is that speculative bubbles are not sustainable. The dot-com bubble burst when overvalued technology companies failed to justify their market positions. Web3 and NFTs lost traction when their real-world applications failed to meet expectations. Despite its transformational potential, AI is exhibiting similar warning signs of overinflation.
DeepSeek's competition with ChatGPT is one aspect of a much wider narrative. Massive investments, strategic monopolies, and carefully crafted media narratives have propped up the AI business. Eventually, economic reality will force a correction. The only question is: when the dust settles, which entities will still be standing?