DeepThink (R1), the open-source model from DeepSeek, a little-known company in China, sent shock waves across the technology world. It’s amazing. Yes, it posts benchmark results comparable to other state-of-the-art models. Yes, it’s partially open source. Yes, the DeepSeek app explains its reasoning by default. But this important AI development has far-reaching implications, especially for privacy, security, and geopolitical barriers.

The Cost Barrier To Training Models Efficiently Just Plummeted

What’s disruptive and truly amazing is how the DeepSeek engineers created the DeepThink (R1) model, especially the cost to train it. Due to clever optimizations, the model purportedly cost around $5.5 million to train. That’s tens of millions of dollars less than comparable models. We expect these optimizations to be copied and improved upon by model builders worldwide. Short term, that is bad news for NVIDIA because it will temper demand for its chips. Longer term, however, the lower cost (and, thus, lower energy consumption) will open up model creation opportunities for many, many more startups and enterprises alike, thereby increasing demand. This validates the view that vendors offering only core AI foundation models won’t be enough, and this disruptive shift will open up the AI model market even further. For tech leaders, this should be a strong signal to closely examine overreliance on a few big players in the AI space.

Also, don’t forget that while the cost to train a model has just declined significantly, inferencing at scale will still require significant compute (and storage). Don’t cry for NVIDIA and the hyperscalers just yet. There might also be an opportunity for Intel to claw its way back to relevance. Intel ceded dominance of high-end computing to NVIDIA, but the company has long bet that tech leaders will want to embed AI everywhere, from the PC to the edge to the data center to the cloud, and that there will be strong demand for smaller, targeted large language models (LLMs) — a portfolio of chips at the appropriate price points might just pay off.

Edge Computing And Intelligence Are No Longer Aspirations — They’re Here

The DeepSeek app already has millions of downloads on mobile app stores. The app connects to and uses the full model in the cloud. Another cool way to use DeepSeek, however, is to download a smaller, distilled version of the model and run it locally on a laptop. Several Forrester analysts have done exactly that: It’s a bit slow, but it runs. This means that these models can spread far and wide without the need for specialized hardware, which will dramatically accelerate edge computing.
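For readers who want to try this themselves, here is a minimal sketch of how a locally hosted model might be queried from Python. It assumes you have already pulled a distilled R1 variant with a local model runner such as Ollama, which exposes an HTTP API on localhost; the endpoint URL and model tag below are assumptions that will vary with your setup.

```python
# Sketch: querying a locally hosted DeepSeek R1 distillation through a
# local runner's HTTP API (Ollama's /api/generate endpoint is assumed here).
# The URL and model tag are illustrative -- adjust them to your environment.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed local endpoint
MODEL_TAG = "deepseek-r1:7b"  # assumed distilled-model tag


def build_request(prompt: str) -> dict:
    """Build the JSON payload for a single, non-streaming completion."""
    return {"model": MODEL_TAG, "prompt": prompt, "stream": False}


def ask(prompt: str) -> str:
    """POST the prompt to the local model and return its response text."""
    data = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Inspect the payload without calling the (possibly absent) local server.
    print(build_request("Explain edge computing in one sentence.")["model"])
```

Because everything stays on the device (or the local network), no prompt data leaves the machine — one reason local inference matters for the privacy concerns discussed below.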

Edge computing processes data closer to its source, reducing latency and bandwidth usage. This helps firms anticipate customer needs, act on their behalf, and operate businesses efficiently in localized contexts, including internet-of-things-enabled scenarios. The ability to run LLMs on laptops and edge devices amplifies these benefits by providing powerful AI capabilities directly at the edge.

Based on what we’ve seen so far from DeepSeek R1, it can process and analyze vast amounts of data in real time, enabling more responsive and intelligent edge devices. This capability is particularly valuable in scenarios where immediate decision-making is critical, such as in autonomous vehicles, industrial automation, and smart cities. By leveraging LLMs at the edge, enterprises can achieve faster data processing, improved accuracy in predictions, and enhanced user experiences, all strategic goals of AIOps initiatives.

Geopolitical, Privacy, And Security Barriers Remain

The massive download numbers for DeepSeek mean that thousands (and even millions) of users are experimenting with the app and uploading what could be sensitive information into it. This may include enterprise data, especially from developers experimenting with the technology. DeepSeek’s privacy policy explicitly says that it can collect “your text or audio input, prompt, uploaded files, feedback, chat history, or other content” and use it for training purposes. It also states that it can share this information with law enforcement agencies, public authorities, etc., at its discretion. Educate and inform your employees about the ramifications of using this technology and of inputting personal and company information into it. Align with product leaders on whether developers should be experimenting with it and on whether the product should support its implementation without stricter privacy requirements.

There Is No Excuse Not To Pursue AI Innovation (And ROI) Anymore

DeepSeek is not just “China’s ChatGPT”; it is a giant leap for global AI innovation, because by reducing the cost, time, and energy needed to build models, it lets many more researchers and developers experiment, innovate, and test new ideas. Having said that, one should not assume that LLMs are the only path to more sophisticated AI. It may be that a new model architecture brings us right back to needing gobs of compute, especially for artificial general intelligence. But for the time being, DeepSeek’s release of this model and of the techniques it used to create it should be a celebratory moment for AI. Now is not the time to scale back on AI prematurely.