DeepSeek Unveils Cost-Effective AI Model, Challenging OpenAI with Open-Source Innovation
September 29, 2025
DeepSeek has launched its latest experimental AI model, DeepSeek-V3.2-Exp, built on the V3.1-Terminus architecture and incorporating DeepSeek Sparse Attention (DSA) to speed up training and inference on long-context tasks.
The new sparse attention mechanism allows the model to perform nearly as well as previous versions while significantly reducing resource consumption, making it more cost-effective for large-scale deployment.
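DeepSeek's production DSA kernels are not detailed in this digest, but a minimal, purely conceptual sketch of top-k sparse attention, in which each query attends only to its highest-scoring keys rather than the full sequence, illustrates the general idea behind the efficiency gains:

```python
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k=64):
    """Conceptual top-k sparse attention: each query attends only to its
    top_k highest-scoring keys instead of the full sequence.
    Shapes: q, k, v are (batch, seq_len, head_dim). Illustrative only --
    not DeepSeek's actual DSA implementation."""
    scale = q.shape[-1] ** -0.5
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale   # (B, Sq, Sk)

    # Keep only the top_k scores per query; mask everything else to -inf.
    top_k = min(top_k, scores.shape[-1])
    topk_vals, topk_idx = scores.topk(top_k, dim=-1)
    mask = torch.full_like(scores, float("-inf"))
    mask.scatter_(-1, topk_idx, topk_vals)

    weights = F.softmax(mask, dim=-1)                        # sparse attention weights
    return torch.matmul(weights, v)

# Toy usage
q = torch.randn(1, 128, 64)
k = torch.randn(1, 128, 64)
v = torch.randn(1, 128, 64)
out = topk_sparse_attention(q, k, v, top_k=16)
print(out.shape)  # torch.Size([1, 128, 64])
```

Restricting each query to a small, fixed number of keys is what lets attention cost grow far more slowly with context length than the usual quadratic full-attention pattern.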
This release marks an important step toward a next-generation architecture, following the success of earlier versions V3 and R1, and aims to improve efficiency and scalability.
Open access and affordability of DeepSeek's models could disrupt the competitive landscape, challenging major players like OpenAI and Anthropic, and empowering smaller businesses to deploy advanced AI solutions.
The model's deployment coincides with a booming AI market projected to reach $390 billion in 2025, positioning DeepSeek to compete with industry giants through its open-source approach and cost advantages.
DeepSeek has cut its API pricing by more than 50%, putting its rates at least 50% below those of competing models such as GPT-4 and Claude 3.5, thanks to gains in inference speed and memory efficiency and lower overall training costs.
The model is available for deployment via Hugging Face, Docker, and vLLM, with recommended hardware configurations built around multiple H100 GPUs, making it suitable for both research and enterprise applications.
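As a sketch of self-hosted use, queries can go through vLLM's OpenAI-compatible endpoint. This assumes a vLLM server is already running locally and that the Hugging Face repo id is deepseek-ai/DeepSeek-V3.2-Exp; verify the exact id and launch options against the release notes before relying on them.

```python
# Querying a self-hosted DeepSeek-V3.2-Exp instance through vLLM's
# OpenAI-compatible server (assumed to be running on localhost:8000,
# e.g. started with: vllm serve deepseek-ai/DeepSeek-V3.2-Exp --tensor-parallel-size 8).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3.2-Exp",
    messages=[{"role": "user", "content": "Summarize sparse attention in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```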
API pricing is tiered by context-cache hits, with cached input tokens billed at a steep discount; in high cache-hit scenarios this yields 70-80% cost reductions, further lowering barriers to adoption in sectors like finance and healthcare.
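To see how cache hits translate into savings, here is a back-of-the-envelope calculation with illustrative placeholder rates (not DeepSeek's published prices); with cached input roughly an order of magnitude cheaper than uncached input, an 85% hit rate lands in the quoted 70-80% range:

```python
def blended_input_cost(tokens_millions, hit_rate,
                       price_hit=0.03, price_miss=0.30):
    """Blended input cost (USD) under a cache-hit pricing scheme.

    price_hit / price_miss are illustrative per-million-token rates,
    NOT DeepSeek's published prices; the ~10x gap between them is what
    drives the large savings at high cache-hit rates."""
    return tokens_millions * (hit_rate * price_hit + (1 - hit_rate) * price_miss)

full_price = blended_input_cost(100, hit_rate=0.0)    # no caching
high_hits  = blended_input_cost(100, hit_rate=0.85)   # chat/RAG-style prompt reuse
print(f"no cache: ${full_price:.2f}, 85% hits: ${high_hits:.2f}, "
      f"savings: {100 * (1 - high_hits / full_price):.0f}%")
```

Workloads that repeatedly resend long shared prefixes, such as chat histories or retrieval contexts, are the ones that reach the high hit rates needed for savings of this size.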
DeepSeek is supporting the open-source ecosystem with high-performance CUDA kernels and research kernels for its sparse attention mechanism, released under the MIT license, encouraging community participation and commercial use.
DeepSeek’s focus on efficiency and open-source development is part of a broader strategy to stay competitive amid a fierce AI price war in China, where rivals like Z.ai and Alibaba are aggressively undercutting prices.
Previous versions V3 and R1 demonstrated performance comparable to OpenAI and Google models at lower costs, establishing DeepSeek as a serious contender in the AI industry.
Future plans include architectural improvements, multimodal capabilities, and next-generation models like V4, with community feedback playing a vital role in ongoing development.
Summary based on 12 sources
Sources

TechCrunch • Sep 29, 2025
DeepSeek releases ‘sparse attention’ model that cuts API costs in half
South China Morning Post • Sep 29, 2025
China’s DeepSeek unveils experimental version of its AI foundation model