DEEPX

As a pioneer in on-device AI, DEEPX designs high-performance AI semiconductors that maximize efficiency, minimize power usage, and lower costs across a wide range of applications. Built for seamless integration, DEEPX’s advanced chips bring powerful, efficient, and low-cost AI capabilities to any device.
DEEPX delivers reliable on-device AI through:
- Thermal Efficiency: Their AI chip maintains high performance while operating between −40°C and 85°C, ensuring reliability in industrial environments.
- AI Accuracy: It delivers GPU-level AI accuracy using energy-efficient INT8 precision, crucial for reliable autonomous decision-making in on-device systems.
- Performance Efficiency: DEEPX’s solution achieves industry-leading AI inference speed per watt, outperforming competitors by more than 2x in real-world conditions.
- Total Cost of Ownership: With up to 94% savings in electricity costs over five years, DEEPX enables more intelligent and cost-effective AI deployments.
Target applications include:
- Industrial Automation
- Vision Systems
- AIoT
- Retail
- Robots
- Surveillance Systems
AI Chips
Product Name | Type | AI Performance | Power Consumption |
---|---|---|---|
DX-M1 | AI Accelerator | 25 TOPS | 3–5 W |
DX-M1M (Q4 2025) | AI Accelerator | TBU | TBU |
DX-V3 (Q4 2025) | AI Vision SoC | 13 TOPS | TBU |
DX-M2 (Coming Soon) | GenAI Accelerator | TBU | TBU |
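The performance-per-watt claim earlier in this page can be sanity-checked against the DX-M1 row above. A quick calculation, using only the figures from the table:

```python
# DX-M1 from the table above: 25 TOPS at 3-5 W.
tops = 25.0
power_min_w, power_max_w = 3.0, 5.0

tops_per_watt_best = tops / power_min_w   # at the low end of the power range
tops_per_watt_worst = tops / power_max_w  # at the high end

print(f"DX-M1 efficiency: {tops_per_watt_worst:.1f}-{tops_per_watt_best:.1f} TOPS/W")
# prints: DX-M1 efficiency: 5.0-8.3 TOPS/W
```

So the chip sits roughly in the 5–8 TOPS/W range depending on workload, which is the basis for the "inference speed per watt" positioning.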
AI Modules
Product Name | Form Factor | AI Performance | Power Consumption |
---|---|---|---|
DX-M1 M.2 LPDDR5x2 | M.2 M Key (22 × 80 mm) | 25 TOPS | 3–5 W |
DX-M1M M.2 2242 (Q4 2025) | M.2 B+M Key (22 × 42 mm) | TBU | |
DX-H1 PCIe Card Quattro | PCIe Card (167 × 66.4 mm) | 100 TOPS | 20 W |
DX-H1 PCIe Card Dual V-NPU (Q4 2025) | Low-Profile Slim PCIe Card (167 × 66.4 mm) | 50 TOPS | |
DX-V3 IPCam DX-Cam (Coming Soon) | TBU | 13 TOPS | |
Introducing DXNN: Your Gateway to AI Excellence
DXNN offers a comprehensive software ecosystem, meticulously designed for DEEPX AI SoCs, featuring:
- IQ8™ (Intelligent Quantization Integer 8)
IQ8 is DEEPX’s intelligent quantization technology based on 8-bit integers (INT8). Compared with GPU-based solutions using 32-bit floating point (FP32), IQ8 matches or even exceeds their accuracy.
- DX-COM (NPU Compiler)
At the core of model optimization, DX-COM includes a high-performance quantizer for maximum accuracy and efficiency, ensuring your models are precisely tuned for optimal NPU inference.
- DX-RT (NPU Runtime System Software)
This robust suite provides an API-enabled runtime, dedicated NPU device drivers, and advanced NPU firmware, ensuring seamless operation.
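IQ8 itself is proprietary, but the general idea behind 8-bit integer quantization can be sketched. The snippet below shows the standard symmetric per-tensor scheme (map FP32 values onto the int8 range via a scale factor); it illustrates the technique in general, not DEEPX's actual algorithm:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization: map FP32 values into [-127, 127]."""
    scale = np.abs(x).max() / 127.0          # one scale factor for the whole tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation of the original tensor."""
    return q.astype(np.float32) * scale

x = np.random.randn(1000).astype(np.float32)  # stand-in for a weight tensor
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
print("max abs error:", np.abs(x - x_hat).max())  # bounded by scale / 2
```

The worst-case rounding error is half the scale step, which is why well-calibrated int8 models can stay close to FP32 accuracy while using a quarter of the memory and much cheaper integer arithmetic.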
Designed to integrate effortlessly with a wide range of DNN models from major AI frameworks such as TensorFlow, PyTorch, and Caffe, DXNN is your conduit to the forefront of deep learning technology.
For more information: