Tensor Computing Processor BM1684

SOPHON BM1684 is the third-generation tensor processor launched by BITMAIN for deep learning, with performance improved by a factor of six over the previous generation.

Peak performance

17.6 TOPS (INT8)

Video decoding

38-channel hardware decoding

On-chip SRAM capacity


Arithmetic Unit


Supports INT8 and FP32 precision, greatly improving computing power

Integrated high-performance ARM core, supporting secondary development

Integrated video and image decoding and encoding capabilities

Supports PCIe and Ethernet interfaces

Supports TensorFlow, Caffe, and other mainstream frameworks
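To illustrate the INT8 precision listed above: INT8 inference typically quantizes FP32 tensors to 8-bit integer codes with a per-tensor scale chosen during calibration. A minimal, hardware-agnostic sketch (not BM1684-specific; the function names are illustrative, not part of any SDK):

```python
def quantize_int8(values, scale):
    """Symmetric INT8 quantization: map floats to integer codes in [-127, 127]."""
    codes = []
    for v in values:
        q = round(v / scale)                 # nearest integer code
        codes.append(max(-127, min(127, q)))  # clamp to the INT8 range
    return codes

def dequantize_int8(codes, scale):
    """Recover approximate float values from the INT8 codes."""
    return [q * scale for q in codes]

# Pick the scale from the tensor's dynamic range (max-abs calibration).
activations = [-1.0, -0.5, 0.0, 0.5, 1.0]
scale = max(abs(v) for v in activations) / 127.0

codes = quantize_int8(activations, scale)
approx = dequantize_int8(codes, scale)
```

Computing in 8-bit codes instead of 32-bit floats is what lets a tensor processor pack far more multiply-accumulate units into the same silicon, at the cost of the small rounding error visible in `approx`.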

Wide range of applications and scenarios

The edge computing AI chip BM1684 can be used in artificial intelligence, machine vision, and high-performance computing environments.

Easy to use, with an efficient full stack

BMNNSDK (BITMAIN Neural Network SDK) is a one-stop toolkit that provides the underlying driver environment, compiler, inference deployment tool, and other software components. It covers model optimization, efficient runtime support, and the other capabilities required for the neural network inference stage, offering an easy-to-use, efficient full-stack solution for developing and deploying deep learning applications. BMNNSDK minimizes the development cycle and cost of algorithms and software, so users can quickly deploy deep learning algorithms on the various AI hardware products of Fortune Group to enable intelligent applications.
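The flow described above — optimize a trained framework model offline with the compiler, then execute the result through the runtime — can be sketched abstractly. All names below (`CompiledModel`, `compile_model`, `run`) are hypothetical placeholders for illustration, not the actual BMNNSDK API:

```python
from dataclasses import dataclass

# Hypothetical placeholder type -- not the real BMNNSDK API.
@dataclass
class CompiledModel:
    name: str
    precision: str  # e.g. "INT8" or "FP32"
    ops: list       # simplified graph: a list of (op_name, param) pairs

def compile_model(ops, precision="INT8"):
    """Offline step: optimize the graph and fix the target precision.
    'Optimization' here is just dropping no-op nodes, for illustration."""
    optimized = [(name, p) for name, p in ops if name != "identity"]
    return CompiledModel(name="demo", precision=precision, ops=optimized)

def run(model, x):
    """Runtime step: execute the compiled graph on one input value."""
    for name, p in model.ops:
        if name == "scale":
            x = x * p
        elif name == "add":
            x = x + p
    return x

# Usage: a toy graph is compiled once, then run many times.
graph = [("scale", 2.0), ("identity", None), ("add", 1.0)]
model = compile_model(graph)
print(run(model, 3.0))  # 3.0 * 2.0 + 1.0 = 7.0
```

The design point this mirrors is the split between an offline compile stage (run once per model on a host machine) and a lightweight runtime stage (run per inference on the device), which is the typical deployment pattern for edge AI accelerators.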

Supports mainstream programming frameworks