The SOPHON BM1680 is BITMAIN's first tensor processor for deep learning, suitable for both training and inference of neural network models such as CNNs, RNNs, and DNNs.
Key specifications: peak performance, data precision, on-chip SRAM capacity, average power consumption
Tensor computing acceleration
Architecture optimized for deep learning
Flexible and customizable product form
Optimized instruction set and software stack
The BM1680 edge-computing AI chip can be used in artificial intelligence, machine vision, and high-performance computing environments.
BMNNSDK (BITMAIN Neural Network SDK) is a one-stop toolkit that provides the underlying driver environment, a compiler, inference deployment tools, and other software components. It covers the capabilities required for the neural network inference stage, including model optimization and efficient runtime support, offering an easy-to-use, full-stack solution for developing and deploying deep learning applications. BMNNSDK minimizes the development cycle and cost of algorithms and software, so users can quickly deploy deep learning algorithms on BITMAIN's AI hardware products and build intelligent applications.
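The toolkit's workflow follows a common pattern: optimize the model, compile it for the target, then run it through a runtime engine. This document does not show the actual BMNNSDK API, so the Python sketch below illustrates that generic compile-then-deploy pipeline only; every name in it (`Model`, `optimize_model`, `compile_model`, `InferenceEngine`) is invented for illustration and is not a BMNNSDK call.

```python
# Hypothetical sketch of a compile-then-deploy inference workflow,
# modeled on the stages the SDK description mentions (model optimization,
# compilation, runtime inference). No names here are real BMNNSDK APIs.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Model:
    """A toy 'model': an ordered list of layer functions."""
    layers: List[Callable[[List[float]], List[float]]] = field(default_factory=list)


def optimize_model(model: Model) -> Model:
    """Stand-in for graph-level optimization.

    Returns the model unchanged; a real optimizer would fuse ops,
    fold constants, and quantize weights.
    """
    return model


def compile_model(model: Model) -> Callable[[List[float]], List[float]]:
    """Stand-in for the compiler: lowers the model to a single callable."""
    def compiled(x: List[float]) -> List[float]:
        for layer in model.layers:
            x = layer(x)
        return x
    return compiled


class InferenceEngine:
    """Stand-in for the runtime that hosts a compiled model on a device."""
    def __init__(self, compiled: Callable[[List[float]], List[float]]):
        self._compiled = compiled

    def run(self, x: List[float]) -> List[float]:
        return self._compiled(x)


# Usage: a two-layer toy model (scale, then offset).
model = Model(layers=[
    lambda x: [2.0 * v for v in x],   # "layer 1": multiply by 2
    lambda x: [v + 1.0 for v in x],   # "layer 2": add 1
])
engine = InferenceEngine(compile_model(optimize_model(model)))
print(engine.run([1.0, 2.0, 3.0]))  # [3.0, 5.0, 7.0]
```

The point of the sketch is the separation of stages: optimization and compilation happen once offline, and the deployed application only touches the runtime engine.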