When ONNC meets NVDLA — the light two open-source projects cast on each other when they meet
ONNC (Open Neural Network Compiler) is a compilation framework designed specifically for proprietary deep learning accelerators. Its…
Apr 8, 2021
ONNC Runtime Incorporated with Intel MKL Library
Summary: In this article, we describe how we leveraged Intel® Math Kernel Library (Intel® MKL) to significantly improve ONNC runtime…
Oct 7, 2020
ONNC Quantization to INT8 Experiment
Some hardware modules inside NVDLA change the precision of the prediction results. If a calibrator doesn’t consider hardware architectural…
Nov 18, 2019
Porting ONNC to Proprietary DLA is a Breeze
NOTE: The feature described below is scheduled to be available in version 1.0.0.
Aug 17, 2018
Liveness Analysis Helps Save Gigabytes of Memory Usage for AI Inference
Memory allocation is an essential step in the traditional compiler and in the neural network (NN) compiler as well. Each variable of…
Jul 20, 2018
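The liveness idea this post teases can be sketched roughly as follows — this is a hypothetical illustration, not ONNC's actual allocator: each tensor is live from the op that defines it to the last op that uses it, and tensors whose live intervals never overlap can share one buffer.

```python
# Hypothetical sketch of liveness-based tensor memory reuse (not ONNC's code).

def live_intervals(schedule):
    """schedule: list of (op_name, defined_tensor, used_tensor_names).
    Returns {tensor: (first_def_index, last_use_index)}."""
    first, last = {}, {}
    for idx, (_, defined, used) in enumerate(schedule):
        first[defined] = idx
        last[defined] = idx          # live at least until its definition
        for name in used:
            last[name] = idx         # extend to the latest use
    return {name: (first[name], last[name]) for name in first}

def assign_offsets(intervals, sizes):
    """Greedy scan in definition order: reuse a region once its previous
    occupant's interval has ended; otherwise grow the arena."""
    free, offsets, top = [], {}, 0   # free: list of (offset, size, free_at)
    for name, (start, end) in sorted(intervals.items(), key=lambda kv: kv[1][0]):
        slot = next((i for i, (_, sz, free_at) in enumerate(free)
                     if free_at < start and sz >= sizes[name]), None)
        if slot is not None:
            off, sz, _ = free.pop(slot)
            offsets[name] = off
            free.append((off, sz, end))
        else:
            offsets[name] = top
            free.append((top, sizes[name], end))
            top += sizes[name]
    return offsets, top              # top = total arena bytes needed
```

For a three-op chain `a -> b -> c` where each tensor needs 100 bytes, `a` and `c` have disjoint lifetimes and share an offset, so the arena is 200 bytes instead of 300.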
Open Neural Network Compiler (ONNC)
The Open Neural Network Compiler (ONNC) project aims to provide a compiler to connect Open Neural Network Exchange Format (ONNX) to every…
Jun 22, 2018