Binarized Neural Networks for efficient deep learning

Larq is an ecosystem of open-source Python packages for building, training and deploying Binarized Neural Networks to enable efficient inference on mobile and edge devices.

Get started with Larq

Deep learning with 1-bit weights and activations

Most neural networks use 32, 16, or 8 bits to encode each weight and activation, making them slow and power-hungry. Binarized Neural Networks (BNNs) restrict weights and activations to just +1 or -1, drastically reducing the model's memory footprint and computational complexity.
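To see where the savings come from, consider the dot product of two ±1 vectors. If each value is stored as a single bit (1 for +1, 0 for -1), the multiply-accumulate collapses into an XNOR and a popcount. A minimal pure-Python sketch of this idea (illustrative only, not Larq or Larq Compute Engine code):

```python
def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed as bit masks.

    Bit i set means element i is +1; bit i clear means -1.
    Matching bits contribute +1 and mismatched bits -1, so:
        dot = n - 2 * popcount(a XOR b)
    """
    return n - 2 * bin(a_bits ^ b_bits).count("1")

# Reference check against the plain floating-point dot product.
a = [+1, -1, +1, +1]
b = [+1, +1, -1, +1]
pack = lambda v: sum(1 << i for i, x in enumerate(v) if x == +1)
assert binary_dot(pack(a), pack(b), len(a)) == sum(x * y for x, y in zip(a, b))
```

Because each 32-bit float shrinks to a single bit, binarized weights take up to 32× less memory, and bitwise XNOR plus popcount replaces floating-point multiply-accumulate.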

End-to-end tools for developing BNNs

Larq lets engineers and researchers access state-of-the-art BNNs, train their own from scratch, and deploy them on mobile and edge devices.

Ready-to-use pretrained models

Larq Zoo provides implementations and pretrained weights for cutting-edge BNNs, so you can start using efficient deep learning in your projects with minimal effort.

Intuitive and flexible extension of TensorFlow Keras

Larq is a powerful yet easy-to-use library for building and training BNNs that is fully compatible with the larger tf.keras ecosystem.
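Larq exposes quantized Keras layers such as `larq.layers.QuantDense`, which accept quantizer arguments like `"ste_sign"`. The key trick that makes BNNs trainable is the straight-through estimator (STE): binarize with a sign function on the forward pass, but let gradients pass through unchanged wherever the input lies in [-1, 1]. A conceptual sketch in plain Python (an assumption-level illustration of the technique, not Larq's actual implementation):

```python
def ste_sign_forward(x: float) -> float:
    """Forward pass: binarize to +1 or -1 (with sign(0) defined as +1)."""
    return 1.0 if x >= 0 else -1.0

def ste_sign_grad(x: float, upstream_grad: float) -> float:
    """Backward pass: the true gradient of sign() is zero almost everywhere,
    so the straight-through estimator passes the upstream gradient through
    unchanged where |x| <= 1 and blocks it elsewhere."""
    return upstream_grad if abs(x) <= 1.0 else 0.0

# A latent float weight is updated with the STE gradient during training,
# while the binarized value is what the layer actually computes with.
w = 0.3
assert ste_sign_forward(w) == 1.0
assert ste_sign_grad(w, 0.5) == 0.5    # gradient flows: |0.3| <= 1
assert ste_sign_grad(2.4, 0.5) == 0.0  # gradient blocked: |2.4| > 1
```

Keeping a latent full-precision weight alongside its binarized value is what lets standard Keras optimizers train a BNN end to end.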

Simple deployment for the fastest inference

Larq Compute Engine is a highly optimized inference library for deploying BNNs on mobile and edge devices.

LEARN LARQ

Introduction to BNNs with Larq

Deploy your first BNN on Android

Read the docs