It is a very powerful computer for deep learning, and likely the best performance/$. It was benchmarked in MLPerf Training 4.0 vs computers that cost 10x as much. And of course, anything that can train can do inference.
How do I get a tinybox?
Place an order through the links above. The factory is up and running, and your tinybox will ship within one week of us receiving payment. We currently offer pickup in San Diego and worldwide shipping.
Where can I learn more about the tinybox?
We have a lot of content on our Twitter; we also have a tinybox docs page and a #tinybox Discord channel.
Can I customize my tinybox?
In order to keep prices low and quality high, we don't offer any customization to the box or ordering process. Of course, after you buy the tinybox, it's yours and you are welcome to do whatever you want with it!
Can you fill out this supplier onboarding form?
In order to keep prices low and quality high, we don't offer any customization to the box or ordering process. If you can't order through the website, we're sorry, but we won't be able to help.
Can I pay with something besides wire transfer?
In order to keep prices low and quality high, we don't offer any customization to the box or ordering process. Wire transfer is the only accepted form of payment.
tinygrad is used in openpilot to run the driving model on the Snapdragon 845 GPU. It replaces SNPE, is faster, supports loading onnx files, supports training, and allows for attention (SNPE only allows fixed weights).
Is tinygrad inference only?
No! It supports full forward and backward passes with autodiff. This is implemented at a level of abstraction higher than the accelerator-specific code, so a tinygrad port gets you this for free.
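tinygrad itself may not be installed where you're reading this, but the core idea, reverse-mode autodiff over a recorded computation graph, can be sketched in a few lines of plain Python. The `Value` class and its method names below are illustrative only, not tinygrad's actual internals:

```python
class Value:
    """A scalar node in a computation graph with reverse-mode autodiff."""
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # chain rule: d(out)/d(self) = other.data, accumulated into .grad
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topologically sort the graph, then propagate gradients backward
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# y = x*x + x  ->  dy/dx = 2x + 1 = 7 at x = 3
x = Value(3.0)
y = x * x + x
y.backward()
print(y.data, x.grad)  # 12.0 7.0
```

The forward pass records parents and a local backward rule for each op; calling `backward()` on the output replays those rules in reverse topological order. Because this lives above any accelerator code, every backend gets training support for free.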
How can I use tinygrad for my next ML project?
Follow the installation instructions on the tinygrad repo. It has a similar API to PyTorch, yet simpler and more refined. Be warned that it is less stable while tinygrad is in alpha, though the API has been fairly stable for a while.
When will tinygrad leave alpha?
When we can reproduce a common set of papers on 1 NVIDIA GPU 2x faster than PyTorch. We also want the speed to be good on the M1. ETA: Q2 next year.
How is tinygrad faster than PyTorch?
For most use cases it isn't yet, but it will be. It has three advantages:
It compiles a custom kernel for every operation, allowing extreme shape specialization.
All tensors are lazy, so it can aggressively fuse operations.
The backend is 10x+ simpler, meaning optimizing one kernel makes everything fast.
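The laziness point can be illustrated with a minimal pure-Python sketch (all names here are hypothetical, not tinygrad's real classes): operations build up a pending chain instead of executing immediately, so a sequence of elementwise ops fuses into a single loop with no intermediate buffers when the result is finally needed:

```python
class LazyBuffer:
    """Records elementwise ops instead of executing them immediately."""
    def __init__(self, data, ops=()):
        self.data = data      # source list of floats
        self.ops = list(ops)  # pending elementwise functions

    def elementwise(self, fn):
        # no computation happens here: just extend the pending-op chain
        return LazyBuffer(self.data, self.ops + [fn])

    def realize(self):
        # one fused pass: every pending op is applied per element in a
        # single loop, so no intermediate buffer is ever materialized
        out = []
        for x in self.data:
            for fn in self.ops:
                x = fn(x)
            out.append(x)
        return out

t = LazyBuffer([1.0, 2.0, 3.0])
t = t.elementwise(lambda x: x * 2).elementwise(lambda x: x + 1)
print(t.realize())  # [3.0, 5.0, 7.0]
```

An eager framework would allocate a full temporary for `x * 2` before computing `+ 1`; deferring execution lets the scheduler emit one specialized kernel for the whole chain instead.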