From effd0dc833adaf47b274947b458b43cd62bde100 Mon Sep 17 00:00:00 2001
From: George Hotz
Date: Sat, 30 Oct 2021 16:34:00 -0700
Subject: [PATCH] update readme

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 31ec64f2..2b0052c8 100644
--- a/README.md
+++ b/README.md
@@ -8,9 +8,9 @@
 
 For something in between a [pytorch](https://github.com/pytorch/pytorch) and a [karpathy/micrograd](https://github.com/karpathy/micrograd)
 
-This may not be the best deep learning framework, but it is a deep learning framework.
+This may not be the best deep learning framework, but it is a deep learning framework. The core of it is in `tinygrad/`, and it's under 1000 lines.
 
-Due to its extreme simplicity, it aims to be the easiest framework to add new accelerators to, with support for both inference and training. Support the simple basic ops, and you get SOTA [vision](https://arxiv.org/abs/1905.11946) `extra/efficientnet.py` and [language](https://arxiv.org/abs/1706.03762) `extra/transformer.py` models. We are working on support for the Apple Neural Engine.
+Due to its extreme simplicity, it aims to be the easiest framework to add new accelerators to, with support for both inference and training. Support the simple basic ops, and you get SOTA [vision](https://arxiv.org/abs/1905.11946) `models/efficientnet.py` and [language](https://arxiv.org/abs/1706.03762) `models/transformer.py` models. We are working on support for the Apple Neural Engine.
 
 Eventually, [we will build custom hardware](https://geohot.github.io/blog/jekyll/update/2021/06/13/a-breakdown-of-ai-chip-companies.html) for tinygrad, and it will be blindingly fast. Now, it is slow.
 
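
For reference, here is a minimal sketch of the kind of usage the patched README describes, assuming the `Tensor` class from `tinygrad.tensor` with its PyTorch-style autograd methods (`matmul`, `sum`, `backward`, `.grad`); treat the exact names as illustrative rather than guaranteed by this patch.

```python
# Minimal sketch (assumed API): a tiny autograd example in the style of the tinygrad README.
from tinygrad.tensor import Tensor

x = Tensor.eye(3, requires_grad=True)             # 3x3 identity matrix, tracked for gradients
y = Tensor([[2.0, 0, -2.0]], requires_grad=True)  # 1x3 row vector
z = y.matmul(x).sum()                             # scalar result of y @ x
z.backward()                                      # reverse-mode autodiff through the graph

print(x.grad)  # dz/dx
print(y.grad)  # dz/dy
```

The handful of basic ops underneath calls like these is what the README means by "support the simple basic ops": implement them for a new accelerator and the higher-level models come along for free.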