<div align="center">

<picture>
<source media="(prefers-color-scheme: light)" srcset="/docs/logo_tiny_light.svg">
<img alt="tiny corp logo" src="/docs/logo_tiny_dark.svg" width="50%" height="50%">
</picture>

tinygrad: For something between [PyTorch](https://github.com/pytorch/pytorch) and [karpathy/micrograd](https://github.com/karpathy/micrograd). Maintained by [tiny corp](https://tinygrad.org).

<h3>

[Homepage](https://github.com/tinygrad/tinygrad) | [Documentation](https://docs.tinygrad.org/) | [Discord](https://discord.gg/ZjZadyC7PK)

</h3>

[![GitHub Repo stars](https://img.shields.io/github/stars/tinygrad/tinygrad)](https://github.com/tinygrad/tinygrad/stargazers)
[![Unit Tests](https://github.com/tinygrad/tinygrad/actions/workflows/test.yml/badge.svg)](https://github.com/tinygrad/tinygrad/actions/workflows/test.yml)
[![Discord](https://img.shields.io/discord/1068976834382925865)](https://discord.gg/ZjZadyC7PK)

</div>

---
This may not be the best deep learning framework, but it is a deep learning framework.

Due to its extreme simplicity, it aims to be the easiest framework to add new accelerators to, with support for both inference and training. If XLA is CISC, tinygrad is RISC.

tinygrad is still alpha software, but we [raised some money](https://geohot.github.io/blog/jekyll/update/2023/05/24/the-tiny-corp-raised-5M.html) to make it good. Someday, we will tape out chips.

## Features
### LLaMA and Stable Diffusion
tinygrad can run [LLaMA](/docs/showcase.md#llama) and [Stable Diffusion](/docs/showcase.md#stable-diffusion)!
### Laziness
Try a matmul. See how, despite the style, it is fused into one kernel with the power of laziness.
```sh
DEBUG=3 python3 -c "from tinygrad import Tensor;
N = 1024; a, b = Tensor.rand(N, N), Tensor.rand(N, N);
c = (a.reshape(N, 1, N) * b.T.reshape(1, N, N)).sum(axis=2);
print((c.numpy() - (a.numpy() @ b.numpy())).mean())"
```
And we can change `DEBUG` to `4` to see the generated code.
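
For example, rerunning the same one-liner at the higher debug level prints the kernel source (the exact output depends on which backend you are running on):

```sh
DEBUG=4 python3 -c "from tinygrad import Tensor;
N = 1024; a, b = Tensor.rand(N, N), Tensor.rand(N, N);
c = (a.reshape(N, 1, N) * b.T.reshape(1, N, N)).sum(axis=2);
print((c.numpy() - (a.numpy() @ b.numpy())).mean())"
```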
### Neural networks
As it turns out, 90% of what you need for neural networks is a decent autograd/tensor library.
Throw in an optimizer, a data loader, and some compute, and you have all you need.
```python
from tinygrad import Tensor, nn

class LinearNet:
  def __init__(self):
    self.l1 = Tensor.kaiming_uniform(784, 128)
    self.l2 = Tensor.kaiming_uniform(128, 10)
  def __call__(self, x:Tensor) -> Tensor:
    return x.flatten(1).dot(self.l1).relu().dot(self.l2)

model = LinearNet()
optim = nn.optim.Adam([model.l1, model.l2], lr=0.001)

x, y = Tensor.rand(4, 1, 28, 28), Tensor([2,4,3,7]) # replace with real mnist dataloader

with Tensor.train():
  for i in range(10):
    optim.zero_grad()
    loss = model(x).sparse_categorical_crossentropy(y).backward()
    optim.step()
    print(i, loss.item())
```
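
Once the loop finishes, a minimal sanity check might look like the sketch below (the random `x` above is only a stand-in, so the predictions mean nothing until a real MNIST dataloader is plugged in):

```python
preds = model(x).argmax(axis=1)  # index of the highest logit per sample
print(preds.numpy(), y.numpy())  # compare predictions against the labels
```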
See [examples/beautiful_mnist.py](examples/beautiful_mnist.py) for the full version that gets 98% in ~5 seconds.
## Accelerators
tinygrad already supports numerous accelerators, including:
- [x] [GPU (OpenCL)](tinygrad/runtime/ops_gpu.py)
- [x] [CLANG (C Code)](tinygrad/runtime/ops_clang.py)
- [x] [LLVM](tinygrad/runtime/ops_llvm.py)
- [x] [METAL](tinygrad/runtime/ops_metal.py)
- [x] [CUDA](tinygrad/runtime/ops_cuda.py)
- [x] [AMD](tinygrad/runtime/ops_amd.py)
- [x] [NV](tinygrad/runtime/ops_nv.py)
- [x] [QCOM](tinygrad/runtime/ops_qcom.py)

And it is easy to add more! Your accelerator of choice only needs to support a total of ~25 low-level ops.

To check the default accelerator, run: `python3 -c "from tinygrad import Device; print(Device.DEFAULT)"`
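
To force a specific backend, you can typically set its environment variable when launching your script (a sketch; use one of the device names listed above and make sure that runtime is actually available on your machine):

```sh
CUDA=1 python3 -c "from tinygrad import Device; print(Device.DEFAULT)"  # should print CUDA
```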
## Installation
The current recommended way to install tinygrad is from source.
### From source
```sh
git clone https://github.com/tinygrad/tinygrad.git
cd tinygrad
python3 -m pip install -e .
```
### Direct (master)
```sh
python3 -m pip install git+https://github.com/tinygrad/tinygrad.git
```
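
Either way, a quick smoke test confirms the install works (a minimal check, not an official verification step):

```sh
python3 -c "from tinygrad import Tensor; print(Tensor([1, 2, 3]).sum().item())"  # should print 6
```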
## Documentation
Documentation along with a quick start guide can be found on the [docs website](https://docs.tinygrad.org/) built from the [docs/](/docs) directory.
### Quick example comparing to PyTorch
```python
from tinygrad import Tensor

x = Tensor.eye(3, requires_grad=True)
y = Tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.numpy()) # dz/dx
print(y.grad.numpy()) # dz/dy
```
The same thing but in PyTorch:
```python
import torch

x = torch.eye(3, requires_grad=True)
y = torch.tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.numpy()) # dz/dx
print(y.grad.numpy()) # dz/dy
```
## Contributing
There has been a lot of interest in tinygrad lately. Following these guidelines will help your PR get accepted.

We'll start with what will get your PR closed with a pointer to this section:

- No code golf! While low line count is a guiding light of this project, anything that remotely looks like code golf will be closed. The true goal is reducing complexity and increasing readability, and deleting `\n`s does nothing to help with that.
- All docs and whitespace changes will be closed unless you are a well-known contributor. The people writing the docs should be those who know the codebase the absolute best. People who have not demonstrated that shouldn't be messing with docs. Whitespace changes are both useless *and* carry a risk of introducing bugs.
- Anything you claim is a "speedup" must be benchmarked. In general, the goal is simplicity, so even if your PR makes things marginally faster, you have to consider the tradeoff with maintainability and readability.
- In general, the code outside the core `tinygrad/` folder is not well tested, so unless the current code there is broken, you shouldn't be changing it.
- If your PR looks "complex", is a big diff, or adds lots of lines, it won't be reviewed or merged. Consider breaking it up into smaller PRs that are individually clear wins. A common pattern I see is prerequisite refactors before adding new functionality. If you can (cleanly) refactor to the point that the feature is a 3 line change, this is great, and something easy for us to review.

Now, what we want:

- Bug fixes (with a regression test) are great! This library isn't 1.0 yet, so if you stumble upon a bug, fix it, write a test, and submit a PR, this is valuable work.
- Solving bounties! tinygrad [offers cash bounties](https://docs.google.com/spreadsheets/d/1WKHbT-7KOgjEawq5h5Ic1qUWzpfAzuD_J06N1JwOCGs/edit?usp=sharing) for certain improvements to the library. All new code should be high quality and well tested.
- Features. However, if you are adding a feature, consider the line tradeoff. If it's 3 lines, there's less of a bar of usefulness it has to meet over something that's 30 or 300 lines. All features must have regression tests. In general with no other constraints, your feature's API should match torch or numpy.
- Refactors that are clear wins. In general, if your refactor isn't a clear win it will be closed. But some refactors are amazing! Think about readability in a deep core sense. A whitespace change or moving a few functions around is useless, but if you realize that two 100 line functions can actually use the same 110 line function with arguments while also improving readability, this is a big win. Refactors should pass [process replay](#process-replay-tests).
- Tests/fuzzers. If you can add tests that are non brittle, they are welcome. We have some fuzzers in here too, and there's a plethora of bugs that can be found with them and by improving them. Finding bugs, even writing broken tests (that should pass) with `@unittest.expectedFailure` is great. This is how we make progress.
- Dead code removal from core `tinygrad/` folder. We don't care about the code in extra, but removing dead code from the core library is great. Less for new people to read and be confused by.
### Running tests
You should install the pre-commit hooks with `pre-commit install`. This will run the linter, mypy, and a subset of the tests on every commit.
For more examples of how to run the full test suite, please refer to the [CI workflow](.github/workflows/test.yml).

Some examples of running tests locally:
```sh
python3 -m pip install -e '.[testing]' # install extra deps for testing
python3 test/test_ops.py # just the ops tests
python3 -m pytest test/ # whole test suite
```
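
If you only want a subset of a file, standard pytest selection flags work here too (the keyword below is just an example; pick one that matches the tests you care about):

```sh
python3 -m pytest test/test_ops.py -k "relu" -v  # run only the ops tests whose names match "relu"
```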
#### Process replay tests
[Process replay](https://github.com/tinygrad/tinygrad/blob/master/test/external/process_replay/process_replay.py) compares your PR's generated kernels against master. If your PR is a refactor or speedup without any expected behavior change, it should include [run_process_replay] in the PR title ([example](https://github.com/tinygrad/tinygrad/pull/4995)). Note that you should keep your branch up-to-date with master.