
tinygrad: For something between PyTorch and karpathy/micrograd. Maintained by tiny corp.

Homepage | Documentation | Examples | Showcase | Discord



This may not be the best deep learning framework, but it is a deep learning framework.

Due to its extreme simplicity, it aims to be the easiest framework to add new accelerators to, with support for both inference and training. If XLA is CISC, tinygrad is RISC.

tinygrad is still alpha software, but we raised some money to make it good. Someday, we will tape out chips.

Features

LLaMA and Stable Diffusion

tinygrad can run LLaMA and Stable Diffusion!
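
Both live in the examples/ directory. Assuming a source checkout and the model weights each script expects (see the scripts themselves for flags and download details), they can be run directly:

python3 examples/stable_diffusion.py
python3 examples/llama.py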

Laziness

Try a matmul. See how, despite being written as a broadcasted multiply followed by a sum, it is fused into a single kernel with the power of laziness.

DEBUG=3 python3 -c "from tinygrad import Tensor;
N = 1024; a, b = Tensor.rand(N, N), Tensor.rand(N, N);
c = (a.reshape(N, 1, N) * b.T.reshape(1, N, N)).sum(axis=2);
print((c.numpy() - (a.numpy() @ b.numpy())).mean())"

And we can change DEBUG to 4 to see the generated code.

Neural networks

As it turns out, 90% of what you need for neural networks is a decent autograd/tensor library. Throw in an optimizer, a data loader, and some compute, and you have all you need.

from tinygrad import Tensor, nn

class LinearNet:
  def __init__(self):
    self.l1 = Tensor.kaiming_uniform(784, 128)
    self.l2 = Tensor.kaiming_uniform(128, 10)
  def __call__(self, x:Tensor) -> Tensor:
    return x.flatten(1).dot(self.l1).relu().dot(self.l2)

model = LinearNet()
optim = nn.optim.Adam([model.l1, model.l2], lr=0.001)

x, y = Tensor.rand(4, 1, 28, 28), Tensor([2,4,3,7])  # replace with real mnist dataloader

for i in range(10):
  optim.zero_grad()
  loss = model(x).sparse_categorical_crossentropy(y).backward()
  optim.step()
  print(i, loss.item())

See examples/beautiful_mnist.py for the full version, which reaches 98% accuracy in ~5 seconds.

Accelerators

tinygrad already supports numerous accelerators, including:

  • GPU (OpenCL)
  • CLANG (C code)
  • LLVM
  • METAL
  • CUDA
  • HIP

And it is easy to add more! Your accelerator of choice only needs to support a total of 26 (optionally 27) low level ops. More information can be found in the documentation for adding new accelerators.
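
For a quick look at which backend you actually got, here is a minimal sketch, assuming the top-level Device export alongside Tensor and the device= Tensor argument; which backend names are available depends on your machine:

from tinygrad import Tensor, Device

# the backend tinygrad auto-selected for this machine, e.g. "METAL", "CUDA", or "CLANG"
print(Device.DEFAULT)

# run a small op on an explicitly chosen backend
# (assumes the CLANG backend, which compiles C code, is available on your system)
t = Tensor([1.0, 2.0, 3.0], device="CLANG")
print((t * 2).numpy())

Backends can also be forced with environment variables in the same style as the DEBUG flag above (e.g. CUDA=1).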

Installation

The current recommended way to install tinygrad is from source.

From source

git clone https://github.com/tinygrad/tinygrad.git
cd tinygrad
python3 -m pip install -e .

Don't forget the . at the end!
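
To sanity-check the install, a minimal smoke test; it assumes only the Tensor API used throughout this README:

python3 -c "from tinygrad import Tensor; print(Tensor.ones(2, 2).numpy())"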

Documentation

Documentation along with a quick start guide can be found in the docs/ directory.

Quick example comparing to PyTorch

from tinygrad import Tensor

x = Tensor.eye(3, requires_grad=True)
y = Tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.numpy())  # dz/dx
print(y.grad.numpy())  # dz/dy

The same thing but in PyTorch:

import torch

x = torch.eye(3, requires_grad=True)
y = torch.tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.numpy())  # dz/dx
print(y.grad.numpy())  # dz/dy

Contributing

There has been a lot of interest in tinygrad lately. Here are some basic guidelines for contributing:

  • Bug fixes are the best and always welcome! Like this one.
  • If you don't understand the code you are changing, don't change it!
  • All code golf PRs will be closed, but conceptual cleanups are great.
  • Features are welcome, though if you are adding one, you need to include tests.
  • Improving test coverage is great, with reliable non-brittle tests.

Additional guidelines can be found in CONTRIBUTING.md.

Running tests

For more examples of how to run the full test suite, please refer to the CI workflow.

Some examples:

python3 -m pip install -e '.[testing]'  # install the testing extras
python3 -m pytest                       # run the full test suite
python3 -m pytest -v -k TestTrain       # run only tests matching TestTrain
python3 ./test/models/test_train.py TestTrain.test_efficientnet  # run a single test