Commit Graph

189 Commits

Author SHA1 Message Date
George Hotz 0335cb86b9 refactor comparison. there's a bug in the method cache 2023-03-02 10:10:16 -08:00
George Hotz 8902764167 fix nits in compare 2023-03-02 08:15:26 -08:00
Diogo 52204a7b88
adding comparison operators (#616)
* Less, LessOrEqual, Greater, GreaterOrEqual, Equal

* lint fix

* using built in functions

* overriding __eq__ breaks things

* backwards pass for less - forward only tests

* one other spot

* removing backwards for comparison ops to match pytorch

* raise runtime error

* more tests for comparison ops

* fixed the lineup

* added number upcast tests
2023-03-02 08:10:44 -08:00
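
The #616 entry above adds element-wise comparison operators and drops their backward pass to match PyTorch. A minimal sketch of that behavior in plain numpy (standing in for tensor data; this is not the tinygrad API of that commit):

```python
# Conceptual sketch (plain numpy, not tinygrad): comparison ops return
# element-wise boolean/0-1 results and, like PyTorch, define no gradient.
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 2.0, 2.0])

print(a < b)                        # [ True False False]
print(np.less(a, b))                # same comparison via the functional form
print((a < b).astype(np.float32))   # upcast the mask to float for further math
# A backward pass through a comparison is not meaningful (the op is piecewise
# constant), which is why the PR raises a runtime error instead of returning
# a gradient.
```
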
George Hotz 7ff92550bb slice -> pad, shrink 2023-02-28 19:58:12 -08:00
George Hotz a8de233e12
only div, no reciprocal (#601)
* only div, no reciprocal

* remove reciprocal

* fix pad shuffling
2023-02-25 09:35:03 -08:00
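
The #601 change above folds the old reciprocal primitive into plain division. A conceptual numpy sketch of the equivalence (the op names are the commit's; the code below is illustrative, not tinygrad's):

```python
# Conceptual sketch in numpy: a reciprocal primitive is redundant once
# division exists, since 1/x is div(1, x) and x/y replaces x * (1/y).
import numpy as np

x = np.array([2.0, 4.0, 8.0])
y = np.array([4.0, 4.0, 4.0])

via_reciprocal = x * (1.0 / y)   # old style: RECIPROCAL then MUL
via_div        = x / y           # new style: a single DIV
assert np.allclose(via_reciprocal, via_div)
```
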
George Hotz 2c5e13a513
Reluless (#600)
* replace relu with maximum

* fix for other backend

* clean up RELU and GT0

* tests for maximum

* had to clean that up

* why reverse a maximum?
2023-02-25 01:21:16 -08:00
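
The Reluless change (#600) above removes RELU as a primitive in favour of a binary maximum. A numpy sketch of why that is sufficient:

```python
# Conceptual sketch in numpy: once a binary maximum exists, relu is no
# longer needed as a primitive, since relu(x) == maximum(x, 0).
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5])
relu_old = np.where(x > 0, x, 0.0)   # what a dedicated RELU op computes
relu_new = np.maximum(x, 0.0)        # the same result via maximum
assert np.allclose(relu_old, relu_new)
```
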
George Hotz 2e56a4793e rename log_softmax, support dim, fix onnx Softmax 2023-02-24 10:11:24 -08:00
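
The rename above also gives log_softmax a dim argument. A numerically stable version along a chosen axis, as a numpy sketch (not the tinygrad implementation):

```python
# Numerically stable log_softmax along an arbitrary axis, as a numpy sketch.
import numpy as np

def log_softmax(x, axis=-1):
    shifted = x - x.max(axis=axis, keepdims=True)   # subtract the max for stability
    return shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))

x = np.array([[1.0, 2.0, 3.0], [1.0, 1.0, 1.0]])
print(log_softmax(x, axis=1))   # exp() of each row sums to 1
```
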
George Hotz 82caa2d5b7 MULACC -> FusedOp 2023-02-23 18:17:57 -08:00
George Hotz 758515dcc0
conv2d is an hlop (#589)
* conv2d is an hlop

* shorter conv

* KOPT=-1

* alt imp

* MULACC

* smarter mulacc

* pop conv

* 7x7 -> 5x5

* didn't fix, that's not going to work

* this is faster and matches old behavior

* oh, non lazy just won't work with mulacc

* mulacc in torch

* bool types were creeping in

* optimizer is actually better with hlop conv

* fix pushing permutes issue

* refactor einsum_mulacc

* fix up readme

* update readme

* _image_conv2d

* fix bias addition location

* pushing permutes gets back to 200 kernels

* conv cleanup

* disable hlop conv

* don't hide that in helpers
2023-02-23 17:52:31 -08:00
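
The conv2d-as-hlop change (#589) above re-expresses convolution with high-level movement ops plus a single multiply-accumulate (the MULACC / einsum_mulacc bullets). A rough numpy sketch of that decomposition; the exact ops tinygrad used at this commit are not shown here:

```python
# Rough numpy sketch of "conv2d as an hlop": extract input windows with a
# movement op, then reduce with one multiply-accumulate (einsum).
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d(x, w):
    # x: (N, Cin, H, W), w: (Cout, Cin, KH, KW); stride 1, no padding
    windows = sliding_window_view(x, w.shape[2:], axis=(2, 3))  # (N, Cin, OH, OW, KH, KW)
    return np.einsum("nchwkl,ockl->nohw", windows, w)           # the MULACC / fused reduce

x = np.random.randn(1, 3, 8, 8)
w = np.random.randn(4, 3, 3, 3)
print(conv2d(x, w).shape)  # (1, 4, 6, 6)
```
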
Liam 09315ef34f
Add tinygrad.org reference in Readme. (#556) 2023-02-14 09:39:00 -08:00
timmermansjoy d56c57b112
adding more robust install method (#532) 2023-02-06 13:12:05 -06:00
Jacky Lee 54c68defc7
Replace SIGN with GT0 (#511)
* Replace sign with gt0

* Replace sign with gt0

* GT0 works on GPU

* Fix brackets

---------

Co-authored-by: Tom Finet <tom.codeninja@gmail.com>
2023-02-01 11:01:39 -08:00
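
#511 above swaps the SIGN primitive for GT0 (greater than zero). A conceptual numpy sketch of why GT0 covers the common uses, e.g. masking the relu backward pass:

```python
# Conceptual numpy sketch: a GT0 op (x > 0 as 0/1) covers the usual jobs of
# SIGN, such as gating the relu backward pass.
import numpy as np

x = np.array([-3.0, 0.0, 2.0])
gt0 = (x > 0).astype(np.float32)          # what a GT0 primitive returns: [0., 0., 1.]

grad_out = np.array([1.0, 1.0, 1.0])
relu_grad = grad_out * gt0                # relu backward: pass gradient only where x > 0

sign = gt0 - (x < 0).astype(np.float32)   # sign can still be rebuilt from two compares
print(sign)                               # [-1.  0.  1.]
```
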
George Hotz 259c48f235 discord image is invite link 2023-01-28 11:42:11 -08:00
George Hotz d748000ada tinygrad discord 2023-01-28 11:36:15 -08:00
George Hotz 6d7658db12 delete opencl <celebration> 2023-01-24 14:18:35 -08:00
Faisal Memon 538b1d7f5b
Print out the tensor using numpy(). (#454)
This commit resolves issue https://github.com/geohot/tinygrad/issues/453

When the example code in the README.md is run, tinygrad prints the tensors as:
<Tensor <LB (3, 3) op:MovementOps.RESHAPE> with grad None>
<Tensor <LB (1, 3) op:MovementOps.RESHAPE> with grad None>

But to be equivalent to the output of the Torch example, we need
to use numpy() to get it to show:
[[ 2.  2.  2.]
 [ 0.  0.  0.]
 [-2. -2. -2.]]
[[1. 1. 1.]]
2023-01-09 10:08:05 -08:00
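
For reference, the README pattern this commit introduces is calling .numpy() on the gradients before printing. A minimal sketch, assuming the README example of that era (Tensor.eye, matmul, sum, backward):

```python
# Minimal sketch of the README fix: print the realized values via .numpy()
# instead of the Tensor object's repr.
from tinygrad.tensor import Tensor

x = Tensor.eye(3, requires_grad=True)
y = Tensor([[2.0, 0, -2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.numpy())  # numeric dz/dx, e.g. [[ 2.  2.  2.] [ 0.  0.  0.] [-2. -2. -2.]]
print(y.grad.numpy())  # numeric dz/dy, instead of "<Tensor ... with grad None>"
```
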
Nicolai Stoianov 8dbf76268d
Add step for setting up Stable Diffusion (#452) 2023-01-07 08:40:12 -08:00
George Hotz 0994705166 contrib more 2022-11-08 19:14:37 -08:00
George Hotz c0bba9649a more that 2022-11-08 19:13:11 -08:00
George Hotz 5143da6a9f contributing 2022-11-08 19:12:12 -08:00
George Hotz 271446e3eb
set requires_grad to None (#387)
* set requires_grad to None

* some things need gradients

* hmm, why was get_parameters filtering
2022-09-21 11:16:02 -04:00
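
#387 above switches the default requires_grad to None. A hedged plain-Python sketch of the three-state idea this implies (the resolution rule below is illustrative only, not tinygrad's actual logic):

```python
# Conceptual sketch: requires_grad becomes three-valued, so "unspecified"
# can be told apart from an explicit False when deciding whether a tensor
# needs a backward graph.
def needs_grad(requires_grad, used_by_optimizer):
    if requires_grad is True:    # explicitly requested
        return True
    if requires_grad is False:   # explicitly refused
        return False
    return used_by_optimizer     # None: undecided, resolved by context

print(needs_grad(None, used_by_optimizer=True))   # True
print(needs_grad(False, used_by_optimizer=True))  # False
```
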
George Hotz 0516359af8 fix stupid OPENCL=1 OOM 2022-09-06 14:29:23 -07:00
George Hotz 682dc64430 works at work 2022-09-06 08:06:11 -07:00
George Hotz 0ba6179de7 stable diffusion in readme 2022-09-05 18:51:56 -07:00
George Hotz b132de677d
tinygrad.nn (#367)
* tinygrad.nn

* flake8

* working on pylint

* more pylint

* more pylint

* pylint passes

* networkx

* mypy can't infer that type

* junk
2022-08-18 07:41:00 -07:00
George Hotz bdfdbc8f8d broken amfi patch 2022-08-13 10:41:25 +02:00
George Hotz 01de17eeb8 amfi note 2022-08-08 13:17:36 +02:00
George Hotz 3c4565fa21 SLICE -> PAD,SHRINK 2022-07-17 11:33:59 -07:00
George Hotz f6ea7c022a Revert "EXPAND -> REPEAT"
This reverts commit 115d2eadf5.
2022-07-17 08:42:10 -07:00
George Hotz 115d2eadf5 EXPAND -> REPEAT 2022-07-17 08:38:54 -07:00
George Hotz df16b455a7
make lazy the default (#352)
* make lazy the default

* always float32

* while the lazy framework should be default, laziness itself shouldn't be (for now)

* bugfixes

* remove the need for the ops class

* fxn_for_op

* hmm, my contiguous asserts went away

* move small shape thing

* refactor reduce

* remove the weird unused new functions

* only that install works

* that's broken

* unused imports, should be good if it passes
2022-07-03 11:40:27 -07:00
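
#352 above makes the lazy framework the default: ops record work, and computation happens only when a result is realized. A toy plain-Python sketch of that idea (unrelated to tinygrad's real LazyBuffer):

```python
# Toy sketch of lazy evaluation (not tinygrad's LazyBuffer): ops build a
# graph of thunks; nothing is computed until .realize() is called.
class Lazy:
    def __init__(self, fn):
        self.fn = fn                      # thunk producing the value
    def __add__(self, other):
        return Lazy(lambda: self.realize() + other.realize())
    def __mul__(self, other):
        return Lazy(lambda: self.realize() * other.realize())
    def realize(self):
        return self.fn()                  # walk the recorded graph

a, b = Lazy(lambda: 2.0), Lazy(lambda: 3.0)
c = (a + b) * a      # no arithmetic has happened yet
print(c.realize())   # 10.0, computed on demand
```
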
George Hotz a11deb5150 shapetracker check for noop 2022-06-16 16:29:18 -07:00
George Hotz ff648e9510 remove convt and compute dx with conv 2022-06-15 19:54:15 -07:00
George Hotz 6d98366214 move CONVDW out of llops 2022-06-15 12:05:11 -07:00
George Hotz e057ca23bb add flip 2022-06-14 17:28:43 -07:00
George Hotz dcbca4fdf1
Expand Operator (#327)
* replace broadcasting with expand

* Tensor, not self

* remove broadcasting from mlops

* delete useless A operator

* expand, not repeat

* remove A op

* expand on gpu

* binary_op doesn't broadcast anymore

* expand is still total junk, but the tests should pass
2022-06-12 12:31:48 -07:00
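
#327 above replaces implicit broadcasting with an explicit expand, so binary ops always see matching shapes. A numpy sketch of the distinction, with np.broadcast_to standing in for the EXPAND movement op:

```python
# Numpy sketch: instead of letting the binary op broadcast, expand the
# size-1 axes explicitly so both operands have identical shapes.
import numpy as np

a = np.ones((3, 3))
b = np.array([[1.0, 2.0, 3.0]])            # shape (1, 3)

implicit = a + b                           # old style: binary_op broadcasts
explicit = a + np.broadcast_to(b, (3, 3))  # new style: EXPAND, then a same-shape add
assert np.allclose(implicit, explicit)
```
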
George Hotz fc7eabb86f processing op 2022-06-11 08:12:02 -07:00
George Hotz 72186ebd5a movement ops, reshape is a copy now 2022-06-10 20:01:47 -07:00
George Hotz c8bacd0d8e rename transpose to permute 2022-06-10 19:41:50 -07:00
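
The rename above reflects that the op takes an explicit axis order rather than swapping two axes. A one-line numpy sketch:

```python
# Numpy sketch: "permute" is a transpose with an explicit axis order,
# generalizing the 2-D swap to any rank.
import numpy as np

x = np.zeros((2, 3, 4))
print(np.transpose(x, (2, 0, 1)).shape)  # (4, 2, 3): axes reordered as 2, 0, 1
```
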
George Hotz 462f1ce0da
Remove Matmul (#323) 2022-06-10 19:26:23 -07:00
George Hotz 30ab2249eb match order 2022-06-08 11:46:51 -07:00
George Hotz 4a9882d495 hlops 2022-06-08 11:46:09 -07:00
George Hotz e046a2fd9f readme fix typos 2022-06-08 11:43:05 -07:00
George Hotz 4b09ca90a1 readme: still WIP 2022-06-08 11:41:19 -07:00
George Hotz f0fe37bd34 simpler graph demo 2022-06-05 12:40:12 -07:00
George Hotz 89acf6742d more graph docs 2022-06-05 12:16:50 -07:00
George Hotz 88de42fb6e document graph mode 2022-06-05 12:13:05 -07:00
George Hotz d8d19ed468 wikimedia wasn't returning 200 2022-01-15 19:09:29 -08:00
George Hotz a95ef16c8c sub 1000 lines 2021-10-30 19:48:24 -07:00
George Hotz 844540a5ed yolo in readme 2021-10-30 19:47:34 -07:00
George Hotz 121d5a17ee use tinynn for Conv2d 2021-10-30 19:40:44 -07:00
George Hotz 114f6ca3fd more readme cleanup 2021-10-30 16:51:25 -07:00
George Hotz effd0dc833 update readme 2021-10-30 16:34:00 -07:00
George Hotz 2e71ae33f6 max op works 2021-06-17 17:01:21 -07:00
George Hotz e8eb7d1b7e max op 2021-06-17 16:20:56 -07:00
George Hotz c1d469d440 sum op 2021-06-17 16:19:35 -07:00
George Hotz ff3fdc58e5 risk -> cherry 2021-06-16 09:59:48 -07:00
George Hotz 1e62e45d67 better todo 2021-06-15 10:30:16 -07:00
George Hotz 9ca4388695 debug 2021-06-15 10:24:21 -07:00
George Hotz 3d44aab52c more 2021-06-15 10:23:57 -07:00
George Hotz 4850d6eb43 update todo 2021-06-15 10:22:39 -07:00
George Hotz 508ced114c readme 2021-06-13 17:17:44 -07:00
George Hotz 77ba198b57
Revert "Update README.md (#259)" (#260)
This reverts commit 5a69c5db6d.
2021-06-04 14:41:41 -07:00
Gabriel Rojas 5a69c5db6d
Update README.md (#259) 2021-06-04 14:41:07 -07:00
George Hotz 0702e0c763 nah, no sign, it's not what you want. use relu 2021-01-03 09:30:33 -08:00
George Hotz c2eeb6950b add support for sign. technically relu can be second class now 2021-01-03 08:29:57 -08:00
George Hotz 92abe43683 reduce before binary because of unbroadcasting 2020-12-31 09:49:52 -05:00
George Hotz de7fe085de no read out of bounds 2020-12-31 09:41:36 -05:00
George Hotz 30f8132646 reorder ops in ops cpu 2020-12-30 11:00:01 -05:00
George Hotz e5b2803b5d ops in readme 2020-12-30 10:48:55 -05:00
George Hotz 2d44bf7f1a Dot -> Matmul 2020-12-30 10:41:51 -05:00
George Hotz fcfe3dae01 write slice for CPU 2020-12-30 10:32:53 -05:00
George Hotz 1f5c9618ef refactor in readme and issue #225 2020-12-29 17:30:04 -05:00
George Hotz 4bbad11afe link to papers 2020-12-29 14:15:46 -05:00
George Hotz 3f8e137b6f extra/transformer 2020-12-29 14:14:00 -05:00
George Hotz 8f9232d59b readme 2020-12-29 13:40:34 -05:00
George Hotz 837aaacfbf Unpad2D on GPU 2020-12-29 13:16:14 -05:00
George Hotz 02655c07d5 break maxpool2d on GPU 2020-12-29 13:05:57 -05:00
George Hotz 061e37de39 touchups 2020-12-29 12:41:21 -05:00
George Hotz a2e6562330 fix max op, less lines 2020-12-29 10:47:04 -05:00
George Hotz 628d21f899 doc touchup 2020-12-28 10:45:26 -05:00
George Hotz fafece9db7 avgpool2d is a second class op 2020-12-28 10:41:59 -05:00
George Hotz 593233b668 log and exp are first class ops 2020-12-28 10:00:30 -05:00
Liam bcf1518309
All devices are equal! (#196)
* Update all devices to be tested

ANE, CPU and OCL all now support all tests.

However tests are not currently passing on GPU and I cannot test on CPU.

Failing GPU tests are not an issue caused by this update. Tests have not
been passing due to a missing "six" required installation.

OpenCL Tests have not been run since commit: 1a1c63a08b

Devices have 3 types and are handled by a new DeviceTypes enum. (The goal
is to revert to Tensor.<type>, but this current setup allows for keyword
argument defaults: `device=DeviceType.CPU`)

All references to Tensor.GPU/CPU/ANE have been converted to the
corresponding `DeviceTypes` enum.

Refactor of the conversion code to allow for any device to any device
conversion.

* Add six dependency in requirements.txt

* Resolve failure to run tests

Move six into gpu required installs. Remove six from standard
installation.

* Remove repeated data conversion

* Refactor method names

Also reduce code with .to and .to_

* Dynamic device handlers

* Refactor DeviceTypes -> Device

* Add mem copy profiling back

* test_backward_pass_diamond_model passing

* Resolve Sum issue on GPU

* Revert batchnorm2d tests

* Update README with updated API

* ANE testing with

* Last minute line gains
2020-12-15 23:44:08 -08:00
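
#196 above unifies device handling behind a single Device enum with .to/.to_ conversion. A hedged sketch of the usage those bullets describe (the enum, keyword, and method names are taken from the commit message; the import path is an assumption and is not verified against that revision):

```python
# Hedged sketch of the unified device API described in the commit bullets
# above; import path and constructor details are assumptions.
import numpy as np
from tinygrad.tensor import Tensor, Device

x = Tensor(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32), device=Device.CPU)
y = x.to(Device.GPU)    # copy to another device, per the ".to and .to_" bullet
x.to_(Device.GPU)       # in-place variant from the same refactor
```
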
George Hotz b86bbd2e72 readmes 2020-12-13 21:32:20 -08:00
George Hotz 4d8235d5f7 readme update 2020-12-13 20:24:33 -08:00
NeuralLink 1a1c63a08b
Gan is real...Look what tiny just generated! (#192)
* mode collapse solved

* info add

* delete unnecessary imports

* readme
2020-12-13 20:23:12 -08:00
George Hotz f95e79dab7 update readme 2020-12-12 17:14:10 -08:00
George Hotz a5aced8d47 30 MEGAReLUs. we need to lose 12 lines 2020-12-12 17:07:34 -08:00
WillemKauf 49da969d25
Fixed a typo. (#189) 2020-12-12 16:25:33 -08:00
George Hotz bc5df477de readme and .ane() 2020-12-12 16:15:38 -08:00
George Hotz c63f950348 need zero grad now 2020-12-07 23:10:43 -08:00
George Hotz 102e6356e9 replace layer_init_uniform with .uniform 2020-12-06 13:44:31 -08:00
George Hotz 888689b57b protip 2020-12-04 09:24:46 -08:00
George Hotz 2862b42bac install from github 2020-12-04 09:06:25 -08:00
George Hotz 1290e01e2c all ops supported on GPU now 2020-12-03 10:43:11 -08:00
George Hotz 621a93b777 ane in readme 2020-12-03 10:40:31 -08:00
baplou c83cebccda
Made the readme more consistent (#136) 2020-11-28 08:20:02 -06:00
Marcel Bischoff 541330c42a
Update README.md (#133)
should we put `ipython3`? Otherwise the path doesn't work, or we have to add the env; not sure which is nicer
2020-11-25 07:53:54 -08:00
George Hotz 2d4a5d5950 readme 2020-11-10 01:27:04 -08:00