tinygrad/extra
Latest commit: more test cleanups (#2631), George Hotz, 232ed2af3f, 2023-12-05 16:17:57 -08:00
Name                Last commit                                                Date
accel/              move things, clean up extra (#2292)                        2023-11-13 20:18:40 -08:00
assembly/           dtypes.float.vec(sz) (#2386)                               2023-11-22 17:43:14 -08:00
datasets/           bump version to 0.8.0, clean CI, remove requests (#2545)  2023-12-01 10:42:50 -08:00
dist/               hip & cuda to gpuctypes (#2539)                            2023-12-01 09:25:27 -08:00
gemm/               only 62 gflops (#2629)                                     2023-12-05 13:28:24 -08:00
junk/               coder.py can write and run code (#2439)                    2023-11-25 12:27:54 -08:00
models/             JIT=0 llama.py should not jit (#2609)                      2023-12-04 20:21:07 -05:00
optimization/       bye bye NOOP (#2534)                                       2023-11-30 23:10:35 -08:00
triton/             Non fp32 math (#2264)                                      2023-12-03 13:45:49 -08:00
archprobe.py
augment.py
autopad.py          autopad shapetracker for BEAM (#2375)                      2023-11-22 21:05:25 -05:00
dump_cache.py       wow how did i think that was okay (#2339)                  2023-11-16 21:21:11 -08:00
export_model.py     new style device (#2530)                                   2023-11-30 17:07:16 -08:00
gradcheck.py
introspection.py    move optimize_local_size (#2221)                           2023-11-05 21:00:52 -08:00
lr_scheduler.py     ResNet training changes (update benchmark) (#2390)        2023-11-22 17:41:12 -08:00
onnx.py             onnx ops cleanup (#2413)                                   2023-11-23 18:39:49 -08:00
onnx_ops.py         Fix: Get item from ndarray before casting to int (#2525)  2023-11-30 18:34:31 -08:00
thneed.py           new style device (#2530)                                   2023-11-30 17:07:16 -08:00
to_movement_ops.py  Node.vars() returns a set and properly dedup (#2356)      2023-11-18 17:44:52 -05:00
training.py         cleanups before interpreted jit (#2306)                    2023-11-14 21:44:25 -08:00