Commit Graph

117 Commits

Author SHA1 Message Date
chenyu 63eb0a68af
fix return dtype of gather (#4159) 2024-04-12 16:25:12 -04:00
chenyu d9c5a2b1bb
fix return dtype of getitem Tensor indexing (#4158)
the use of sum can auto-upcast the result. fixed by using the data dtype as the acc_dtype
2024-04-12 15:55:02 -04:00
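A minimal sketch of the fixed behavior (assuming the tinygrad Tensor/dtypes API; the values are illustrative, not taken from the PR):
```
from tinygrad import Tensor, dtypes

src = Tensor([1, 2, 3, 4], dtype=dtypes.int8)
idx = Tensor([0, 2])
# after this fix, indexing keeps the source dtype instead of letting the
# internal sum upcast it
assert src[idx].dtype == dtypes.int8
```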
chenyu 380f27d629
move sum acc_dtype into lazy so it applies to backward (#4149)
* move sum acc_dtype into lazy so it applies to backward

* unit test
2024-04-11 14:43:56 -04:00
chenyu 7bc560ec49
remove outdated bf16 comments in test_dtype (#3987) 2024-03-29 00:56:18 -04:00
uuuvn 8a40d7d423
Shape changing bitcast and assert bitcast in disk (#3973)
* Shape changing bitcast

* only support it on disk

* basic test

* more tests

* RuntimeError instead of assert

* create unique temp files

* move tests that use disk to test_disk_tensor

* linter

* remove assert on error messages

* that's RuntimeError now

---------

Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2024-03-28 21:49:10 -07:00
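What a shape-changing bitcast means, illustrated with numpy rather than the disk tensor code path (only meant to show the byte reinterpretation):
```
import numpy as np

a = np.zeros(4, dtype=np.float32)   # 16 bytes
b = a.view(np.int16)                # same 16 bytes reinterpreted as int16
assert b.shape == (8,)              # element count doubles
```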
chenyu 793ab0512e
use ctypes to truncate float64 and float32 in uops (#3986)
this fixed the softmax.argmax bug for ops_python as the float is truncated to float32
2024-03-28 23:56:50 -04:00
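A sketch of the ctypes trick the message refers to (standalone illustration, not the actual uops code):
```
import ctypes

def truncate_fp32(x: float) -> float:
  # round-trip through a C float so the Python float only carries float32 precision
  return ctypes.c_float(x).value

print(truncate_fp32(1.1))  # 1.100000023841858 rather than 1.1
```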
chenyu 4ecd5789ab
#include <tgmath.h> in ops_clang (#3927)
* different clang sqrt/log2/exp2/sin function based on dtype

fixed softmax_argmax issue in #3552 for clang.

* tgmath.h

* revert those
2024-03-25 17:48:57 -04:00
chenyu 83f39a8ceb
env var to change default float (#3902)
* env var to change default float to fp16 or bf16

looking for standard names for these. we have FLOAT16 that does something to IMAGE and HALF to convert weights.

working on default bf16 too.
```
RuntimeError: compile failed: <null>(6): error: identifier "__bf16" is undefined
    __bf16 cast0 = (nv_bfloat16)(val0);
```

remove that in cifar

* DEFAULT_FLOAT

* default of default

* unit test

* don't check default

* tests work on linux
2024-03-24 20:33:57 -04:00
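A hedged usage sketch, assuming the DEFAULT_FLOAT variable is read at import time and accepts dtype names like HALF:
```
import os
os.environ["DEFAULT_FLOAT"] = "HALF"   # must be set before tinygrad is imported
from tinygrad import Tensor, dtypes

assert dtypes.default_float == dtypes.half
assert Tensor([1.0, 2.0]).dtype == dtypes.half
```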
chenyu 2c69888654
include negative float in test_dtype (#3884)
* include negative float in test_dtype

* that is ub

* too annoying

* pack can overflow
2024-03-24 02:39:15 -04:00
chenyu 2d3ce53348
touchup test_dtype.test_gradient_dtype (#3887)
add back what was lost in the bad merge from #3613 and add float.double and float.bfloat16 to the test
2024-03-22 20:56:45 -04:00
David Hou fc11808a79
initialize Tensor grad same type as self (#3613)
* initialize Tensor grad same type as self

* also test different default float

* check dtype + try/finally

* don't test_gradient_dtype if f16 is not supported

* fix bad merge

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2024-03-22 20:33:18 -04:00
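A sketch of the invariant this change enforces, on a backend where float16 is supported (tinygrad-style API assumed):
```
from tinygrad import Tensor, dtypes

x = Tensor([1.0, 2.0], dtype=dtypes.half, requires_grad=True)
x.sum().backward()
assert x.grad.dtype == x.dtype   # grad stays half instead of the default float
```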
chenyu c5467e5bd6
diverse test value in test_dtype DATA based on dtype (#3864)
* diverse test value in test_dtype DATA based on dtype

* eh fix typo

* that too?

* PTX does not support i8 and s8

* skip that

* unused line

* put the hack back

* remove that
2024-03-22 14:22:06 -04:00
chenyu d17900bc45
use int32 instead of default_int in simplify_phi_loops (#3828)
* use int32 instead of default_int in simplify_phi_loops

indices are in int32 now and are separated from the buffer dtype. fixes #3823

* return early if not supported

* it's not that

* why is it failing for RHIP
2024-03-19 17:49:58 -04:00
chenyu 99cbc24390
use dtypes.int32 as return dtype for functions that return indices (#3827)
behavior matches jax. It's fine to have a tensor with more elements than the int8 max even if we set default int to int8
2024-03-19 17:06:57 -04:00
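A sketch of the behavior described above (assumes the tinygrad API; jax-matching semantics per the message):
```
from tinygrad import Tensor, dtypes

t = Tensor([3.0, 1.0, 2.0])
assert t.argmax().dtype == dtypes.int32   # index dtype is int32 regardless of default_int
```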
chenyu fa1921ec7d
move test_dtype tests to test dtype and output value (#3826) 2024-03-19 16:31:27 -04:00
chenyu 639bd5dbfc
move bf16 cast hack to Tensor.llvm_bf16_cast (#3788) 2024-03-17 18:51:22 -04:00
chenyu a2d3cf64a5
move is_dtype_supported to test.helpers (#3762)
* move is_dtype_supported to test.helpers

updated all places that check if float16 is supported

* fix tests
2024-03-15 14:33:26 -04:00
chenyu d3a6319630
bf16 tests in test_dtype.py (#3749)
With bf16 creation and bf16 to numpy, we can test bf16 in test_dtype.
Only HIP is supported for now as it needs bf16 buffer support. Also the rtol is slightly larger
2024-03-15 00:17:11 -04:00
chenyu 75d4344cda
UOps.BITCAST (#3747)
* UOps.BITCAST

implicitly fixed no const folding for bitcast

* python backend

* ptx

* consistent llvm
2024-03-14 21:00:35 -04:00
chenyu 11c61ae044
Revert "fix const bitcast should not be constant folded (#3743)" (#3744)
This reverts commit 38ba277ac8.
2024-03-14 19:24:05 -04:00
chenyu 38ba277ac8
fix const bitcast should not be constant folded (#3743)
* fix const bitcast should not be constant folded

* fixed const bf16 creation

* LLVM still broken
2024-03-14 19:13:52 -04:00
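Why a bitcast must not be folded like a normal cast, shown with plain Python (illustration only):
```
import struct

x = 1.0
as_cast = int(x)                                           # value-preserving cast -> 1
as_bitcast = struct.unpack("<i", struct.pack("<f", x))[0]  # reinterpret the 4 bytes -> 1065353216
assert as_cast != as_bitcast
```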
chenyu 4d6ec41adb
failed test cases for bf16 Tensor.full (#3729)
fixable with float const then cast to bf16. cast folding with bitcast is incorrectly skipped
2024-03-13 20:46:45 -04:00
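A sketch of the workaround the message mentions (assumes the tinygrad API; realizing the result still needs a backend with bf16 buffer support):
```
from tinygrad import Tensor, dtypes

# instead of Tensor.full(..., dtype=dtypes.bfloat16), which hit the cast-folding bug:
t = Tensor.full((4,), 3.0).cast(dtypes.bfloat16)
```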
chenyu 6793db169b
bfloat16 tensor creation from list and numpy (#3724) 2024-03-13 18:44:05 -04:00
George Hotz 69ca7f7bf9
changes for teenygrad (#3665)
* changes for teenygrad

* upd

* simpler test
2024-03-09 15:30:34 -08:00
Zaffer 1853ec9a02
add tests for bfloat16 on HIP (#3638)
* Fix bug in login functionality

* Remove HSA backend test and add bfloat16 dtype tests that run in CI

* Skip tests on HIPCPU

* skip tests causing segfault on LLVM backend

* Exclude bfloat16 tests causing segfaults in LLVM backend

* move bf16 cast tests to only test on HIP
2024-03-07 10:45:36 -08:00
qazal abc5f3a6a0
hip bf16 hotfix (#3630)
* hip bf16

* remu dev mac

* Revert "remu dev mac"

This reverts commit 465069a0dc3c7f2045f3348b312a1dcbf1587acd.

* skip disk tests in CI

* bring float8 back
2024-03-06 11:42:30 -08:00
chenyu bc2a13a5f7
test case to show clang and python doing math in double (#3628) 2024-03-06 13:49:03 -05:00
chenyu 3275260c98
Revert "test: add failing bfloat16 test case for metal backend (#3481)" (#3618)
This reverts commit 1e12a2ae80.
2024-03-05 09:08:42 -05:00
Skosh 1e12a2ae80
test: add failing bfloat16 test case for metal backend (#3481)
* test: add failing bfloat16 test case for metal backend

* test: move bfloat 16 test to dtypes test
2024-03-05 08:44:54 -05:00
qazal a29cd6d464
run f64 increased precision tests on remu (#3509)
* run the test in CI

* temp: use the pre-release

* Revert "temp: use the pre-release"

This reverts commit 28e8571421aa66e54594c3eb3efce43130557dc8.
2024-02-26 18:01:07 -05:00
chenyu b154089884
float64 function support for HIP (#3492)
* float64 function support for HIP

* not CI
2024-02-24 09:46:20 -05:00
chenyu 35aff8b0c2
properly exclude PYTHON backend and support of half (#3491)
should be able to run in CI with python 3.12
2024-02-24 09:22:06 -05:00
Patrick Tsai 9dd64b1f5f
Fix python cast uint/int overflow (#3448)
* Fix numpy uint/int overflow

* lol

* Works

* Update

* Move overflow test to float64/float32

* One line

* Update

* One more

---------

Co-authored-by: Patrick Tsai <patosai@users.noreply.github.com>
2024-02-20 09:20:43 +01:00
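The wraparound behavior such a cast has to emulate, since plain Python ints never overflow (illustration with ctypes):
```
import ctypes

assert ctypes.c_uint8(300).value == 44   # 300 wraps modulo 256
assert ctypes.c_int8(200).value == -56   # wraps into the signed range
```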
zku 2d702ca073
If feasible, do not truncate float64 down to float32 in cstyle renderer (#3420)
* do not truncate float64 precision

* use l suffix to try avoid overload confusion

* long line, ruff bloats the function otherwise

* fmt

* remove long double suffix (l), it's sufficient to have the float32 (f) suffix to avoid function overload ambiguity; add test showcasing rtol=1e-12 precision increase, the test fails without the renderer changes

* use more reasonable test values, same as test_int_to_float_unary_func

* disable test for CUDACPU, does not support half and segfaults on some operations per dtypes_alu test

* disable test for HIP, renderer does not support f64 precision

* do not use noqa E501, break up condition
2024-02-16 10:08:59 +01:00
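A rough sketch of the rendering difference (render_const here is a hypothetical helper, not the actual cstyle renderer code):
```
def render_const(x: float, dtype: str) -> str:
  # float32 constants keep the 'f' suffix; float64 constants are emitted bare
  # so the C compiler keeps full double precision
  return f"{x}f" if dtype == "float32" else f"{x}"

assert render_const(0.1, "float32") == "0.1f"
assert render_const(0.1, "float64") == "0.1"
```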
xarkes 28a8b72024
Remove Interpreted device & remaining CPU/TORCH ref (#3423)
* Remove Interpreted device & remaining CPU/TORCH ref

* Oops

* supports_device was useful

* Fix doc wording

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2024-02-16 00:30:21 -05:00
geohotstan 5eb4c902f6
correct division dtype casting (#3405)
* Happy New Year

* fix: exclude floordiv onnx tests

* fix: less weird if statements in div

* Good fortune in the Year of the Dragon

* fix: tempfix onnx div

* fix: use reference impl for div
2024-02-15 19:34:40 -05:00
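A sketch of the expected promotion (assumes the tinygrad API; floor division is expected to stay integer per the floordiv notes above):
```
from tinygrad import Tensor, dtypes

a, b = Tensor([1, 2, 3]), Tensor([2, 2, 2])
assert (a / b).dtype == dtypes.default_float   # true division always promotes to float
```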
qazal 27f4de2ce4
delete half_prekernel (#3388)
* generic rendering of half and bf16

hotfix

* fix uops + regression test

* fix the test for metal's half4

* uop.uop fixup

* mypy with --strict-equality, fix ops_gpu
2024-02-14 15:40:48 +01:00
chenyu 7c1c6efee5
exclude half with PYTHON in test_dtype.is_dtype_supported (#3351)
half memoryview is only available in Python 3.12+. the rest of the test_dtype (bounty) failures seem to be legit issues in ops_python.
2024-02-08 20:10:25 -05:00
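Why the exclusion: the half format code for memoryview casts only exists on newer Pythons (standalone illustration):
```
import struct, sys

buf = struct.pack("e", 1.5)                # two bytes of an IEEE half
if sys.version_info >= (3, 12):
  print(memoryview(buf).cast("e")[0])      # 1.5
else:
  print("half memoryview unsupported on this Python")
```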
chenyu 02636ff62d
re-enable test_reduce_0d_default int test case in test_dtype (#3336) 2024-02-07 05:30:14 -05:00
Hristo Georgiev 3ae811af21
tests for Tensor init data dtype and resulting dtype (#3247)
Co-authored-by: Hristo Georgiev <6043312+hristog@users.noreply.github.com>
2024-01-27 00:13:42 -08:00
geohotstan d0e116c6d6
fix maximum/where Scalar casting (#3194)
* init

* test: added dtype tests for maximum

* fix: separate maximum const and maximum tensors

* fix: del useless line

* fix: some dtypes

* CODE GOLF: we golfing at mar-a-lago golf club tonight boyyyys

* fix: add lil helper function

* fix: some test refactoring

* done

* sike: not done yet lol

* wtf I missed an assert, am I drunk

* yeah idk

* fix: line save from redundant check

* revert: line save

* fix: simplify test_broadcast cuz I'm stumped

* change some test name

* fix: bool max bool  works

* test: add a maximum bool test

* test: make sure minimum also works with bool

* fix: something like this? :s

* fix: maybe this?

* fix: how about this? tighter check

* fix: this.

* revert: nvm mul(0.5) and div(2) has the same kernel for backward

* fix: .is_floating_point() xD

* revert: maximum and minimum and add cast

* fix: cover negative const case in test

* fix: use eq because I don't understand clang :D

* WHOOOOPS
2024-01-25 12:26:04 -05:00
chenyu cb4cfc078a
parameterize multitensor tests for reduce (#3181)
reduce with uneven shards is currently incorrect
2024-01-19 14:03:01 -05:00
chenyu b2571d586c
hypothesis.st -> hypothesis.strat (#3179)
leave `st` for shapetracker
2024-01-19 11:55:26 -05:00
George Hotz ca0beeef38
Christopherm99 ptx (#3139)
* get basic ptx impl working

* test ops passing

* mypy

* dont hardcode target

* more walrus

* ptx in ci

* bool cast and f16 load/store

* weird numpy bug and f16 cast tolerance

* cast half to bool

* fix 1 byte load/store

* disable half for ptx

* fix args and enable xid

* fix non-ptr args

* allow bitcast

* mypy

* cleanups

* midcast use allclose

* add xor

* Revert "disable half for ptx"

This reverts commit 73391c05fde5f7811293f60d994417d97ab20613.

* enable float16

* mypy

* no more crashing in ci

* fix ci

* minor cleanups

* use new fn for ptx compiler

* no diskcache in ptx compile

* use rn instead of rz

* save some lines

* new DEFINE_GLOBAL syntax

* line length

* new llvm

* cmpeq

* minor fix

* cast in mulacc

* update test_recursive_add to check line count

* mypy

* remove llvmir.py

* fix bool const

* wip

* cleanups

* working

* llvm in separate pr

* cleanups

* more cleanups

* fix ci

* use in_features directly in nn.Linear.__init__ bound check (#3050)

* use in_features directly in nn.Linear.__init__ bound check

get rid of the unnecessary check of isinstance int

* that is always int

* long lines

* Device._buffers -> Device._devices (#3052)

backend devices used to be called buffers

* make Embedding device aware for multigpu (#3051)

* make Embedding device aware for multigpu

* split line instead of igore because that's cheating

* add test incomplete

* add test complete

* remove comment

* fix white space

* remove nn.Embedding

* remove unused reciprocal (#3053)

* remove unused reciprocal

* comment

* unit tests for Device.canonicalize (#3055)

* add multigpu test for RMSNorm (#3056)

* need all gather

* add two multigpu test scenarios for RMSNorm

* No extra vars call (#3054)

* remove unused reciprocal

* comment

* remove unneeded call to vars

* free speedup

* explicit lazybuffer caching (#3058)

* hotfix: remove useless slow assert from ShapeTracker

* Speed tweaks (#3059)

* base doesn't have to be a function

* no double fetch

* pop, don't check

* make the gc happy

* avoid hasattr

* cache canonicalize

* remove assert, faster base

* don't redefine that every time

* fix gpt2 attention with start_pos = 0 (#3061)

* fix gpt2 attention with start_pos size 1

test cases taken from ll_transformer branch

* fix interpreted

* Tensor.cat with 0 shape tensors (#3062)

* Tensor.cat with 0 shape tensors

supports both 0 in the cat axis (for a subset of inputs), and 0 in a non-cat axis (then all inputs need to be 0)

* no shp

* test scaled dot product attention (#3063)

* add test

* add initial test for scaled dot product attention

* test pass for scaled dot product attention

* cached size (#3060)

* cached size

* simplify simplify

* 0 doesn't have base

* fix test

* cleaner cache

* hmm, metal is flaky on this...might be real(ish) but useless as test

* short circuit reshape/expand properly

* better reshape bypass

* hotfix: use is for enum compare

* hotfix: use is for enum compare, a few more

* speedtweaks3: apply shouldn't use the tensor constructor (#3065)

* speedtweaks3: apply shouldn't use the tensor constructor

* replace 0 size with CONST, not 0 in shape

* update gh actions (#3033)

* update checkout actions

* update upload artifact

* update setup python

---------

Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>

* unbind view or shapetracker also returns var_val (#3067)

* unbind view or shapetracker also returns var_val

4% faster for llama compile time

* one line less

* unbound_views

* hotfix: examples/transformer.py

* jit autorealizes output (#3069)

* early gate the graph (#3070)

* simpler idxs_to_idx (#3071)

* filter_strides -> canonicalize_strides (#3072)

* fix onehot and jit in examples/transformer (#3073)

trained to 0.999 in < 6 seconds on M1 Max consistently

* better test demonstration (#3077)

* a better test demonstration

* fix white space

* Tensor.expand resolves the new_shape before shortcut return (#3078)

similar to how reshape is done. also updated shrink shortcut criteria to read similar to pad

* minor cleanups of lazy.py (#3080)

* wmma: clean up device specific tensor core code (#3081)

* mem_estimate is always int, not symbolic (#3083)

* mem_estimate is always int, not symbolic

op_estimate can be symbolic, but mem_estimate is always int, thus we don't need to sym_infer it.
fixed some long lines too. update_stats is a very big function

* operator does not need underscores

* cat works (#3086)

* hotfix disable flaky mac runner wino cifar (#3087)

* remove the third merging state in view._merge_dims (#3085)

no logic depends on state == 0 or state == 2

* minor cleanup of View.reshape (#3088)

* minor cleanup of View.reshape

removed some redundant logic

* new_strides

* revert that

* use BEAM=2 instead of BEAM=4 in cuda ci gpt2 (#3089)

BEAM=2 is faster with less search time. investigating why BEAM2+BEAM4 is slower than BEAM2 alone

* use device from LinearizerOptions in kernel search (#3090)

* use device from LinearizerOptions in kernel search

removed all Device.DEFAULT in search.py

* pass device string for parallel pickle

* device for interpreted backends in LinearizerOptions

* update jit type annotation post lazy rewrite (#3091)

* add mutigpu support for llama attention (#3064)

* add llama attention test for multigpu

* test fails

* kv cache trying to shrink on sharded axis

* mask None works for scale dot product

* kv cache seems to be working but scale dot product breaks

* scaled dot product works, but the last linear layer failed

* running into the reshape case where it could be wrong for multigpu

* making sure it was the reshape

* adding contiguous doesn't solve

* need to shard more properly

* remove reshape test

* minor adjustment to scale dot product attention test

* weights are sharded wrong

* continue fix new weight sharding

* clean up

* fix attention when start_pos is 0

* remove print

* add TODOs for the best mutigpu interface

* bugfix do not reset shapetracker of 0 size lazybuffer (#3096)

it might be coming from an expand, and resetting results in an incorrect stride. caught by interpreted backend

* One hot in tensor.py (#3093)

* onehot in Tensor.py

* one_hot tests

* works for all shapes, not just 1

* pylint

* not a static method

* moved around, num_classes mandatory

* pylint

* pylint

* space & moving

* formatting

* moved tests

* fix broadcasted logic if there's 0 in shapes (#3097)

* fix broadcasted logic if there's 0 in shapes

should always expand into 0, not the other way around. fixed matmul with 0 in input shapes.
forward only for now though; backward is more involved and would need to change the 0 size shortcuts

* fix tests

* replace with tensor op (#3099)

* fix gpt2 with empty prompt (#3100)

logits would be empty so need to replace that with ones before sampling, also cannot reshape with -1 when there's 0 in other axes

* Revert "fix gpt2 with empty prompt" (#3101)

* fix gpt2 with empty prompt take 2 (#3102)

logits would be empty so need to replace that with ones before sampling, also cannot reshape with -1 when there's 0 in other axes

* wmma: enable METAL half tensor cores and clean up cstyle (#3095)

* wmma: enable METAL half tensor cores and clean up cstyle

* revert simple_matmul rand changes and break line in tensor

* added metal fp16->fp32 tensor core

* add half @ half to mac benchmark (#3103)

* flag to profile mixtral - 1.7 tok/s now (#3104)

* update NumNode.__hash__ to be hash(self.b) (#3105)

with this, `a:=NumNode(x) == b` implies `hash(a) == hash(b)`

* catch runtime error in search._time_program (#3106)

return inf if search encountered runtime errors.

* no exceptions in __del__ when module creation is failed in hip/cuda (#3107)

* failed test case due to cast resets shapetracker (#3109)

cast implicitly resets shapetracker and makes it contiguous (for disk tensor), which fails for Interpreted backend if inputs contain non-contiguous st.

* cleanup ops_disk type annotation and redundant str cast (#3110)

* minor cleanup of test_disk_tensor (#3112)

* add Tensor.var (#3114)

also updated MeanVarianceNormalization and made test_ops test tensors of var and std smaller

* move sample inside jit for beautiful_mnist (#3115)

also removed .realize() for jit functions since jit does it automatically now. a little more beautiful

* minor cleanups of onnx_ops (#3116)

* fix conversation: llama generates token not prob now (#3120)

* add device options for tests in multigpu (#3121)

* make DType a dataclass (#3111)

* remove np from DType

* convert to dataclass

* remove dunder hash, eq, ne overrides from ImageDType

* is dataclass required for PtrDType?

* fix GPU tests

* reduce lines

* revert changes to np

* minor cleanup

* hotfix: ptrdtype compare was broken

* move fromcpu out of lazy.py (#3122)

* move fromcpu out of lazy.py

* fix abstractions2

* remove numpy from device (#3123)

* remove numpy from device

* fix tests

* np item

* cleanups

* simplify with as_buffer

* no toCPU

* tinygradic

* cast to scalar

* remove numpy from ops_torch (#3124)

updated mnist test to cast label to int8 and avoid hacking around the cast issue of torch uint8

* Fix backward fn for `<` and `==` (#3037)

* fix no grad fn for < and ==

* remove 2 line breaks

* Remove deprecated autograd variable

---------

Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>

* separate try except blocks in onnx2torch in model benchmark (#3126)

exceptions can be raised from either model conversion or an individual backend failure. openpilot on torch mps works, but does not work with torch cpu.
separate the exception blocks so that the benchmark can include torch mps for openpilot.

* update env_vars.md (#3127)

mostly removed deprecated ones. not clear how to maintain this especially for extra/examples

* update test_ptr_ne (#3130)

* remove np from metal graph (#3129)

* dtype fmt (#3132)

* dtype fmt

* three ways to access

* fix off-by-one error in st_equal (#3131)

* fix off by one error

* whitespace

* no numpy (#3134)

* fast resnet eval (#3135)

* fast resnet eval

* fix HIP multidevice graph

* neater expression for devices

* lines

* add decorator test

* remove LLVMOPT

* move ptx

* Update ops_cuda.py

---------

Co-authored-by: Christopher Milan <chrismilan@ucla.edu>
Co-authored-by: chenyu <chenyu@fastmail.com>
Co-authored-by: Yixiang Gao <yixiangg310573@gmail.com>
Co-authored-by: jxdv <virgoj@protonmail.com>
Co-authored-by: Francis Lam <flam@alum.mit.edu>
Co-authored-by: SnakeOnex <sheeproman@gmail.com>
Co-authored-by: nimlgen <138685161+nimlgen@users.noreply.github.com>
Co-authored-by: Jyotirmaya Mahanta <jyotirmaya.mahanta@gmail.com>
Co-authored-by: Guy Leroy <g.m.leroy@outlook.com>
Co-authored-by: Paul Gustafson <paul.gustafson@theambrusgroup.com>
2024-01-15 16:44:20 -08:00
Jyotirmaya Mahanta 2ef09ca641
update test_ptr_ne (#3130) 2024-01-15 11:36:29 -05:00
George Hotz c5a941d466
webgl backend in extra (#3041)
* WebGL WIP

* 84% of ops passing test

* tests passing 100%

* Cleanup, refactor

* Shave off some lines

* Work on dtypes

* TestOps at 100% again

* Efficient net shaders compile in browser webgl2

* Compile all efficientnet shaders in browser

* Create empty textures for tensor buffers

* Run program. Up next weight loading

* Exported WebGL model working

* Add tests, refactor

* Explicit cast alu for GLSL

* Fix CI tests

* WebGL efficientnet demo

* Compile and run yolov8 in browser

* Fix imports

* Simplify yolo compile

* Fix bool*bool and cast cmplt to float

* More tests

* Do std tests pass on CI?

* Skip std tests on CI

* Remove explicit_cast_alu hack, and solve it in code_for_op

* Move to new dtype-less alloc api

* Remove local size hack: optimize local_size only if device has local

* Remove glsl.py, and move content to cstyle

* dont_use_locals in opts

* Fix dtype tests

* type_map in CStyleLanguage

* Make core changes smaller, cleaner, refactor export_model and demo

* Skip pad_slice

* Simplify: render_const, render_conditional

* solve bool alu for other binops, cleaner ops_webgl

* Fix noopt hack

* Remove some skipIfs

* WebGL image hack

* type_names is a better name

* global_max

* Fix dtype import

* Fix type_names -> type_map

* Fix lint

* Remove webgpu, back to 5k lines (#3040)

* remove webgpu

* max 5000 lines

* revert those to master

* retain that cstyle

---------

Co-authored-by: Ahmed Harmouche <ahmedharmouche92@gmail.com>
2024-01-08 09:29:13 -08:00
George Hotz f432ec9c33
Bitcast hip fix + fix mixtral (#3022)
* fix bitcast in hip

* wrong dtype for precast, double COPY
2024-01-05 14:51:25 -08:00
chenyu 9f39165188
correct (dtype, device) in test_dtype.is_dtype_supported (#3007)
corrected dtypes for TORCH and float64 support
2024-01-04 00:25:37 -05:00
chenyu ff5399f053
move one last dtype test from test_helpers to test_dtype (#2975) 2024-01-02 12:37:56 -05:00
George Hotz a280cfe169
move dtypes to dtype.py (#2964)
* move dtypes to dtype.py

* fix urllib
2024-01-01 14:58:48 -08:00