* mockgpu nv
* works
* comment that out
* fix merge
* setup gpuocelot
* install packages
* don't run all of them
* passes
* fix ci
* almost
* should pass
* linter
* linter 2
* try this?
* ugh, not supported
* ci
* remove ticket from description
* better descs
* fix mean underflow for half tensor
divide only the reduce factor. added unit test and non-nan assertion in resnet training. also added a failing test case for symbolic shape var
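A minimal numpy sketch of the numeric hazard being fixed (not tinygrad's exact code path): folding the reduce factor into a half-precision reciprocal constant loses most of its bits once the constant falls into the subnormal range.
```python
import numpy as np

n = 1_000_000
# 1/n lands in half's subnormal range (spacing 2**-24), so the constant
# alone already carries ~1-2% relative error
bad = float(np.float16(1.0 / n))
print(bad)                  # ~1.013e-06 instead of 1e-06
s = 123456.0                # a sum accumulated in a wider dtype
print(np.float16(s * bad))  # ~0.1251: scaled by the degraded constant
print(np.float16(s / n))    # ~0.1235: dividing by the factor stays accurate
```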
* skip for python backend
* Shape changing bitcast
* only support it on disk
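For reference, numpy's `.view` is the same reinterpretation: the byte buffer is untouched, so changing the itemsize rescales the last dimension.
```python
import numpy as np

# reinterpreting 4 float32 values (16 bytes) as uint16 yields 8 elements
a = np.arange(4, dtype=np.float32)
b = a.view(np.uint16)
print(a.shape, a.dtype)   # (4,) float32
print(b.shape, b.dtype)   # (8,) uint16
```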
* basic test
* more tests
* RuntimeError instead of assert
* create unique temp files
* move tests that use disk to test_disk_tensor
* linter
* remove assert on error messages
* that's RuntimeError now
---------
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
* env var to change default float to fp16 or bf16
looking for standard names for these. we have FLOAT16 that does something to IMAGE and HALF to convert weights.
working on default bf16 too.
```
RuntimeError: compile failed: <null>(6): error: identifier "__bf16" is undefined
__bf16 cast0 = (nv_bfloat16)(val0);
```
remove that in cifar
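Sketch of the intended usage, assuming the names from this PR (the DEFAULT_FLOAT env var and dtypes.default_float):
```python
# run as: DEFAULT_FLOAT=HALF python script.py
from tinygrad import Tensor, dtypes

print(dtypes.default_float)   # dtypes.half with the env var set
t = Tensor([1.0, 2.0])
print(t.dtype)                # new float tensors follow the default
```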
* DEFAULT_FLOAT
* default of default
* unit test
* don't check default
* tests work on linux
* initialize Tensor grad same type as self
* also test different default float
* check dtype + try/finally
* don't test_gradient_dtype if f16 is not supported
* fix bad merge
---------
Co-authored-by: chenyu <chenyu@fastmail.com>
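The gradient-dtype commits above boil down to one invariant; a minimal sketch, assuming half is supported on the backend:
```python
from tinygrad import Tensor, dtypes

x = Tensor([1.0, 2.0, 3.0], dtype=dtypes.half, requires_grad=True)
x.sum().backward()
# grad is created with the same dtype as the tensor, not the default float
assert x.grad.dtype == x.dtype == dtypes.half
```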
* diverse test values in test_dtype DATA based on dtype
* eh fix typo
* that too?
* PTX does not support i8 and s8
* skip that
* unused line
* put the hack back
* remove that
* use int32 instead of default_int in simplify_phi_loops
indices are in int32 now and are separated from buffer dtype. fixes #3823
* return early if not supported
* it's not that
* why is it failing for RHIP
With bf16 creation and bf16 to numpy, we can test bf16 in test_dtype.
Only HIP is supported for now as it needs bf16 buffer support. Also the rtol is slightly larger
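numpy has no native bfloat16, so creation and readback go through the uint16 bit pattern; a sketch of the round trip (truncation shown, the real cast may round), which is also why the rtol is looser:
```python
import numpy as np

def f32_to_bf16_bits(x):
    # bfloat16 keeps the top 16 bits of float32 (sign, exponent, 7 mantissa bits)
    return (np.asarray(x, np.float32).view(np.uint32) >> 16).astype(np.uint16)

def bf16_bits_to_f32(b):
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([1.0, 3.14159, -2.5e8], dtype=np.float32)
print(bf16_bits_to_f32(f32_to_bf16_bits(x)))  # ~3 significant digits survive
```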
* Remove HSA backend test and add bfloat16 dtype tests that run in CI
* Skip tests on HIPCPU
* skip tests causing segfault on LLVM backend
* Exclude bfloat16 tests causing segfaults in LLVM backend
* move bf16 cast tests to only test on HIP
* hip bf16
* remu dev mac
* Revert "remu dev mac"
This reverts commit 465069a0dc3c7f2045f3348b312a1dcbf1587acd.
* skip disk tests in CI
* bring float8 back
* Fix numpy uint/int overflow
* lol
* Works
* Update
* Move overflow test to float64/float32
* One line
* Update
* One more
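A minimal sketch of the wraparound behind these commits: numpy's fixed-width integers overflow where Python ints do not, so per-dtype test data has to stay in range.
```python
import numpy as np

print(np.uint8(255) + np.uint8(1))   # 0: wraps modulo 2**8 (with a warning)
print(np.int8(127) + np.int8(1))     # -128: signed wraparound
# float64 represents integers exactly only up to 2**53
print(np.float64(2**53) + 1 == np.float64(2**53))   # True: 1 ulp too small
```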
---------
Co-authored-by: Patrick Tsai <patosai@users.noreply.github.com>
* do not truncate float64 precision
* use l suffix to try avoid overload confusion
* long line, ruff bloats the function otherwise
* fmt
* remove long double suffix (l), it's sufficient to have the float32 (f) suffix to avoid function overload ambiguity; add test showcasing rtol=1e-12 precision increase, the test fails without the renderer changes
* use more reasonable test values, same as test_int_to_float_unary_func
* disable test for CUDACPU, does not support half and segfaults on some operations per dtypes_alu test
* disable test for HIP, renderer does not support f64 precision
* do not use noqa E501, break up condition
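A hypothetical sketch of the renderer rule from these commits, plus the numeric reason: only float32 constants get the 'f' suffix, since an unsuffixed C literal is already double and the 'l' suffix adds nothing for overload resolution.
```python
import numpy as np

def render_const(x, dtype):
    # hypothetical rule: float32 constants get 'f'; float64 stays bare, so
    # the C compiler keeps full double precision and overloads still resolve
    return f"{x}f" if dtype == np.float32 else f"{x}"

print(render_const(1/3, np.float32))   # 0.3333333333333333f
print(render_const(1/3, np.float64))   # 0.3333333333333333
# an 'f' literal rounds to float32: ~1e-8 error, far above rtol=1e-12
print(abs(float(np.float32(1/3)) - 1/3))
```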
* generic rendering of half and bf16
hotfix
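A hypothetical sketch of what "generic rendering" means here: one dtype-to-typename table per backend instead of scattered special cases (type names are illustrative; the CUDA bf16 name matches the error above).
```python
# hypothetical illustration, not tinygrad's actual renderer tables
TYPE_NAMES = {
    "cuda":  {"half": "half", "bfloat16": "nv_bfloat16"},
    "metal": {"half": "half"},   # no bf16 here
}

def render_dtype(backend, dtype_name):
    if dtype_name not in TYPE_NAMES.get(backend, {}):
        raise RuntimeError(f"{dtype_name} not supported on {backend}")
    return TYPE_NAMES[backend][dtype_name]
```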
* fix uops + regression test
* fix the test for metal's half4
* uop.uop fixup
* mypy with --strict-equality, fix ops_gpu