* initial commit
* 81 passing tests
* 105 passing tests
* 148 passing tests
* CI tests
* install deps on CI
* try opencl pkgs
* try using vulkan
* down to only 6 failing
* refactor
* cleaning up
* another test skipped due to buffer limit
* linter
* segfault
* indent fix
* another segfault found
* small touchups
* Fix max and maxpool tests
* Add constant folding
* Add javascript export script
* better asserts in codegen
* manual upcasting
* reverted token type change
* skip safetensor test due to unsupported type
* Fix efficientnet and all other model tests
* Remove np copy
* fixed indent and missing import
* manually destroy the buffer
* revert back to length
* linter errors
* removed extra val
* skip broken tests
* skipping more tests
* Make the page pretty
* Save model weights as safetensor
* Fix imagenet to c test
* Fix second imagenet to c bug
* Async and parallel kernel compilation
* workgroup support
* reversed local size
* fixed non-local bug
* correct local groups
* ci experiment
* removed typo
* Fix define local by using shared memory
* Refactor
* try running on mac
* match metal tests
* add more workers
* scope down tests
* trying windows runner
* fixed windows env
* see how many it can do
* merged master
* refactor
* missed refactor
* increase test suite coverage
* missing import
* whitespace in test_efficientnet.py
* getting there
* fixed reset
* fixed bufs
* switched to cstyle
* cleanup
* min/max rename
* one more linter issue
* fixed demo
* linter
* testing ci chrome
* add unsafe webgpu arg
* add build step
* remove WEBGPU from cmd line
* use module
* try forcing directx
* trying forced metal backend
* temp disable conv2d for CI
* disable conv_transpose2d
---------
Co-authored-by: 0x4d - Martin Loretz <20306567+martinloretzzz@users.noreply.github.com>
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
* Added test coverage for int32 in `test/test_dtype.py`
Tests for int32 include:
- testing that int32 can be converted into a numpy array
- testing that float and int64 can be cast into int32
- testing that int32 can be cast into float and int64
- testing addition, multiplication, and matrix multiplication with int32
- testing that addition, multiplication, and matrix multiplication between int32 and either float or int64 are successfully cast into float and int64, respectively
Additional changes include testing that int8 and float16 cast into int32
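A minimal sketch of the int32 coverage described above, assuming tinygrad's `Tensor` constructor and `dtypes` helpers take these forms (the import paths are assumptions based on the layout this log references):

```python
import numpy as np
from tinygrad.tensor import Tensor
from tinygrad.helpers import dtypes  # assumed location of the dtypes helpers

def test_int32_to_numpy():
    # an int32 tensor should round-trip into a numpy array
    t = Tensor([1, 2, 3, 4], dtype=dtypes.int32)
    np.testing.assert_array_equal(t.numpy(), np.array([1, 2, 3, 4], dtype=np.int32))

def test_int32_casts():
    # int32 should cast into float32 and int64
    t = Tensor([1, 2, 3, 4], dtype=dtypes.int32)
    assert t.cast(dtypes.float32).dtype == dtypes.float32
    assert t.cast(dtypes.int64).dtype == dtypes.int64

def test_int32_float_upcast():
    # mixed int32/float arithmetic should come back as float
    a = Tensor([1, 2], dtype=dtypes.int32)
    b = Tensor([1.0, 2.0], dtype=dtypes.float32)
    assert (a + b).dtype == dtypes.float32
```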
* Added type casting to the add, subtract, and divide binary operations
* Added automatic type casting when types differ to FusedOps.MULACC
Moved the match_types function back so it can be called in einsum_mulacc, where it casts the MULACC operands to the same type
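A hedged sketch of what a `match_types` helper like the one described here could look like; the real signature in `ops_cpu` may differ:

```python
import numpy as np

def match_types(x: np.ndarray, y: np.ndarray):
    # Promote both operands to their common numpy type so the fused
    # multiply-accumulate sees matching dtypes (illustrative sketch).
    up = np.promote_types(x.dtype, y.dtype)
    return x.astype(up, copy=False), y.astype(up, copy=False)
```

Note that `astype(..., copy=False)` only copies when a conversion is actually needed, which lines up with the later commit about removing unnecessary array copying.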
* Added unit test for match_types and added type hints to the parameters
* Added tests for ops_cpu.match_types
* Changed ops_cpu.einsum logic to play nicely with PyTorch
Changed `tinygrad.runtime.ops_cpu.einsum_mulacc` logic to not perform type matching. Type matching was instead moved to the numpy_fxn_for_op dictionary in the ops_cpu file. Since ops_torch uses the same einsum_mulacc function, this should fix all the broken pytorch tests.
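A sketch of the shape this change could take: `einsum_mulacc` stays type-agnostic, and the wrapper registered for `FusedOps.MULACC` in the `numpy_fxn_for_op` table handles the type matching. The wrapper and lambda signatures here are assumptions:

```python
import numpy as np

def match_types(x, y):
    up = np.promote_types(x.dtype, y.dtype)
    return x.astype(up, copy=False), y.astype(up, copy=False)

def einsum_mulacc(einsum_fn):
    # the shared mulacc helper just delegates; the backend-specific
    # function passed in from the op table does the type matching
    def mulacc(fmt, a, b):
        return einsum_fn(fmt, a, b)
    return mulacc

mulacc = einsum_mulacc(lambda fmt, a, b: np.einsum(fmt, *match_types(a, b), optimize=True))
out = mulacc("ij,jk->ik", np.ones((2, 3), np.int8), np.ones((3, 4), np.float32))
assert out.dtype == np.float32  # int8 promotes into float32
```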
* empty commit to rerun ci
* reverting PR#1213 in an attempt to fix broken test
* Removed all tests I added to see if they are causing CI issues
* Added back type matching tests
* removed type matching tests and added back int tests
* added back part of the type matching tests
* removed breaking type matching tests
* empty commit for testing
* added test back but inside comment
* removed a test from the comment to see if it breaks CI
* removed another function
* more testing
* emptied test comment
* cleaned up comments
* Added optimize=True flag to einsum_mulacc in ops_cpu.py
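For context, `optimize=True` is a standard NumPy flag: it lets `np.einsum` search for a cheaper contraction order (via `np.einsum_path`) instead of contracting strictly left to right. For example:

```python
import numpy as np

a = np.random.rand(16, 32)
b = np.random.rand(32, 64)
c = np.random.rand(64, 8)
# with optimize=True, einsum contracts in a cost-ordered sequence
out = np.einsum("ij,jk,kl->il", a, b, c, optimize=True)
```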
* Removed unnecessary imports from tests
* optimized match_types by removing unnecessary array copying
* initial commit
* added osx check for opencl
* added llvm f64 conversions
* typo in llvmir
* more tests and modified unsupported error
* fixed linting error
* added fp64 pragma
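The fp64 pragma here is presumably OpenCL's `cl_khr_fp64` extension enable, which must appear before any double-precision use in a kernel. A sketch of injecting it as a prekernel prefix (the function and variable names are assumptions):

```python
# cl_khr_fp64 is the standard OpenCL extension for double precision
FP64_PREKERNEL = "#pragma OPENCL EXTENSION cl_khr_fp64 : enable\n"

def render_kernel(body: str, uses_fp64: bool) -> str:
    # hypothetical prekernel plumbing: prepend the pragma only when
    # the kernel actually touches fp64
    return (FP64_PREKERNEL if uses_fp64 else "") + body
```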
* simplified exclusion for OSX
* fixed device check and also added it to cast func
* added ifdef check for fp16 in ops_gpu
* Revert "added ifdef check for fp16 in ops_gpu"
This reverts commit 92de754d48cba19c04ef20b3d4a1c3003046a9d0.
* made f64 prekernel signature match f16
* moved condition to buffer init
* added metal int64 and some simple tests
* removed bool return type def
* typo in test
* also missing in clang and gpu runtimes
* switched order for opencl
* increased atol and removed newline in kernel prefix
* feat: int8 support
* feat: uint8 support
* feat: int8 tests
* fix: fix uint8 on clang
* feat: test casting between int8/uint8/float16/float32
* clean: way cleaner dtype tests
* feat: preprocess_imagenet using the correct dtype
* feat: add test for overflow between uint8 and int8
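A minimal sketch of the uint8/int8 overflow test idea, under the same assumed `Tensor`/`dtypes` names as above: 255 stored as uint8 wraps to -1 when reinterpreted as int8.

```python
import numpy as np
from tinygrad.tensor import Tensor
from tinygrad.helpers import dtypes  # assumed import path

def test_uint8_to_int8_overflow():
    # 0xFF is 255 as uint8 but -1 as int8
    t = Tensor([255], dtype=dtypes.uint8).cast(dtypes.int8)
    np.testing.assert_array_equal(t.numpy(), np.array([-1], dtype=np.int8))
```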
* linearizer outputs something
* working-ish
* cstyle codegen
* clang mostly works
* fix load valid
* fix numberless loop
* fancy gen
* working
* fix enet compiler
* cleanups
* float4 upcasting
* less lines
* supports_float4
* constant folding
* mulacc
* internet tests flaky in CI
* 90% image support
* fix image generic
* bugs exposed with shapetracker and single view
* new llvm
* use vload, remove OLD
* that's really poorly done
* ended up being more lines
* add dtype class
* dtypes
* buffers are lazy
* dtype is tracked by lazybuffer and GenericShape
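A hedged sketch of a minimal dtype class like the one being added here, tracking the metadata a lazy buffer would need; the field names are assumptions modeled on numpy's dtype metadata:

```python
from typing import NamedTuple
import numpy as np

class DType(NamedTuple):
    priority: int   # used to order type promotion
    itemsize: int   # size in bytes
    name: str       # display name for codegen/debugging
    np: type        # backing numpy scalar type

class dtypes:
    float32 = DType(10, 4, "float", np.float32)
    int32 = DType(5, 4, "int32", np.int32)

    @staticmethod
    def from_np(x) -> DType:
        # reverse lookup from a numpy scalar type (sketch only)
        return {np.float32: dtypes.float32, np.int32: dtypes.int32}[np.dtype(x).type]
```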
* fix types in llvm
* llvm store
* dtype tests
* fix tests maybe
* fix flop counter
* fix CI
* CI fix and check format
* fix dtype and dtype check
* fix custom test
* fix test graph