* bring hip graph back
* share with metal
* fix linter
* remove hasattrs
* Update ops_hip.py
* hip wrapper does not use _buf
---------
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
* torch and numpy don't share ops anymore
* that should be filtered out elsewhere
* still const
* graph + enet example cleanup
* hmm, we do still need it because of symbolic
* add name support
* use fetch in gpt2
* remove requests from main lib, networkx also optional
* umm, keep that assert
* updates to fetch
* i love the walrus so much
* stop bundling mnist with tinygrad
* err, https
* download cache names
* add DOWNLOAD_CACHE_VERSION
* need env.
* ugh, wrong path
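The fetch helper now handles downloads for the examples. A minimal usage sketch, assuming the `name=` keyword matches the "add name support" commit above (the URL is illustrative, not a real endpoint):

```python
from tinygrad.helpers import fetch

# downloads into the local cache and returns a pathlib.Path;
# repeated calls with the same name hit the cache instead of the network
weights = fetch("https://example.com/model.safetensors", name="model.safetensors")
print(weights)
```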
* replace get_child
* update HIP language
* vectorized render_cast with special treatment for hip only
* test coverage for all cases
---------
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
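A rough sketch of what a vectorized `render_cast` with a HIP special case can look like. This is illustrative only: the function shape and the exact HIP vector spelling here are assumptions, not the real `CStyleLanguage` code.

```python
def render_cast(values: list[str], dtype: str, hip: bool = False) -> str:
  # scalar cast: a plain C-style cast
  if len(values) == 1: return f"({dtype})({values[0]})"
  # vector cast: assume HIP wants brace construction, while other
  # C-style backends accept a vector-literal cast (an assumption)
  if hip: return f"{dtype}{{{','.join(values)}}}"
  return f"({dtype})({','.join(values)})"

print(render_cast(["a","b","c","d"], "float4"))            # (float4)(a,b,c,d)
print(render_cast(["a","b","c","d"], "float4", hip=True))  # float4{a,b,c,d}
```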
* remove force_wait
* refactor
* get rid of stupid ASTRunner
* fix del in diskbuffer
* BufferOps.FROM_UNDERLYING
* put offset in the rawbuffer
* fix bugs
* use exec
* steal from https://github.com/PalauReq
* tests passing but not correct
* move _realize_from if statements to lib.py
* oneline
* cleanup
* remove imports & add P2P back in
* cleanup
* fromBuffer & call fromCPU rather than super().fromBuffer
* remove whitespace
* move RawBufferMapped.fromBuffer functionality to RawDiskBuffer
* remove classmethod and realize
---------
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
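The disk-buffer commits above move toward reading straight from an mmap'd file instead of bouncing through fromCPU. A minimal sketch of the idea, with hypothetical names rather than the actual RawDiskBuffer API:

```python
import mmap
import numpy as np

def load_from_disk(path: str, dtype=np.float32, offset: int = 0) -> np.ndarray:
  # map the file read-only; the OS pages the data in lazily
  with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
  # zero-copy view over the mapping, honoring the offset kept in the rawbuffer
  return np.frombuffer(mm, dtype=dtype, offset=offset)
```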
* autopad shapetracker for BEAM
* OptOps.PADTO
* skip that test for now
* correct padding reduce axis
* just 32
* avoid more than double the FLOPs
* cleanups
* test case
* no support for triton and llvm yet
* typos
* symbolic shape would not work
* cannot PADTO with MAX kernel (see the sketch below)
* advance db version
* no breaking change - don't advance db version
* is triton just python?
* Revert "is triton just python?"
This reverts commit 17e776c25587615e33a3634c2fb0bb8591ce65d4.
* Revert "Revert "is triton just python?""
This reverts commit 6c434c01e1c4b0ea0431ec18632cd859fb3cf260.
* support llvm
* is it really passing in CI only?
* update tests
* oh triton test passed
* simpler
* revert that, with a test
* check if st are the same
* Revert "check if st are the same"
This reverts commit d2a5eac110a5da1af82a2728c883779ef69c3cad.
* update the db version
* rebase artifact
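Why PADTO must skip MAX kernels: padding the reduce axis with zeros leaves a SUM unchanged, but can corrupt a MAX when every real element is negative. A tiny illustration (plain numpy, not tinygrad code):

```python
import numpy as np

x = np.array([-3.0, -1.0, -2.0])
padded = np.pad(x, (0, 1))      # zero-pad the reduce axis up to a nicer size

assert x.sum() == padded.sum()  # SUM: the extra zeros contribute nothing
assert x.max() != padded.max()  # MAX: the padding value 0.0 wins incorrectly
```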
* replace all _dtypen with dtype.vec(n)
fix: print works
* conceptual refactor of cstyle render_load logic
* linearizer GEP is explicit that its dtype is the scalar version of localtype
* vectorized global_store and load don't need a conditional
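The `dtype.vec(n)` spelling replaces the old `_dtypen` aliases. A quick sketch of the new form (the `dtypes` import path reflects where it lived at the time and may have moved since):

```python
from tinygrad.helpers import dtypes

vec4 = dtypes.float32.vec(4)   # was: dtypes._float4
print(vec4)
```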
* fixed hf convert and now it's working with tinyllama
* added tinyllama config
* refactored code and made it work with all llama models
* prettier order
* prettier order
* fixed suffix for tinyllama and refactored convert_from_hf
* dynamically update help if MODEL_PARAMS changes and default size is the 1st
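For context on convert_from_hf: HuggingFace llama checkpoints use different weight names than the reference llama layout, so conversion is largely key remapping. An illustrative sketch; the substitutions below are the conventional HF-to-llama pairs, not the exact tinygrad code:

```python
def remap_hf_key(k: str) -> str:
  # conventional HuggingFace -> reference-llama weight-name pairs
  for src, dst in [
    ("model.embed_tokens", "tok_embeddings"),
    ("model.layers.", "layers."),
    ("self_attn.q_proj", "attention.wq"),
    ("self_attn.k_proj", "attention.wk"),
    ("self_attn.v_proj", "attention.wv"),
    ("self_attn.o_proj", "attention.wo"),
    ("mlp.gate_proj", "feed_forward.w1"),
    ("mlp.down_proj", "feed_forward.w2"),
    ("mlp.up_proj", "feed_forward.w3"),
    ("input_layernorm", "attention_norm"),
    ("post_attention_layernorm", "ffn_norm"),
    ("model.norm", "norm"),
    ("lm_head", "output"),
  ]: k = k.replace(src, dst)
  return k

print(remap_hf_key("model.layers.0.self_attn.q_proj.weight"))  # layers.0.attention.wq.weight
```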