* cache loads across buffers (since they may share rawbufs)
* typing
* add test
* fix test
* small changes to test
* fix test
* one big cache
* whitespace
* golf a line?
* invalid is RawBuffer(0)[0], valid 1.
* Symbolic Shape JIT
update tests
2-variable symbolic ops, adding more tests
test passing
cleanup
* more test cases
* single flag
* review update
* jit attention one piece
* realize
* symbolic_jit test for cuda
* old artifact
* works with cuda gpu but failed ci
* CUDACPU
* feat: train cifar using multigpu
* feat: split eval batch across 5
* feat: cleaner allreduce
* feat: 93.88%
* feat: cleaner batch chunking from bert
* feat: cleaner grad sync
* feat: tinygrad argmax
* feat: make it work with different gpu counts
* feat: move some stuff into the normal __init__
* feat: autodetect gpu count
* feat: move import inside
* move assembly, assembly_ptx
* successful but broken rendering of ptx asm
* clear ins before render asm
* slightly less broken :')
* we needed thread syncs
* fix float16 loading, rounding modifiers and other casting stuff, passing casts_from_half
* Fix runtime_args for gpuocelot
* our casts were flipped on both ends
* more casting
* add ternary where op
* dealing with storing/loading bool
* add test for casting to bool from negative
* Fix args.valid on ConstOp
* add to CI, TODO: fix runtime_args for test_uops
* fix placement of runtime_args to work with lazy.Device
* undo ci changes so I can push
* fix lints
* start cleanup and fix things we broke fixing lints
* add checks for PTX-specific asm instructions
* revert added test -- doesn't pass on llvm
* skip tests for underflow, overflow
* another fix for how we're setting runtime args
* Less broken cleanup
* add to CI
* add more env variables for ci test
* fix ci to install pycuda for ptx
* ci: copy cuda test command
* cleanup
* assert to make sure we're actually running ptx in ci
* remove test assert
* move is_ptx arg
* move assembly, assembly_ptx back to extras
* fix imports
* initial merge fixes
* clear registers, fix UOps.LOAD with invalid value
* draft merge fixes
* remove prints
* quick lint and merge fixes
* cleanup
* remove PTXProgram wrapper
* final cleanup
* temp change for ci rerun
* ci rerun
* rollback ISA version
* try to run commavq
* fix 0 dim, start implementing new ops
- Implement EmbedLayerNormalization
- Implement Attention
* SkipLayerNormalization and FastGelu
* use original torch model, cast inputs
* fix some ops:
- properly do Cast
- Attention: bi- and unidirectional
- FastGelu: add bias before gelu
* cleanup onnx_ops.py
* add validation option to benchmark
* cleanup imports
* add checks in case onnx2torch implements ops in the future
* run onnx instead of original torch
* just skip gpu on m1
* reactivate the other models
* check for strange params & squash whitespace
* cleanup
* fix causal mask Attention
* Range doesn't need int cast
* embedding vocab_counter same dtype as input
* no need to cast
* always validate, fix PosixPath ort
---------
Co-authored-by: George Hotz <george@comma.ai>
* testing new memops
* better debugging
* testing padded conv
* branching with load
* refactoring a bit
* first try
* fixing bugs
* fixing some
* eq
* eq2
* do not use x's
* working
* fixing imm
* getting things working
* refactor
* pow not working
* working except one
* refactor: one store mem
* refactor: global load
* refactor: imm
* refactor: cleaning
* fixing big offsets
* refactor with ci
* try ci
* typo
* another typo
* ubuntu default
* forgot git
* do i need git?
* missing packages
* adding python-dev
* with cache?
* buildx action
* buildx name issue?
* maybe now?
* python3
* newline warning
* maybe now
* i actually need this
* ci should work now
* improved caching
* fixing cache
* maybe now it will cache
* this
* testing cache
* trying again
* load
* missing platform
* caching gha
* testing cache
* full testing
* typo
* now?
* why
* adding checkout back
* bad formatting
* fixing convention issues
* supporting python
* adding CI flag
* testing all
* better comments
* adding debugging
* takes 12x longer
* does it output progress now?
* ignore models for speed
* fixing merge
* excluding conv_transpose2d
* only 2 tests cuz it's too slow
* another approach
* let's see
* faster duh
* my bad
* T_T
* typo
* sup
* with output?
* comment test
* comment test
* comment test
* :?
* no comment
* with cache
* back to normal
* testing that ci works
* back to passing
* trying again
* does it create another entry
* does it create another entry?
* build local
* hey
* Revert "excluding conv_transpose2d"
This reverts commit cc7348de03033e032f47d69caff174e2f1a7bfea.
* does it cache if done before?
* does it cache?
* done
* adding test ops
* bad formatting
* no need for this
* working static mem
* sum 1d
* add ndim
* better reg import
* fix stack
* back to np
* working except for softmax
* 5 failing
* no progress
* remove keystone
* remove keystone
* testops passing
* cleanups
* more cleanup
* typo
* ci
* ci2
* cond import
* ci3
* ci4
* ci4
* ci5
* ci5
* ci6
* alignment
* test all
* correct test
* err read_unmapped
* passing test
* ignore for speed
* ignore for speed
* ci7
* cleanup
* remove docker
* fixing merge
* fixing bugs
* add skipload for const ops
* comments
* First merge to master: Renderer
* fix emulation
* passing all tests on arm64
* cleaning
* fix handcoded binary
* cleaning
* fix errs
* fix runtime arg binary
* clean git diff
* fix and clean
* fixing metal test
* cleaning
* fix metal test
* ci ~8 min
* fix pylint and clang
* cache the files in ops_clang
---------
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>