* add some helpers
* I think it should all work...
* fixed get_set_tensor
* done
* del import
* bye bye typing
* style
* remove empty lines lol
* deleted dtype arg
* del trailing space
* new getitem
* go
* add temporary simple tests
* better
* comments
* WOW that took a while
* save 1 line lol
* work
* still need to add comprehensive tests, but i think getitem looks nice :D
* GIMME GREEN CI CHECKMARK PLS
* try..
* k idk
* added tests for errors
* fixed small hack
* added tests
* almost good
* try no contig?
* yay no more contig + comments and spacing
* finishing touches (comments)
* revert regex unittests lol
* add suggested change
* oops I fell asleep yesterday
* handle reshape of contiguous subparts with explicit mask
* remove the add/remove ones logic in reshape
* accommodate ones in accumulate logic
* make multiply commutative
* fix linting
* make mypy happy
* add test for commutative mul
* merge dimensions in shape_strides for 1 range masks
* add offsets for merging
* fix linting
* add back explicit 1 reshapes
* fix mypy errors
* fix accumulate by including state
* include non-zero stride dimension in acc
* small cleanup
* more compact to_shape_strides
* more logical cleanup
* compress more
* compress reshape mask
* adding some comments
* small bug fix
* improve test coverage
* remove explicit add/remove ones
* small bug in test
* enable test_reshape_splitting_combining
* small fix
* 10 fewer lines in to_shape_strides
* shorten reshape mask
* some more cleanup
* more cleanup
* introduce some symbols for compactness
* more symbols
* even cleaner
* use fewer symbols, it became less readable
* remove merge_views from view.reshape
* change to_shape_strides to _merge_dims
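For context, a minimal sketch of the dimension-merging idea (my simplification, not the actual helper; the real `_merge_dims` also carries the mask/offset bookkeeping from the surrounding commits): adjacent dims collapse when the outer stride equals inner size times inner stride.
```
# simplified sketch of stride-based dimension merging, not tinygrad's actual _merge_dims
def merge_dims(shape, strides):
  out = [(shape[0], strides[0])]
  for s, st in zip(shape[1:], strides[1:]):
    ps, pst = out[-1]
    if s == 1: continue                         # unit dims never affect the layout
    elif ps == 1: out[-1] = (s, st)             # a leading unit dim is replaced outright
    elif pst == s * st: out[-1] = (ps * s, st)  # contiguous pair: fold into one dim
    else: out.append((s, st))
  return out

assert merge_dims((2, 3, 4), (12, 4, 1)) == [(24, 1)]  # fully contiguous -> one dim
assert merge_dims((2, 3), (1, 2)) == [(2, 1), (3, 2)]  # strides don't line up -> kept apart
```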
* improve readability
* fix corner case
* cleanup
* better handling of `1 <= Variable('i',1,10)` when `new_dim = Variable('i',1,10)`
* rewrite _reshape_mask for readability
* fix white space
* add comment
* nice shorthands for readability
* add proof in docs
* small nit
---------
Co-authored-by: chenyu <chenyu@fastmail.com>
* `global_load` and `global_store` using buffer dtype
* `UOps.PHI` in all dtypes
* `UOps.ALU` in all dtypes
* `UOps.CONST` & `UOps.DEFINE_ACC` in all dtypes
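A hedged sketch of what "in all dtypes" means for a C-style renderer (hypothetical helpers, not the actual tinygrad code): loads and consts are emitted with the buffer's dtype instead of a hard-coded float.
```
# hypothetical renderer fragments; names are illustrative
def render_load(dtype: str, buf: str, idx: str) -> str:
  return f"{dtype} val0 = {buf}[{idx}];"          # load in the buffer's own dtype

def render_const(dtype: str, value: float) -> str:
  return f"{value}f" if dtype == "float" else f"({dtype})({value})"

print(render_load("half", "data0", "gidx0"))      # -> half val0 = data0[gidx0];
```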
* -- end of implementation --
+tiny lint changes
* these tests require the fp16 extension
you can run them locally to confirm they're green (the GPT2 test is broken in master on Mac, see [this](https://discord.com/channels/1068976834382925865/1069001075828469790/1177993277958533261)):
`GPU=1 python3 -m pytest test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_dequantizelinear_e4m3fn_float16_cpu test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_max_float16_cpu test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_min_float16_cpu test/models/test_real_world.py::TestRealWorld::test_llama test/models/test_real_world.py::TestRealWorld::test_gpt2 test/models/test_whisper.py test/test_specific_conv.py::TestSpecific::test_big_vec_mul`
skip the new test_linearizer_failures in CI GPU because of the fp16 extension
This passes on a real GPU since the extension is available:
`GPU=1 python3 -m pytest test/test_linearizer_failures.py::TestLinearizerFailures::test_failure_8`
see CI logs [here](https://github.com/tinygrad/tinygrad/actions/runs/6996590597/job/19032641427#step:14:644)
* these tests fail in CI due to segfaults and CPU crashes
To confirm they're green locally, you can run the following commands:
1. For the tests skipped in test_ops.py (note: CLANG is very slow):
`for var in GPU CUDA CLANG; do export $var=1; for test in test/test_ops.py::TestOps::test_slice_fancy_indexing_no_dim_collapse test/test_ops.py::TestOps::test_slice_fancy_indexing_dim_collapse_int test/test_ops.py::TestOps::test_slice_fancy_indexing_dim_inject_none test/test_ops.py::TestOps::test_slice_fancy_indexing_dim_inject_and_collapse; do python3 -m pytest $test; done; unset $var; done`
2. For the ONNX tests skipped in CLANG:
```
CLANG=1 python3 -m pytest test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_ai_onnx_ml_array_feature_extractor_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_gather_elements_0_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_sce_mean_weight_ii_3d_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_gather_elements_1_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_sce_NCd1_mean_weight_negative_ii_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1_weight_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1d2d3_none_no_weight_negative_ii_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1_ii_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_sce_mean_weight_ii_4d_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_sce_mean_weight_ii_3d_log_prob_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_gather_elements_negative_indices_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_sce_NCd1d2d3d4d5_mean_weight_log_prob_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_sce_NCd1_mean_weight_negative_ii_log_prob_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1d2_no_weight_reduction_mean_ii_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_sce_NCd1d2d3d4d5_mean_weight_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1d2d3d4d5_mean_weight_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1_mean_weight_negative_ii_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_sce_mean_weight_ii_4d_log_prob_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1d2_with_weight_reduction_mean_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1_weight_ii_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1d2_with_weight_reduction_sum_ii_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1d2_with_weight_reduction_sum_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1d2_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1d2_reduction_sum_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1d2d3d4d5_none_no_weight_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1d2d3_sum_weight_high_ii_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1d2_reduction_mean_expanded_cpu \
test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_nllloss_NCd1d2_with_weight_expanded_cpu
```
3. The LLVM test I skipped here is already [skipped in master for all backends](https://github.com/tinygrad/tinygrad/blob/master/test/external/external_test_onnx_backend.py#L186); I just made it more specific
`LLVM=1 python3 -m pytest test/external/external_test_onnx_backend.py::OnnxBackendNodeModelTest::test_dequantizelinear_e4m3fn_float16_cpu`
* Revert "these tests fail in CI due to segfaults and CPU crashes"
This reverts commit 15db57014381a4449d563526ac6c870e36257658.
* merge with cleanup-vectorized-hip-renders
* barely working HIP P1, ALU ops need a refactor?
* manage the fact that in HIP, [half2 is actually an unsigned int vec](f921880387/hip/include/hip/amd_detail/amd_hip_fp16.h (L59)) and half is a totally different __half that [has an unsigned int element in it](f921880387/hip/include/hip/amd_detail/amd_hip_fp16.h (L50)), which [can't be accessed because it's private](f921880387/hip/include/hip/amd_detail/amd_hip_fp16.h (L86)). If you just do this:
```
half2 val0 = __floats2half2_rn(1.0f, 2.0f);  // e.g.; per the links above, really an unsigned int vec
half val1 = __float2half(3.0f);              // e.g.; a distinct __half type
```
then you can't do:
```
val0.x + val1 // error: use of overloaded operator '+' is ambiguous (with operand types 'unsigned short' and 'half' (aka '__half'))
__low2half(val0) + val1 // one possible workaround, assuming the standard fp16 intrinsics: extract a proper __half first
```
* update the sign definition to avoid division by zero in all dtypes
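Roughly the tradeoff behind this commit and the later revert below ("back to the sign we had before"): the division form needs an epsilon to stay safe at zero, while a division-free sign is exact but built from comparison nodes that, per the revert note, can't be backpropagated. A sketch with plain Python floats, not the actual tinygrad definitions:
```
# illustrative only: two ways to define sign, using plain Python floats
def sign_div(x, eps=1e-12):
  return x / (abs(x) + eps)   # eps keeps x == 0 from dividing by zero

def sign_cmp(x):
  return (x > 0) - (x < 0)    # exact -1/0/1, but made of comparison ops

assert sign_cmp(-3.0) == -1 and sign_cmp(0.0) == 0
assert abs(sign_div(2.0) - 1.0) < 1e-9
```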
* diff cleanup p1: why were these in the diff anyways
* less hacky HIP, enable CIFAR fp16 benchmark, test ops for HIP in CI!
add ALU ops overloads for HIP
this will make HIP max work
handle mod
Revert "handle mod"
This reverts commit 370fd4b3fbe99b6ae8cc293d005b106628205933.
update max to use hmax
add HIP GEP render logic
enable CIFAR fp16 benchmark
test ops for HIP
back to store as float because this only works for float4 grouping right now
test_ops for hip!!
always sign
* back to the sign we had before because we can't do a backward pass on a Less node
* remove old hacks
compiling test_ops for HIP in CI takes ~9 mins, not doing it for now
new HIP ALUs
* reduce accs done right
* refactor to function
* no device hacks
hacks p2
the other way
* LLVM ALU ops
half, float, and double are all treated as float
update max
* update test_uops: cmplt is always a bool in the real linearizer, and assertAlmostEqual is wrong when ret is a bool
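The assertAlmostEqual point is worth a demonstration: Python bools coerce to 0/1, so an almost-equal check silently passes for a bool result where an exact comparison is what's wanted. A hypothetical test, not from the repo:
```
import unittest

class TestBoolAlmostEqual(unittest.TestCase):
  def test_bool_coercion(self):
    ret = True  # e.g. the result of a cmplt, which is always a bool
    # passes: True coerces to 1, and round(1 - 1.0000000001, 7) == 0
    self.assertAlmostEqual(ret, 1.0000000001)
    # for a bool result, exact equality is the meaningful check
    self.assertEqual(ret, True)

if __name__ == "__main__":
  unittest.main()
```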
* cleanup LLVM wrong code
* dummy change for the CUDA install glitch
---------
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
* remove pytest marks
* test more stuff
* fine revert some
* add that mark back
* skip that
* hmm, LLVM does not work on Ubuntu
* too slow on CUDA CI
* dup test
* feat: working voice 2 text using whisper
* feat: added llama generation
* feat: vits init
* feat: more accurate voice conversion
* feat: support for tts and working pipeline for the first pass
* fix: linter checks
* refactored vits initialization and inference, added mmts-tts support
* fixed process sync and now we can have an infinite conversation
* reuse output stream to remove overhead of creating a new one each time
* added pre-prompt configuration with yaml files
* adjusted code to merge PR which changed whisper
* optimized whisper: now it's blazing fast, and also reduced the number of lines
* added better debug printing
* use jitted encode function for whisper, added timings and removed response delim to save speed on generating those tokens
* fixed hf convert and now it's working with tinyllama
* added tinyllama config
* refactored code and made it work with all llama models
* prettier order
* prettier order
* fixed suffix for tinyllama and refactored convert_from_hf
* added missing parameters
* fixed stream release and added missing params
* jitted dp and encoder
* jitted flow forward
* removed re-init of espeak on each call to save time
* jitted generator forward for blazing fast tts
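For readers unfamiliar with the "jitted" commits above: tinygrad's TinyJit captures the kernels a function launches and replays them on later calls, skipping the Python dispatch overhead. A minimal sketch (import path as of this era of the codebase; outputs must be realized and input shapes fixed):
```
from tinygrad.tensor import Tensor
from tinygrad.jit import TinyJit  # import path at the time of these commits

@TinyJit
def forward(x: Tensor) -> Tensor:
  # kernels are recorded on the warm-up calls, then replayed directly
  return x.relu().sum().realize()

for _ in range(3):
  out = forward(Tensor.rand(4, 4))  # same shapes every call, as the JIT requires
```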
* added contextmanager for displaying a chat log
* removed whitespace for pylint
* updated code to support latest fetch func
* wait for llama eos token and pass params from cli to llama
* listen for a non-fixed amount of time
* refactored code a bit
* removed thresholding and now the output streams directly to whisper
* tokenize llama output so the vits batch size works, and stream each sentence to a speaker
* changed speaker
* whisper is now printing on the same line
* don't trigger llama on whisper output in parens
* added tinyllama chat model
* adjusted code to work with tinyllama chat model
* removed unused cli arg
* autofetch tokenizer and tinyllama model. add 3 chat tokens to the tokenizer
* fixed issue with long sentences by chunking them
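The chunking fix, as a hypothetical helper (the real code may differ): greedily pack words into bounded chunks so the TTS never sees an over-long sentence.
```
# hypothetical sentence chunker: split on whitespace, pack greedily under max_len
def chunk_sentence(text: str, max_len: int = 80):
  chunk = []
  for word in text.split():
    if chunk and len(" ".join(chunk + [word])) > max_len:
      yield " ".join(chunk)
      chunk = []
    chunk.append(word)
  if chunk: yield " ".join(chunk)

assert all(len(c) <= 80 for c in chunk_sentence("one long llama reply " * 40))
```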
* support for multiline llama output
* prettified log output
* adjusted sentence length
* remove quotes from the response to avoid funny tts
* fixed prompts
* added missing parameter
* ops_gpu is go
* fix size 0
* fix image, and add more tests
* nerf openpilot test, doesn't test thneed
* run the schedule
* better
* oops, new inputs
* delete pyopencl
* Update ops_gpu.py
* update cstyle renderers to take a dtype in code_for_op
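Roughly what taking a dtype in code_for_op enables (a sketch of the idea, not the real cstyle.py table): each op's render lambda sees the dtype, so bools and halves can emit different code, e.g. the NEG-for-bools and hmax cases from these commits.
```
# sketch of a dtype-aware op table, loosely modeled on a C-style renderer
code_for_op = {
  "NEG": lambda x, dtype: f"(!{x})" if dtype == "bool" else f"(-{x})",
  "MAX": lambda a, b, dtype: f"hmax({a},{b})" if dtype == "half" else f"max({a},{b})",
}

assert code_for_op["NEG"]("val0", "bool") == "(!val0)"
assert code_for_op["MAX"]("a", "b", "half") == "hmax(a,b)"
```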
* implement NEG for bools in LLVM
* update triton
---------
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>