chenyu
bd8ecf7fd6
remove NumNode (#7035)
2024-10-13 16:42:19 -04:00
George Hotz
a71bb09ec3
remove symbolic file [pr] (#7012)
2024-10-12 18:44:44 +08:00
George Hotz
5ae2de9845
UOp.variable (#7010)
* UOp.variable [pr]
* fix tests
* clean
* improve name rendering
* last bug
2024-10-12 18:20:44 +08:00
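Together, #7010, #7012, and #7035 above retire the old symbolic node classes in favor of UOps. A minimal sketch of a symbolic dimension after that refactor, assuming the post-#7012 import path and that UOp.variable takes a name plus inclusive min/max bounds:

    # sketch, assuming post-#7012 tinygrad where a Variable is just a UOp
    from tinygrad.ops import UOp

    i = UOp.variable("i", 1, 10)  # symbolic dim "i", bounded to [1, 10]
    bi = i.bind(5)                # the same variable carrying a concrete runtime value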
George Hotz
738a5794a9
last update for new symbolic [pr] (#6877)
2024-10-04 14:58:51 +08:00
George Hotz
7214450c23
little symbolic changes [pr] (#6849)
* little symbolic changes [pr]
* symbolic needs resolve too
* no resolve
* less change
2024-10-02 17:12:30 +08:00
George Hotz
67a03e72bb
remove expr_idxs [run_process_replay] (#6567)
* remove expr_idxs [run_process_replay]
* goodbye that test
2024-09-17 18:34:51 +08:00
chenyu
1941e66cc9
real strides with uops (#6365)
* real strides with uops [run_process_replay]
* compare with old
* Revert "compare with old"
This reverts commit f53a8d42768e0b95d37b1bae8e80e288a69c6e3f.
* make those @unittest.expectedFailure
2024-09-09 03:06:27 -04:00
chenyu
ad05302232
tests of real_stride of symbolic shape (#6409)
these would have failed in #6365
2024-09-08 21:37:19 -04:00
chenyu
99e7a1d5e9
support symbolic reshape with non-contiguous (#4844)
* support symbolic reshape with non-contiguous
prerequisite for symbolic arange (makes it possible to create symbolic ones that can be const folded).
* test cases
* typo
* shorter
2024-06-05 16:01:19 -04:00
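A sketch of what #4844 allows, assuming the pre-#7012 symbolic API: reshaping a non-contiguous (here, permuted) tensor into a target shape containing a bound Variable:

    # sketch: symbolic reshape of a non-contiguous tensor (#4844)
    from tinygrad.tensor import Tensor
    from tinygrad.shape.symbolic import Variable

    i = Variable("i", 1, 10).bind(3)
    t = Tensor.rand(4, 3).permute(1, 0)  # shape (3, 4), non-contiguous view
    u = t.reshape(i, 4)                  # previously required a contiguous base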
chenyu
236390aafb
fix lazy r const folding with variable shape (#4783)
currently not supporting const fold with symbolic shape; I think it's possible with a refactor of Tensor.from_node.
also added some required tests for symbolic arange that currently fail.
2024-05-30 15:19:28 -04:00
chenyu
145718a90f
unbind view or shapetracker also returns var_val (#3067)
* unbind view or shapetracker also returns var_val
4% faster llama compile time
* one line less
* unbound_views
2024-01-09 21:45:05 -05:00
George Hotz
c003be7309
Revert "track size in shapetracker" ( #3043 )
...
* Revert "track size in shapetracker (#3026 )"
This reverts commit a8ba1ac08f
.
* st.size
2024-01-08 13:13:39 -08:00
George Hotz
a8ba1ac08f
track size in shapetracker (#3026)
* track size in shapetracker
* shapetracker adapter
* size is an int
* create Buffer with st.size
* only compare the views for the jit
* fix webgpu
2024-01-05 20:15:53 -08:00
George Hotz
c6eb618013
tests from new lazy branch (#2774)
* tests from new lazy branch
* fix lin 11
* that was needed
* doesn't fail
* mark
* meant that
* llvm passes
2023-12-14 23:06:39 -08:00
George Hotz
6d6eb9302d
ruff checks the max line length is 150 (#2734)
* ruff checks the max line length is 150
* fix tensor.py
* a lot more
* done
2023-12-12 17:34:47 -08:00
chenyu
371005cb2d
use one kvcache tensor in gpt2 instead of two separate caches (#2662)
* use one kvcache tensor in gpt2
* test case
* is None
* better test cases
2023-12-06 20:59:17 -05:00
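The change in #2662, sketched: instead of two cache tensors, keep one tensor with a leading axis of size 2 and view keys and values out of it. All names below are illustrative, not the exact ones in examples/gpt2.py:

    # illustrative sketch of a fused kv cache (hypothetical names)
    from tinygrad.tensor import Tensor

    bs, max_context, n_heads, head_dim = 1, 128, 12, 64
    # one buffer for both caches: index 0 holds keys, index 1 holds values
    cache_kv = Tensor.zeros(2, bs, max_context, n_heads, head_dim).contiguous().realize()
    keys, values = cache_kv[0], cache_kv[1]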
chenyu
7d26452305
call ruff with --preview (#2522)
some checks are ignored without --preview
2023-11-30 13:59:00 -05:00
Christopher Mauri Milan
7f01dd04f0
Apply ruff linting rules to tests (#2473)
* everything except F821
* enable F821 with noqa
* dumb fix
* fix remaining imports and (former) lambdas
* replace _ with noqa to avoid gc
2023-11-27 21:24:06 -08:00
George Hotz
8ff2e13550
From teeny (#2426)
* changes from teenygrad work
* support not supporting ImageDType/PtrDType
* fixups from teeny
2023-11-24 12:50:56 -08:00
chenyu
f02e17a967
Variable.num -> NumNode (#2354)
2023-11-18 15:45:52 -05:00
chenyu
ad3d7428fa
good line shaves in st and faster (#2343)
2023-11-17 11:00:26 -05:00
George Hotz
c0a033f01d
remove real_offset (#2234)
* remove real_offset
* pass in numnode
* remove that real_offset
* sample only for variable
2023-11-07 17:30:53 -08:00
chenyu
e2b83f1b42
Variable.bind newer (#2017)
* Variable.bind attempt 2
* ShapeTracker.unbind
* fix llama
* fix types
* test case
* View.vars cleanup
* include mask in symbolic source
* mask can be sint
* st.unbind in bufferops
* assert ast contain free Variable only
* cleanup
* conservative unbinding reduce op arg
* move reduceop unbind
* fix llama JIT arg behavior
2023-10-10 10:03:01 -07:00
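The bind/unbind flow #2017 introduced, sketched with the symbolic API of the time (the import path is the pre-#7012 one):

    # sketch of Variable.bind as of #2017
    from tinygrad.shape.symbolic import Variable

    start_pos = Variable("start_pos", 1, 1024)  # free variable, bounds [1, 1024]
    bound = start_pos.bind(17)                  # same variable, now carrying a value
    # ShapeTracker.unbind() strips the value back out, so the compiled kernel is
    # keyed on the free variable and {start_pos: 17} is supplied at launch time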
George Hotz
20059dc55b
Make ShapeTracker Immutable (#1909)
* ugh
* ops test pass
* fix shapetracker tests
* sym shapetracker
* shapetracker is a tuple of views now
* from_shape
* fix has variable shape
* key isn't needed
* post init assert
2023-09-24 21:09:03 +08:00
chenyu
e67306ba04
symbolic shape type with TypeGuard (#1852)
2023-09-13 05:27:22 +08:00
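The TypeGuard idea in #1852, sketched: a runtime check on a possibly-symbolic shape that also narrows its static type. The helper mirrors tinygrad's helpers.all_int; treat the exact signature as an assumption:

    # sketch of a TypeGuard-narrowed shape check (mirrors helpers.all_int)
    from typing import Any, Sequence, Tuple
    from typing_extensions import TypeGuard  # typing.TypeGuard on Python 3.10+

    def all_int(t: Sequence[Any]) -> TypeGuard[Tuple[int, ...]]:
      return all(isinstance(s, int) for s in t)

    shape: Tuple[Any, ...] = (3, 4)
    if all_int(shape):
      prod = 1
      for s in shape: prod *= s  # the type checker knows s is int here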
chenyu
3ec301c2d7
apply view.py patch (#1844)
2023-09-10 17:32:15 -07:00
chenyu
ebcda8a714
Move var_vals from ShapeTracker to LazyBuffer (#1819)
2023-09-08 09:25:10 -07:00
chenyu
89e13f2f04
support symbols in shrink (#1611)
2023-08-22 09:08:21 -07:00
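What #1611 enables, sketched with the era's API: shrinking a dimension down to a symbolic length, the kind of pattern llama's attention cache relies on:

    # sketch: shrink to a symbolic length (#1611, pre-#7012 import path)
    from tinygrad.tensor import Tensor
    from tinygrad.shape.symbolic import Variable

    i = Variable("i", 1, 10).bind(4)
    t = Tensor.rand(3, 10).shrink(((0, 3), (0, i)))  # keep the first i columns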
chenyu
be50b2fe8f
more symbolic symbolic ops (#1564)
* more symbolic symbolic ops
* handle NumNode in __mul__
2023-08-18 09:21:41 -07:00
chenyu
11dd9b1741
symbolic codegen and exec (#1552)
* symbolic codegen and exec
* fix and add test
* no sketchy
* merge_dicts type
* dtypes._arg_int32
2023-08-16 14:43:41 -07:00
chenyu
a89142e46f
ShapeTracker.var_vals (#1540)
2023-08-14 18:53:37 -07:00
chenyu
3e0c2d256f
symbolic shapetracker (#1506)
* symbolic shapetracker
* no need
* keep only symbolic and clean up
* explicit // and % Node support
* NumNode * Node
2023-08-12 12:22:58 -07:00
chenyu
34f348643b
Support constant expand to symbolic shape (#1411)
2023-08-02 21:21:22 -07:00
chenyu
6572ca6835
support symbolic expand (#1407)
2023-08-02 20:03:46 -04:00
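A sketch covering both expand commits above (#1411 and #1407), using the era's API: expanding a size-1 axis, or a constant, out to a symbolic size:

    # sketch: expand to a symbolic dim (#1407) and constant expand (#1411)
    from tinygrad.tensor import Tensor
    from tinygrad.shape.symbolic import Variable

    i = Variable("i", 1, 10).bind(4)
    a = Tensor.rand(3, 1).expand(3, i)           # symbolic expand
    b = Tensor(2.0).reshape(1, 1).expand(3, i)   # constant expanded to symbolic shape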
chenyu
b2fde9ec36
reshape to register variable value (#1386)
* reshape to register variable value
* better error message
2023-07-31 17:10:02 -07:00
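The mechanism of #1386, sketched: reshaping a concretely-shaped tensor into a shape containing a Variable is what records the variable's value for that tensor (the precursor of the Variable.bind flow in #2017). Treat the exact inference behavior as an assumption:

    # sketch: reshape registers the variable's runtime value (#1386)
    from tinygrad.tensor import Tensor
    from tinygrad.shape.symbolic import Variable

    i = Variable("i", 1, 10)
    t = Tensor.rand(3, 4).reshape(3, i)  # records i = 4 for this tensor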
chenyu
aa05495620
symbolic stride (#1326)
2023-07-23 12:41:22 -07:00