* add networks to end, remove bad scroller restore logic that sometimes starts in the middle
* works
* almost
* wifi slash
* clean up
* clean up
* opacity
* more clean up
* more clean up
* set enabled and network missing on regain network
* cmt
* Revert "Revert tgwarp again (#37161)"
This reverts commit 45099e7fcd.
* Weird uv sizes
* Fix interleaving
* Fix on CPU
* make CPU safe
* Prevent corruption without clone
* Claude knows speed
* fix interleaving
* less kernels
* blob caching
* This is still slightly faster
* Comment for blob cache
* like c++ wifiman
* rename to scan
* can do this
* Revert "can do this"
This reverts commit 295f7f49d448c6aacdde2ef810904df86357840b.
* kinda useless now
* clean up
* fix recent connect regression from connection not being known yet
* always update connections in background, keep track via signals only. no getallconnections each time one is added/deleted. matches c++
* works
* clean up
* clean up
* clean up
* new/removed conns signal
* clean up
* only get connections when adding/removing not every refresh
* add debug
* block
* Revert "block"
This reverts commit 30bbffca8d2db21c53d7a3601ae46bf05e2a7cd5.
* rm debug
* block on any new message, faster conn rem/add reaction
* better names
* correct from bottom alignment
* temp
* fix scale animation w/ btn_y
* home settings are always 64
* cleanup
* some clean up
* make 23 const
* rev
* more
The pinned SHA was v6.0.4, which is incompatible with actions/checkout@v6
and causes a "Duplicate header: Authorization" 400 error during git
remote operations. See peter-evans/create-pull-request#4272.
v3 renamed inputs from kebab-case to snake_case (repo-token -> repo_token,
pr-message -> pr_message). The old names were silently ignored, causing
"Input required and not supplied: issue_message" errors.
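A hedged sketch of the rename described above. The action name and version tags here are assumptions for illustration (the inputs `repo-token`/`repo_token` and `pr-message`/`pr_message` come from the source; the source does not name the action):

```yaml
# Before: kebab-case inputs, silently ignored by the newer major version
- uses: actions/first-interaction@v2   # action name/version are illustrative
  with:
    repo-token: ${{ secrets.GITHUB_TOKEN }}
    pr-message: "Thanks for your first PR!"

# After: snake_case inputs expected by v3
- uses: actions/first-interaction@v3
  with:
    repo_token: ${{ secrets.GITHUB_TOKEN }}
    pr_message: "Thanks for your first PR!"
```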
Hi! The point of this PR is to make the model run easier to read. On the latest tinygrad, numpy().flatten() empirically does the same thing as the internal contiguous().realize().uop.base.buffer.numpy(). numpy() is also documented (docstrings), which can help new contributors learn what each potential execution does. Torq_boi or yassine, I know you want proof in the code base, so here it is. As of tinygrad commit 2f55005:
in tinygrad_repo/tinygrad/tensor.py
Lines 316-318 (def _buffer): ensures the tensor is contiguous() and realized() before accessing the raw buffer.
Line 378 (def numpy): Wraps the buffer access and adds a reshape to match the tensor shape.
self._buffer() is what executes contiguous().realize() and returns the buffer object.
Calling numpy() on that buffer object returns a 1D array (defined in tinygrad/device.py:193 via np.frombuffer).
The reshape(self.shape) at the end of Tensor.numpy() then adds dimensions to that 1D array. The added .flatten() removes those dimensions, flattening it back to a 1D array. Effectively the same as what is currently done, but less complex.
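A minimal numpy-only sketch of the equivalence described above. It uses plain numpy as a stand-in for the tinygrad buffer (which, per the text, is itself built with np.frombuffer in device.py); the dtype, values, and shapes are illustrative assumptions:

```python
import numpy as np

# Stand-in for the raw buffer of a realized tensor: np.frombuffer
# always yields a 1-D array, matching the buffer.numpy() behavior.
raw = np.frombuffer(np.arange(6, dtype=np.float32).tobytes(), dtype=np.float32)
assert raw.ndim == 1

# Tensor.numpy() then reshapes the 1-D buffer to the tensor's shape...
shaped = raw.reshape(2, 3)

# ...so appending .flatten() just undoes that reshape, landing back at 1-D,
# equivalent to reading the raw buffer directly.
flat = shaped.flatten()
assert flat.ndim == 1 and np.array_equal(flat, raw)
```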
* Revert "revert tg calib and opencl cleanup (#37113)"
This reverts commit 51312afd3d.
* power draw is a lil higher
* just don't miss a cycle
* fix warp targets
* fix tinygrad dep