Hi! The point of this PR is to make the model run code easier to read. On the latest tinygrad, `numpy().flatten()` empirically does the same thing as the internal `contiguous().realize().uop.base.buffer.numpy()`. `numpy()` is also documented (docstrings), which can help new contributors learn what each step of the execution does.

Torq_boi or yassine, I know you want proof in the codebase, so here it is. As of tinygrad commit 2f55005, in `tinygrad_repo/tinygrad/tensor.py`:

- Lines 316-318 (`def _buffer`): ensures the tensor is `contiguous()` and `realize()`d before accessing the raw buffer.
- Line 378 (`def numpy`): wraps the buffer access and adds a `reshape` to match the tensor shape.

`self._buffer()` is what executes `contiguous().realize()` and returns the buffer object. Calling `numpy()` on that buffer object returns a 1D array (defined in `tinygrad/device.py:193` via `np.frombuffer`). The `reshape(self.shape)` at the end of `Tensor.numpy()` then adds dimensions to that 1D array, and the added `.flatten()` removes those dimensions, flattening it back to a 1D array. The result is effectively the same as what the current code does, but less complex.
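To illustrate the round trip without pulling in tinygrad itself, here is a minimal numpy-only sketch. It assumes only what is described above: the buffer is exposed as a 1D array via `np.frombuffer`, `Tensor.numpy()` reshapes it, and `.flatten()` undoes that reshape (the example shape `(2, 3)` is made up for illustration).

```python
import numpy as np

# Raw buffer bytes, standing in for what tinygrad's Buffer holds.
raw = bytes(np.arange(6, dtype=np.float32))

# Buffer.numpy() uses np.frombuffer, which yields a 1D array.
flat = np.frombuffer(raw, dtype=np.float32)

# Tensor.numpy() then reshapes to the tensor's shape (here, a made-up (2, 3)).
shaped = flat.reshape(2, 3)

# Adding .flatten() removes those dimensions again, recovering the 1D data.
assert (shaped.flatten() == flat).all()
assert shaped.flatten().shape == flat.shape
```

So `numpy().flatten()` lands on the same 1D data that the raw buffer access returns, just through the documented public path.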