* Less, LessOrEqual, Greater, GreaterOrEqual, Equal
* lint fix
* using built in functions
* overriding __eq__ breaks things
* backwards pass for less - forward only tests
* one other spot
* removing backwards for comparison ops to match pytorch
* raise runtime error
* more tests for comparison ops
* fixed the lineup
* added number upcast tests
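A quick illustration of the behavior the commits above settle on: the comparison ops are forward-only, matching pytorch, and backwards raises instead of silently returning zeros. Equal is exposed as an op rather than an overridden __eq__, since overriding __eq__ breaks Python-level hashing and container membership for tensors. A minimal sketch, assuming the Tensor API of the time (the op spellings and exact error text here are illustrative):

    from tinygrad.tensor import Tensor

    a = Tensor([1.0, 2.0, 3.0])
    b = Tensor([2.0, 2.0, 2.0])

    # forward works: an elementwise comparison producing 0.0/1.0 values
    print((a < b).numpy())    # [1. 0. 0.]
    print((a < 2.0).numpy())  # plain Python numbers are upcast: [1. 0. 0.]

    # there is no backwards pass; like pytorch, differentiating through a
    # comparison should fail loudly rather than return zero gradients
    try:
        (a < b).sum().backward()
    except RuntimeError as e:
        print("comparison ops are not differentiable:", e)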
* conv2d is an hlop
* shorter conv
* KOPT=-1
* alt imp
* MULACC
* smarter mulacc
* pop conv
* 7x7 -> 5x5
* didn't fix, that's not going to work
* this is faster and matches old behavior
* oh, non-lazy just won't work with mulacc
* mulacc in torch
* bool types were creeping in
* optimizer is actually better with hlop conv
* fix pushing permutes issue
* refactor einsum_mulacc (see the sketch after this list)
* fix up readme
* update readme
* _image_conv2d
* fix bias addition location
* pushing permutes gets back to 200 kernels
* conv cleanup
* disable hlop conv
* don't hide that in helpers
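Since several of the conv commits above revolve around it, here is the mulacc idea in isolation: conv2d as an hlop lowers to a single fused multiply-accumulate instead of a separate mul followed by a sum reduce. A rough plain-numpy sketch of the einsum formulation (a sketch of the concept only, not tinygrad's actual kernel):

    import numpy as np

    def conv2d_mulacc(x, w):
        # x: (N, Cin, H, W), w: (Cout, Cin, KH, KW); stride 1, no padding
        N, Cin, H, W = x.shape
        Cout, _, KH, KW = w.shape
        OH, OW = H - KH + 1, W - KW + 1
        # zero-copy view of every KHxKW input window: (N, Cin, OH, OW, KH, KW)
        s = x.strides
        windows = np.lib.stride_tricks.as_strided(
            x, (N, Cin, OH, OW, KH, KW), (s[0], s[1], s[2], s[3], s[2], s[3]))
        # one einsum = the multiply and the accumulate fused into a single op
        return np.einsum("nchwkl,ockl->nohw", windows, w)

    x = np.random.randn(1, 3, 8, 8)
    w = np.random.randn(4, 3, 5, 5)
    print(conv2d_mulacc(x, w).shape)  # (1, 4, 4, 4)

On the torch backend the same multiply-accumulate maps naturally onto torch.einsum, which is what the "mulacc in torch" commit refers to.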
This commit resolves issue https://github.com/geohot/tinygrad/issues/453.

When the example code in the README.md is run, tinygrad prints the tensors as:

<Tensor <LB (3, 3) op:MovementOps.RESHAPE> with grad None>
<Tensor <LB (1, 3) op:MovementOps.RESHAPE> with grad None>

But to be equivalent to the output of the Torch example, we need to call numpy() on the gradients so they print as:

[[ 2.  2.  2.]
 [ 0.  0.  0.]
 [-2. -2. -2.]]
[[1. 1. 1.]]
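For reference, the README example in question looks roughly like this (reconstructed from the outputs above; the exact snippet may have drifted):

    from tinygrad.tensor import Tensor

    x = Tensor.eye(3, requires_grad=True)
    y = Tensor([[2.0, 0, -2.0]], requires_grad=True)
    z = y.matmul(x).sum()
    z.backward()

    print(x.grad.numpy())  # dz/dx, the 3x3 matrix above
    print(y.grad.numpy())  # dz/dy, [[1. 1. 1.]]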
* make lazy the default
* always float32
* while the lazy framework should be default, laziness itself shouldn't be (for now)
* bugfixes
* remove the need for the ops class
* fxn_for_op
* hmm, my contiguous asserts went away
* move small shape thing
* refactor reduce
* remove the weird unused new functions
* only that install works
* that's broken
* unused imports, should be good if it passes
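To spell out what "make lazy the default" means in practice: ops only build a graph, and nothing runs until the data is actually needed (with everything held as float32, per the commits above). A small sketch:

    from tinygrad.tensor import Tensor

    a = Tensor([1.0, 2.0, 3.0])
    b = Tensor([4.0, 5.0, 6.0])

    c = (a + b) * 2   # nothing is computed yet; c wraps a graph of lazy ops
    print(c)          # prints the lazy buffer repr, like the <LB ...> above
    print(c.numpy())  # realization happens here: [10. 14. 18.]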
* replace broadcasting with expand
* Tensor, not self
* remove broadcasting from mlops
* delete useless A operator
* expand, not repeat
* remove A op
* expand on gpu
* binary_op doesn't broadcast anymore
* expand is still total junk, but the tests should pass
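On the expand work as a whole: binary_op now requires matching shapes, so broadcasting happens explicitly at the hlop level by expanding size-1 axes. Unlike repeat, expand is a zero-copy stride trick, which is the point of "expand, not repeat". A hedged sketch of the pattern (method spellings from this era may have drifted):

    from tinygrad.tensor import Tensor

    a = Tensor.ones(3, 3)
    b = Tensor([[1.0, 2.0, 3.0]])  # shape (1, 3)

    # binary_op no longer broadcasts, so match the shapes up front:
    # expand the size-1 axis (stride 0, no copy) instead of repeating data
    b_expanded = b.expand(3, 3)
    print((a + b_expanded).numpy())
    # [[2. 3. 4.]
    #  [2. 3. 4.]
    #  [2. 3. 4.]]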