Marcel Bischoff
5d46df638a
abs as non-first class operation using relu ( #171 )
* abs (non-first class)
* whitespace
2020-12-09 12:20:34 -08:00
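A minimal sketch of the trick behind #171, assuming a Tensor type with a relu() method and unary negation (as in tinygrad): abs is composed from the existing relu op instead of being added as a first-class operation.

    # abs(x) = relu(x) + relu(-x): exactly one branch is non-zero for
    # any x != 0, and both reuse relu's existing forward/backward passes
    def tensor_abs(x):
        return x.relu() + (-x).relu()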
George Hotz
4c55c7208f
no pow if mul will do
2020-12-09 08:19:29 -08:00
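The idea is generic: a constant integer power like x**2 can be replaced by a plain multiply, avoiding a general pow in the optimizer math. A hypothetical before/after:

    grad = 3.0
    v_pow = grad ** 2      # before: general pow for a constant exponent
    v_mul = grad * grad    # after: one cheap multiply, same value
    assert v_pow == v_mul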
George Hotz
b85f17f247
more optim cleanup
2020-12-09 08:18:10 -08:00
George Hotz
9a64d13b94
add conv biases and max pool
2020-12-09 08:01:20 -08:00
George Hotz
99fa65f057
enable batchnorm in serious mnist
2020-12-09 03:29:40 -08:00
George Hotz
ffb96b2d0b
batchnorm by marcelbischoff
2020-12-09 03:23:04 -08:00
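For context, a hedged numpy sketch of the batchnorm arithmetic (not the exact class this commit adds): each channel is normalized over the batch and spatial dimensions, then rescaled by learned parameters.

    import numpy as np

    def batchnorm_2d(x, gamma, beta, eps=1e-5):
        # x: (N, C, H, W); gamma, beta: (1, C, 1, 1)
        mean = x.mean(axis=(0, 2, 3), keepdims=True)
        var = x.var(axis=(0, 2, 3), keepdims=True)
        return gamma * (x - mean) / np.sqrt(var + eps) + beta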
NeuralLink
00e376f36c
leaky relu as geohot suggested ( #167 )
2020-12-09 02:58:35 -08:00
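The usual way to get leaky relu without a new primitive is to compose it from relu; a minimal sketch of that composition, assuming a Tensor with relu() and negation:

    # positive inputs pass through; negative inputs are scaled by alpha:
    # leaky_relu(x) = relu(x) - alpha * relu(-x)
    def leaky_relu(x, alpha=0.01):
        return x.relu() - (-x).relu() * alpha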
George Hotz
c225e62dd2
touchups
2020-12-09 02:52:28 -08:00
Liam
89d0ff6989
Consistent testing ( #137 )
* Consistent GPU classes
Convert the existing GPU classes into one standard format.
Remove duplicated functions in `test_mnist` and create a TestMNISTGPU
class. This reduces line count and ensures consistency.
Use `@unittest.skipUnless(GPU, "Requires GPU")` instead of `if GPU:` to
skip GPU testing. This ensures that skipped tests are reported as
skipped in the pytest output.
* Optim Testing now supports GPU
* Tensor testing now supports GPU
jacobian and gradcheck are auto-skipped until GPU float64 support is added.
* GPU support for custom constructor methods
* Remove GPU flag from Model constructors
It was requested that the `gpu` kwarg be removed from the model
constructor. GPU conversion is now handled in the train function.
This also required the conversion of Optimizer parameters as they are
constructed prior to execution of the `train` function and are dependent
on the model GPU state.
* Fix typo: float32->float64
* Clean `get_parameters` utility
Just a quick refactor w/ the new support for optimizers.
* Remove GPU kwarg from TinyNet
Remove `gpu` kwarg from tiny net to match test_mnist `train` function.
2020-12-09 02:25:27 -08:00
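A short sketch of the skip pattern the PR body describes, with a hypothetical module-level GPU flag; unlike an `if GPU:` guard, the decorator makes pytest report the tests as skipped instead of silently omitting them.

    import unittest

    GPU = False  # assumption: set from the environment at import time

    @unittest.skipUnless(GPU, "Requires GPU")
    class TestMNISTGPU(unittest.TestCase):
        def test_train(self):
            ...  # GPU variant of the CPU training test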
Liam
34b38dd4d0
Extra install requirements. ( #164 )
* Testing install requirements
* GPU install requirements
2020-12-09 02:22:47 -08:00
George Hotz
0e02f394ee
serious_mnist
2020-12-08 21:43:05 -08:00
Daulet
24d688c184
win more lines for core library ( #158 )
...and sacrifice test speed
2020-12-08 14:18:45 -08:00
NeuralLink
9f77fd6135
🔨 refactor optim ( #156 )
* 🔨 refactor optim
* 🔨 refactor optim
* 🔨 more clean up
2020-12-08 14:16:31 -08:00
George Hotz
4e1a0de392
fix rsub
2020-12-08 10:05:21 -08:00
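`__rsub__` is the reflected operator Python falls back to for `scalar - tensor`; the classic bug is evaluating `self - other` there instead of `other - self`. A minimal standalone sketch (not tinygrad's actual Tensor):

    class T:
        def __init__(self, data):
            self.data = data
        def __sub__(self, other):
            o = other.data if isinstance(other, T) else other
            return T(self.data - o)
        def __rsub__(self, other):
            # called for `other - self`; the operand order matters
            return T(other - self.data)

    assert (1.0 - T(0.25)).data == 0.75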
George Hotz
c4540f1b8c
Support scalars by kartik4949
2020-12-08 09:52:07 -08:00
George Hotz
97fd9c1237
zero_grad there to match readme
2020-12-07 23:12:18 -08:00
George Hotz
c63f950348
need zero grad now
2020-12-07 23:10:43 -08:00
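Both zero_grad commits point at the same training-loop detail: gradients accumulate across backward() calls, so they must be cleared every step. A sketch of the loop shape, using placeholder names (model, optim, x, y) rather than runnable code:

    for step in range(100):
        out = model.forward(x)       # placeholder model
        loss = out.mul(y).mean()     # placeholder loss
        optim.zero_grad()            # clear accumulated gradients first
        loss.backward()
        optim.step()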
George Hotz
b355cd2571
Mean axis (doesn't work) ( #154 )
* mean axis
* fixed
2020-12-07 22:58:34 -08:00
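Mean over an axis reduces to the axis sum scaled by the reciprocal of the number of reduced elements, which is presumably why it lands right after sum-with-axis (#153, below). The relationship, checked with numpy:

    import numpy as np

    x = np.arange(6.0).reshape(2, 3)
    assert np.allclose(x.sum(axis=1) / x.shape[1], x.mean(axis=1))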
George Hotz
38f97c8c80
prepare for ops_ane
2020-12-07 21:54:22 -08:00
George Hotz
7f249ec76d
touch up
2020-12-07 21:51:32 -08:00
Marcel Bischoff
58ccebd7cd
Sum with axis ( #153 )
* sum with axis and tests
* broken
* works again
* clean up
* Update test_ops.py
2020-12-07 21:49:18 -08:00
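The axis keyword changes sum from a full reduction to a per-axis one, dropping the reduced dimension from the output shape; the expected semantics, illustrated with numpy:

    import numpy as np

    x = np.ones((2, 3, 4))
    assert x.sum(axis=1).shape == (2, 4)   # (2, 3, 4) -> (2, 4)
    assert x.sum(axis=1)[0, 0] == 3.0      # three ones summed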
George Hotz
ac9fecb05d
lots of notes
2020-12-07 21:40:31 -08:00
George Hotz
8d1500f497
conv neuron
2020-12-07 21:12:52 -08:00
George Hotz
e4bb53b0e9
work out more
2020-12-07 20:32:50 -08:00
George Hotz
4927ad1897
float16 weights in min.weights
2020-12-07 20:15:15 -08:00
George Hotz
3aac9aefce
fix GPU profiling
2020-12-07 20:03:28 -08:00
James Roberts
b2eca6d45f
Format debug output ( #152 )
2020-12-07 14:07:14 -08:00
George Hotz
c7973cb0a1
ugh buffer_np is bad
2020-12-07 08:07:00 -08:00
George Hotz
088f280dc3
touchups
2020-12-07 07:50:27 -08:00
George Hotz
0cf21881b7
hwx parse w/o macho mods
2020-12-06 23:13:28 -08:00
Josh Smith
aa4161f63e
use classmethods for Tensor helper funcs ( #146 )
2020-12-06 22:35:43 -08:00
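The point of @classmethod here is that helpers construct `cls`, so subclasses of Tensor get instances of their own type. A generic sketch with a hypothetical zeros() helper, not the library's exact signature:

    import numpy as np

    class Tensor:
        def __init__(self, data):
            self.data = np.asarray(data, dtype=np.float32)

        @classmethod
        def zeros(cls, *shape):
            # cls, not Tensor: subclass calls return subclass instances
            return cls(np.zeros(shape))

    assert Tensor.zeros(2, 3).data.shape == (2, 3)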
George Hotz
e75a6d1b4b
quadconv
2020-12-06 20:39:50 -08:00
George Hotz
23664c99bd
double conv
2020-12-06 20:26:02 -08:00
George Hotz
1a0f826dc6
highlight the commands
2020-12-06 20:03:21 -08:00
George Hotz
2f1f006003
we have docs
2020-12-06 19:54:03 -08:00
George Hotz
3531e81f0e
dumping ANE docs
2020-12-06 18:58:36 -08:00
George Hotz
6e793e96c3
deeebug
2020-12-06 17:49:17 -08:00
George Hotz
dced0cb44b
oops, path to weights
2020-12-06 16:33:42 -08:00
George Hotz
c57dc61ea7
simple op examples
2020-12-06 16:32:26 -08:00
George Hotz
7babf38617
found concat
2020-12-06 16:27:12 -08:00
George Hotz
ddd6778423
add neuron
2020-12-06 16:24:42 -08:00
George Hotz
f2f2d6aea3
docs and noop
2020-12-06 16:10:44 -08:00
George Hotz
6ba25834ee
found some plists
2020-12-06 15:57:28 -08:00
George Hotz
e2184c20ad
min weights, update golden
2020-12-06 15:29:15 -08:00
George Hotz
d4d8bd0337
make minimal plist for compare
2020-12-06 15:10:15 -08:00
George Hotz
0845ec43c6
compile takes in plist
2020-12-06 14:51:33 -08:00
George Hotz
00312b8ad1
batchnorm work
2020-12-06 14:40:07 -08:00
George Hotz
da514c2918
fix enet init
2020-12-06 13:52:07 -08:00
George Hotz
3b982f2f7a
get_parameters
2020-12-06 13:47:28 -08:00
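A utility like this typically walks a model object and collects every Tensor it can reach, so optimizers can be handed the full parameter list. A hedged sketch that duck-types on a `.data` attribute (an assumption, not the library's actual check):

    def get_parameters(obj):
        params = []
        if hasattr(obj, "data"):                # treat as a Tensor leaf
            params.append(obj)
        elif isinstance(obj, (list, tuple)):    # lists of layers
            for item in obj:
                params.extend(get_parameters(item))
        elif hasattr(obj, "__dict__"):          # layer/model attributes
            for value in obj.__dict__.values():
                params.extend(get_parameters(value))
        return params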
George Hotz
102e6356e9
replace layer_init_uniform with .uniform
2020-12-06 13:44:31 -08:00
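Moving the free init function onto the class pairs naturally with the classmethod change in #146 above; a hedged sketch of what a constructor-style Tensor.uniform could look like (the scaling is illustrative):

    import numpy as np

    class Tensor:
        def __init__(self, data):
            self.data = data

        @classmethod
        def uniform(cls, *shape):
            # uniform in [-1, 1), scaled down by sqrt of the element count
            data = np.random.uniform(-1.0, 1.0, size=shape) / np.sqrt(np.prod(shape))
            return cls(data.astype(np.float32))

    w = Tensor.uniform(784, 128)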