README

Each model should be a clean single file.
They are imported from the top-level `models` directory.

Each model should be capable of loading weights from the reference implementation.
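Loading reference weights typically means reading the reference checkpoint's parameter dict and copying tensors into the model's own buffers, matching by name and shape. A minimal sketch of that matching step, using plain dicts of numpy arrays as stand-ins for the real model state (the helper name here is hypothetical, not tinygrad API):

```python
import numpy as np

def load_reference_weights(model_params, ref_state_dict):
    """Copy reference weights into model parameters, matched by name.

    model_params:    dict of name -> np.ndarray (the model's own parameters)
    ref_state_dict:  dict of name -> np.ndarray (the reference checkpoint)

    Hypothetical helper for illustration; real models often also need an
    explicit name-remapping table between the two conventions.
    """
    loaded = []
    for name, param in model_params.items():
        if name not in ref_state_dict:
            raise KeyError(f"missing weight in checkpoint: {name}")
        ref = ref_state_dict[name]
        if ref.shape != param.shape:
            raise ValueError(f"shape mismatch for {name}: {ref.shape} vs {param.shape}")
        param[...] = ref  # copy in place so the model keeps its own buffers
        loaded.append(name)
    return loaded
```

Failing loudly on missing keys or shape mismatches catches name-mapping bugs early, before a silently half-loaded model produces plausible-looking but wrong outputs.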

We will focus on these 5 models:

# Resnet50-v1.5 (classic) -- 8.2 GOPS/input
# Retinanet
# 3D UNET (upconvs)
# RNNT
# BERT-large (transformer)
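The 8.2 GOPS/input figure for ResNet50-v1.5 gives a quick upper bound on inference throughput: peak device FLOPS times achieved utilization, divided by ops per input. A rough sketch, where the peak FLOPS and utilization numbers are assumed examples, not measurements of any real device:

```python
GOPS_PER_INPUT = 8.2e9   # ResNet50-v1.5, from the list above
peak_flops = 100e12      # assumed example device peak: 100 TFLOPS
utilization = 0.5        # assumed fraction of peak actually achieved

# upper bound on images/sec at this utilization
images_per_sec = peak_flops * utilization / GOPS_PER_INPUT
print(f"~{images_per_sec:.0f} images/sec upper bound")
```

Real throughput is lower still once data loading, memory bandwidth, and kernel launch overhead are counted, which is why the benchmark measures end-to-end time rather than raw FLOPS.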

They are used in both the training and inference benchmarks:
https://mlcommons.org/en/training-normal-21/
https://mlcommons.org/en/inference-edge-30/
We will submit to both.

NOTE: we submit in the Edge category since we don't have ECC RAM