* Added modeld.py (WIP)
* No more VisionIpcBufExtra
* Started work on cython bindings for runmodel
* Got ONNXModel cython bindings mostly working, added ModelFrame bindings
* Got modeld main loop running without model eval
* Move everything into ModelState
* Doesn't crash!
* Moved ModelState into modeld.py
* Added driving_pyx
* Added cython bindings for message generation
* Moved CLContext definition to visionipc.pxd
* *facepalm*
* Move cl_pyx into commonmodel_pyx
* Split out ONNXModel into a subclass of RunModel
* Added snpemodel/thneedmodel bindings
* Removed modeld.cc
* Fixed scons for macOS
* Fixed sconscript
* Added flag for thneedmodel
* paths are now relative to openpilot root dir
* Set cl kernel paths in SConscript
* Set LD_PRELOAD=libthneed.so to fix ioctl interception
* Run from root dir
* A few more fixes
* A few more minor fixes
* Use C update_calibration for now to exactly match refs
* Add nav_instructions input
* Link driving_pyx.pyx with transformations
* Checked python FirstOrderFilter against C++ FirstOrderFilter
* Set process name to fix test_onroad
* Revert changes to onnxmodel.cc
* Fixed bad onnx_runner.py path in onnxmodel.cc
* Import all constants from driving.h
* logging -> cloudlog
* pylint import-error suppressions no longer needed?
* Loop in SConscript
* Added parens
* Bump modeld cpu usage in test_onroad
* Get rid of use_nav
* use config_realtime_process
* error message from ioctl sniffer was messing up pyenv
* cast distance_idx to int
* Removed cloudlog.infos in model.run
* Fixed rebase conflicts
* Clean up driving.pxd/pyx
* Fixed linter error
old-commit-hash: 72a3c987c0
## Neural networks in openpilot

To view the architecture of the ONNX networks, you can use netron.
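For example, a minimal sketch using netron's Python package; the model path below is an assumption, so point it at whichever .onnx file you want to inspect:

```python
# pip install netron
import netron

# Starts a local server and opens the graph viewer in a browser.
# The path is an assumption; adjust it to the model you want to view.
netron.start("selfdrive/modeld/models/supercombo.onnx")
```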
## Supercombo

### Supercombo input format (Full size: 799906 x float32)
- image stream (a packing sketch follows this list)
  - Two consecutive images (256 * 512 * 3 in RGB) recorded at 20 Hz : 393216 = 2 * 6 * 128 * 256
    - Each 256 * 512 image is represented in YUV420 with 6 channels : 6 * 128 * 256
      - Channels 0,1,2,3 represent the full-res Y channel and are represented in numpy as Y[::2, ::2], Y[::2, 1::2], Y[1::2, ::2], and Y[1::2, 1::2]
      - Channel 4 represents the half-res U channel
      - Channel 5 represents the half-res V channel
- wide image stream
  - Two consecutive images (256 * 512 * 3 in RGB) recorded at 20 Hz : 393216 = 2 * 6 * 128 * 256
    - Each 256 * 512 image is represented in YUV420 with 6 channels : 6 * 128 * 256
      - Channels 0,1,2,3 represent the full-res Y channel and are represented in numpy as Y[::2, ::2], Y[::2, 1::2], Y[1::2, ::2], and Y[1::2, 1::2]
      - Channel 4 represents the half-res U channel
      - Channel 5 represents the half-res V channel
- desire
  - one-hot encoded buffer to command the model to execute certain actions; the bit needs to be sent for the past 5 seconds (at 20 FPS) : 100 * 8
- traffic convention
  - one-hot encoded vector to tell the model whether traffic is right-hand or left-hand : 2
- feature buffer
  - a buffer of intermediate features that gets appended to the current feature to form a 5-second temporal context (at 20 FPS) : 99 * 128
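The sketch below is a minimal numpy illustration of how one camera's input could be packed according to the sizes above: it builds the 6-channel YUV420 representation per frame, stacks two consecutive frames, and allocates placeholder buffers for the remaining inputs. The variable names and zero-filled frames are hypothetical; in openpilot the packing is done by ModelFrame on the GPU, so this is not the production code path.

```python
import numpy as np

H, W = 256, 512  # per-frame model input resolution

def pack_yuv420(y: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Pack one YUV420 frame (Y: 256x512, U/V: 128x256) into 6 channels of 128x256."""
    assert y.shape == (H, W) and u.shape == (H // 2, W // 2) and v.shape == (H // 2, W // 2)
    return np.stack([
        y[::2, ::2], y[::2, 1::2], y[1::2, ::2], y[1::2, 1::2],  # channels 0-3: subsampled full-res Y
        u, v,                                                    # channels 4-5: half-res U and V
    ])  # -> (6, 128, 256)

# Two consecutive frames (20 Hz) are concatenated along the channel axis: 2 * 6 * 128 * 256 = 393216
zeros = lambda shape: np.zeros(shape, dtype=np.float32)  # placeholder frames
prev = pack_yuv420(zeros((H, W)), zeros((H // 2, W // 2)), zeros((H // 2, W // 2)))
curr = pack_yuv420(zeros((H, W)), zeros((H // 2, W // 2)), zeros((H // 2, W // 2)))
input_imgs = np.concatenate([prev, curr])[None]       # road camera stream
wide_input_imgs = np.concatenate([prev, curr])[None]  # wide camera stream, same layout

# Remaining inputs, per the sizes listed above (contents are placeholders)
desire = zeros((1, 100, 8))              # one-hot desire over the past 5 seconds at 20 FPS
traffic_convention = zeros((1, 2))       # one-hot: left-hand vs right-hand traffic
features_buffer = zeros((1, 99, 128))    # rolling buffer of intermediate features

# 2 * 393216 + 800 + 2 + 12672 = 799906 float32 values in total
```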
### Supercombo output format (Full size: XXX x float32)

Read the output definitions in driving.h for more.
## Driver Monitoring Model

- the .onnx model can be run with ONNX runtimes
- the .dlc file is a pre-quantized model and only runs on Qualcomm DSPs
### input format

- single image of W = 1440, H = 960: the luminance channel (Y) from the planar YUV420 format (see the sketch after this list)
  - full input size is 1440 * 960 = 1382400
  - normalized, ranging from 0.0 to 1.0 in float32 (onnx runner) or from 0 to 255 in uint8 (snpe runner)
- camera calibration angles (roll, pitch, yaw) from liveCalibration: 3 x float32 inputs
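The following is a hedged sketch of preparing these two inputs and running the .onnx file with onnxruntime. The file path, input ordering, and input shapes are assumptions (inspect the actual graph, e.g. with netron); the production runner in modeld does not work this way verbatim.

```python
import numpy as np
import onnxruntime as ort

W, H = 1440, 960

# Luminance (Y) plane of a planar YUV420 driver-camera frame; zeros are a placeholder here.
y_plane = np.zeros((H, W), dtype=np.uint8)
# Normalize to float32 in [0.0, 1.0] as described above for the onnx runner.
input_img = (y_plane.astype(np.float32) / 255.0).reshape(1, H * W)  # 1440 * 960 = 1382400

# Camera calibration angles (roll, pitch, yaw) from liveCalibration.
calib = np.zeros((1, 3), dtype=np.float32)

sess = ort.InferenceSession("dmonitoring_model.onnx")  # path is an assumption
inputs = sess.get_inputs()                             # check the real names and shapes here
feed = {inputs[0].name: input_img, inputs[1].name: calib}
dm_out = sess.run(None, feed)[0].flatten()             # 84 x float32, parsed as in the next section
```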
### output format

- 84 x float32 outputs = 2 + 41 * 2 (parsing example; see the sketch after this list)
  - for each person in the front seats (2 * 41)
    - face pose: 12 = 6 + 6
      - face orientation [pitch, yaw, roll] in camera frame: 3
      - face position [dx, dy] relative to image center: 2
      - normalized face size: 1
      - standard deviations for above outputs: 6
    - face visible probability: 1
    - eyes: 20 = (8 + 1) + (8 + 1) + 1 + 1
      - eye position and size, and their standard deviations: 8
      - eye visible probability: 1
      - eye closed probability: 1
    - wearing sunglasses probability: 1
    - face occluded probability: 1
    - touching wheel probability: 1
    - paying attention probability: 1
    - (deprecated) distracted probabilities: 2
    - using phone probability: 1
    - distracted probability: 1
  - common outputs: 2
    - poor camera vision probability: 1
    - left hand drive probability: 1
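As a rough guide to slicing those 84 values, here is a hedged sketch that follows the field order exactly as listed above. The canonical parsing is the C++ code linked as the parsing example (dmonitoring.cc); the offsets and names below are illustrative assumptions, not the definitive layout.

```python
import numpy as np

def parse_person(block: np.ndarray) -> dict:
    """Split one 41-float per-person block using the field sizes listed above (order assumed)."""
    assert block.shape == (41,)
    return {
        "face_orientation": block[0:3],          # pitch, yaw, roll in camera frame
        "face_position": block[3:5],             # dx, dy relative to image center
        "face_size": block[5],                   # normalized face size
        "face_pose_std": block[6:12],            # standard deviations for the 6 values above
        "face_visible_prob": block[12],
        "eyes": block[13:33],                    # 20 = (8 + 1) + (8 + 1) + 1 + 1
        "sunglasses_prob": block[33],
        "face_occluded_prob": block[34],
        "touching_wheel_prob": block[35],
        "paying_attention_prob": block[36],
        "deprecated_distracted_probs": block[37:39],
        "using_phone_prob": block[39],
        "distracted_prob": block[40],
    }

def parse_dm_output(out: np.ndarray) -> dict:
    """Parse the 84-float output: two 41-float person blocks plus 2 common outputs."""
    assert out.shape == (84,)
    return {
        "driver": parse_person(out[0:41]),
        "passenger": parse_person(out[41:82]),
        "poor_vision_prob": out[82],
        "left_hand_drive_prob": out[83],
    }
```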