Neural networks in openpilot

To view the architecture of the ONNX networks, you can use netron (https://netron.app).
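
For example, a minimal sketch of opening one of the .onnx files with netron's Python package; the model file name here is an assumption:

```python
# Sketch only: assumes "supercombo.onnx" is in the current directory.
# netron.start() serves the model graph in a local browser tab.
import netron

netron.start("supercombo.onnx")
```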

Supercombo

Supercombo input format (Full size: 799906 x float32)

  • image stream
    • Two consecutive images (256 * 512 * 3 in RGB) recorded at 20 Hz : 393216 = 2 * 6 * 128 * 256
      • Each 256 * 512 image is represented in YUV420 with 6 channels : 6 * 128 * 256 (see the packing sketch after this list)
        • Channels 0,1,2,3 represent the full-res Y channel and are represented in numpy as Y[::2, ::2], Y[::2, 1::2], Y[1::2, ::2], and Y[1::2, 1::2]
        • Channel 4 represents the half-res U channel
        • Channel 5 represents the half-res V channel
  • wide image stream
    • Two consecutive images (256 * 512 * 3 in RGB) recorded at 20 Hz : 393216 = 2 * 6 * 128 * 256
      • Each 256 * 512 image is represented in YUV420 with 6 channels : 6 * 128 * 256
        • Channels 0,1,2,3 represent the full-res Y channel and are represented in numpy as Y[::2, ::2], Y[::2, 1::2], Y[1::2, ::2], and Y[1::2, 1::2]
        • Channel 4 represents the half-res U channel
        • Channel 5 represents the half-res V channel
  • desire
    • one-hot encoded buffer to command the model to execute certain actions; the bit needs to be sent for the past 5 seconds (at 20 FPS) : 100 * 8
  • traffic convention
    • one-hot encoded vector to tell the model whether traffic is right-hand or left-hand : 2
  • feature buffer
    • A buffer of intermediate features that gets appended to the current features to form a 5-second temporal context (at 20 FPS) : 99 * 512
  • nav features
    • 1 * 150
  • nav instructions
    • 1 * 256
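
A minimal numpy sketch of how one 256 x 512 YUV420 frame maps onto the 6-channel layout above, and of a zero-filled input set with the sizes listed in this section. The helper name and the tensor names in the dict are illustrative assumptions, not necessarily the model's exact input names (the real names can be checked in netron).

```python
import numpy as np

def yuv420_to_six_channels(y, u, v):
    """y: (256, 512) full-res luma plane; u, v: (128, 256) half-res chroma planes."""
    return np.stack([
        y[::2, ::2],    # channel 0: even rows, even cols of Y
        y[::2, 1::2],   # channel 1: even rows, odd cols of Y
        y[1::2, ::2],   # channel 2: odd rows, even cols of Y
        y[1::2, 1::2],  # channel 3: odd rows, odd cols of Y
        u,              # channel 4: half-res U plane
        v,              # channel 5: half-res V plane
    ])                  # -> (6, 128, 256)

def dummy_supercombo_inputs():
    # Sketch only: tensor names are assumptions; sizes follow this README.
    # Two consecutive 6-channel frames per camera, concatenated on the channel axis.
    imgs = np.zeros((1, 12, 128, 256), dtype=np.float32)            # 2 * 6 * 128 * 256
    return {
        "input_imgs": imgs,                                          # image stream
        "big_input_imgs": imgs.copy(),                               # wide image stream
        "desire": np.zeros((1, 100, 8), dtype=np.float32),           # 100 * 8
        "traffic_convention": np.zeros((1, 2), dtype=np.float32),    # 2
        "features_buffer": np.zeros((1, 99, 512), dtype=np.float32), # 99 * 512
        "nav_features": np.zeros((1, 150), dtype=np.float32),        # 1 * 150
        "nav_instructions": np.zeros((1, 256), dtype=np.float32),    # 1 * 256
    }
```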

Supercombo output format (Full size: XXX x float32)

Read here for more.

Driver Monitoring Model

  • the .onnx model can be run with onnx runtimes
  • the .dlc file is a pre-quantized model and only runs on Qualcomm DSPs
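
For instance, the .onnx file can be opened with onnxruntime to inspect its declared inputs before running it; the model file name below is an assumption:

```python
# Sketch only: the file name is assumed, not taken from this repo layout.
import onnxruntime as ort

sess = ort.InferenceSession("dmonitoring_model.onnx", providers=["CPUExecutionProvider"])
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)
```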

input format

  • single image W = 1440 H = 960 luminance channel (Y) from the planar YUV420 format:
    • full input size is 1440 * 960 = 1382400
    • normalized ranging from 0.0 to 1.0 in float32 (onnx runner) or ranging from 0 to 255 in uint8 (snpe runner)
  • camera calibration angles (roll, pitch, yaw) from liveCalibration: 3 x float32 inputs
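
A minimal sketch of assembling these two inputs for the onnx runner; the tensor names and the flattening are assumptions for illustration, not the model's exact input signature.

```python
import numpy as np

def prepare_dm_inputs(y_plane, calib_rpy):
    """y_plane: (960, 1440) uint8 Y plane; calib_rpy: [roll, pitch, yaw] from liveCalibration."""
    assert y_plane.shape == (960, 1440)            # H x W = 960 x 1440
    img = y_plane.astype(np.float32) / 255.0       # 0.0..1.0 float32 for the onnx runner
    # for the snpe runner the image would instead stay as uint8 in 0..255
    return {
        "input_img": img.reshape(1, -1),                                 # 1440 * 960 = 1382400 values
        "calib": np.asarray(calib_rpy, dtype=np.float32).reshape(1, 3),  # roll, pitch, yaw
    }
```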

output format

  • 84 x float32 outputs = 2 + 41 * 2 (parsing example)
    • for each person in the front seats (2 * 41)
      • face pose: 12 = 6 + 6
        • face orientation [pitch, yaw, roll] in camera frame: 3
        • face position [dx, dy] relative to image center: 2
        • normalized face size: 1
        • standard deviations for above outputs: 6
      • face visible probability: 1
      • eyes: 20 = (8 + 1) + (8 + 1) + 1 + 1
        • eye position and size, and their standard deviations: 8
        • eye visible probability: 1
        • eye closed probability: 1
      • wearing sunglasses probability: 1
      • face occluded probability: 1
      • touching wheel probability: 1
      • paying attention probability: 1
      • (deprecated) distracted probabilities: 2
      • using phone probability: 1
      • distracted probability: 1
    • common outputs: 2
      • poor camera vision probability: 1
      • left hand drive probability: 1
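
A rough parsing sketch for this 84-float layout, following the ordering listed above (2 x 41 per-driver values followed by the 2 common values). The field names, offsets, and the sigmoid applied to probability logits are assumptions for illustration, not the exact openpilot parser.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def parse_driver(block):
    """block: 41 floats for one front-seat occupant, ordered as in this README."""
    assert block.shape == (41,)
    return {
        "face_orientation": block[0:3],          # pitch, yaw, roll in camera frame
        "face_position": block[3:5],             # dx, dy relative to image center
        "face_size": block[5],                   # normalized face size
        "face_pose_stds": block[6:12],           # standard deviations for the above
        "face_prob": sigmoid(block[12]),         # face visible probability
        "eyes": block[13:33],                    # 20 eye-related values
        "sunglasses_prob": sigmoid(block[33]),
        "occluded_prob": sigmoid(block[34]),
        "touching_wheel_prob": sigmoid(block[35]),
        "paying_attention_prob": sigmoid(block[36]),
        # block[37:39]: deprecated distracted probabilities
        "phone_prob": sigmoid(block[39]),
        "distracted_prob": sigmoid(block[40]),
    }

def parse_dm_output(out):
    assert out.shape == (84,)
    return {
        "driver_left": parse_driver(out[0:41]),
        "driver_right": parse_driver(out[41:82]),
        "poor_vision_prob": sigmoid(out[82]),
        "left_hand_drive_prob": sigmoid(out[83]),
    }
```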