Process replay

Process replay is a regression test designed to identify any changes in the output of a process. This test replays a segment through individual processes and compares the output to a known-good replay. Each supported car brand is represented in the test by a segment.

If the test fails, make sure that you didn't unintentionally change anything. If there are intentional changes, the reference logs will be updated.

Use test_processes.py to run the test locally. To cache the downloaded log files between runs, use FILEREADER_CACHE='1' test_processes.py.

Currently the following processes are tested:

  • controlsd
  • radard
  • plannerd
  • calibrationd
  • dmonitoringd
  • locationd
  • paramsd
  • ubloxd
  • torqued

Usage

Usage: test_processes.py [-h] [--whitelist-procs PROCS] [--whitelist-cars CARS] [--blacklist-procs PROCS]
                         [--blacklist-cars CARS] [--ignore-fields FIELDS] [--ignore-msgs MSGS] [--update-refs] [--upload-only]
Regression test to identify changes in a process's output
optional arguments:
  -h, --help            show this help message and exit
  --whitelist-procs PROCS       Whitelist given processes from the test (e.g. controlsd)
  --whitelist-cars CARS         Whitelist given cars from the test (e.g. HONDA)
  --blacklist-procs PROCS       Blacklist given processes from the test (e.g. controlsd)
  --blacklist-cars CARS         Blacklist given cars from the test (e.g. HONDA)
  --ignore-fields FIELDS        Extra fields or msgs to ignore (e.g. carState.events)
  --ignore-msgs MSGS            Msgs to ignore (e.g. onroadEvents)
  --update-refs                         Updates reference logs using current commit
  --upload-only                         Skips testing processes and uploads logs from previous test run
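
For example, a couple of invocations built only from the flags above (a sketch; pick process and car names that exist in your checkout):

# replay only controlsd on the HONDA segment
./test_processes.py --whitelist-procs controlsd --whitelist-cars HONDA

# accept intentional changes by regenerating the reference logs
./test_processes.py --update-refs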

Forks

openpilot forks can use this test with their own reference logs. By default, test_processes.py saves the logs locally.

To generate new logs:

./test_processes.py

Then, check in the new logs using git-lfs. Make sure to also update the ref_commit file to the current commit.
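
A minimal sketch of that workflow, run from this directory (the *.zst log pattern is an assumption; match it to the files test_processes.py actually writes):

./test_processes.py                                   # writes the new reference logs locally
git rev-parse HEAD > ref_commit                       # point ref_commit at the current commit
git lfs track "*.zst"                                 # assumed filename pattern for the generated logs
git add .gitattributes ref_commit *.zst
git commit -m "Update process replay reference logs"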

API

The process replay test suite exposes programmatic APIs for running individual processes, or groups of processes simultaneously, on provided logs.

def replay_process_with_name(name: Union[str, Iterable[str]], lr: LogIterable, *args, **kwargs) -> List[capnp._DynamicStructReader]:

def replay_process(
  cfg: Union[ProcessConfig, Iterable[ProcessConfig]], lr: LogIterable, frs: Optional[Dict[str, Any]] = None, 
  fingerprint: Optional[str] = None, return_all_logs: bool = False, custom_params: Optional[Dict[str, Any]] = None, disable_progress: bool = False
) -> List[capnp._DynamicStructReader]:

Example usage:

from openpilot.selfdrive.test.process_replay import replay_process_with_name
from openpilot.tools.lib.logreader import LogReader

lr = LogReader(...)

# provide a name of the process to replay
output_logs = replay_process_with_name('locationd', lr)

# or list of names
output_logs = replay_process_with_name(['ubloxd', 'locationd'], lr)
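
The lower-level replay_process can also be called directly with ProcessConfig objects. A sketch, assuming a get_process_config helper in process_replay.py that looks up a config by process name (check the module for the exact name):

from openpilot.selfdrive.test.process_replay.process_replay import get_process_config, replay_process
from openpilot.tools.lib.logreader import LogReader

lr = LogReader(...)

# look up the ProcessConfig for a single process and replay it
cfg = get_process_config('radard')
output_logs = replay_process(cfg, lr)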

Supported processes:

  • controlsd
  • radard
  • plannerd
  • calibrationd
  • dmonitoringd
  • locationd
  • paramsd
  • ubloxd
  • torqued
  • modeld
  • dmonitoringmodeld

Certain processes may require an initial state, which is usually supplied within Params and persists from segment to segment (e.g. CalibrationParams, LiveParameters). The custom_params argument is a dictionary used to prepopulate Params with arbitrary values. The get_custom_params_from_lr helper is provided to fetch meaningful values from log files.

from openpilot.selfdrive.test.process_replay import get_custom_params_from_lr

previous_segment_lr = LogReader(...)
current_segment_lr = LogReader(...)

# fetch the initial state (e.g. CalibrationParams, LiveParameters) from the end of the previous segment
custom_params = get_custom_params_from_lr(previous_segment_lr, 'last')

output_logs = replay_process_with_name('calibrationd', current_segment_lr, custom_params=custom_params)

Replaying processes that use VisionIPC (e.g. modeld, dmonitoringmodeld) requires an additional frs dictionary, with camera states as keys and FrameReader objects as values.

from openpilot.tools.lib.framereader import FrameReader

frs = {
  'roadCameraState': FrameReader(...),
  'wideRoadCameraState': FrameReader(...),
  'driverCameraState': FrameReader(...),
}

output_logs = replay_process_with_name(['modeld', 'dmonitoringmodeld'], lr, frs=frs)

To capture the stdout/stderr of the replayed processes, a captured_output_store dictionary can be provided.

output_store = dict()
# the dictionary is passed by reference and will be filled with the captured output, even if process replay fails
output_logs = replay_process_with_name(['radard', 'plannerd'], lr, captured_output_store=output_store)

# for each replayed process, an entry of the form { 'out': '...', 'err': '...' } is added to the provided dictionary
print(output_store['radard']['out']) # radard stdout
print(output_store['radard']['err']) # radard stderr