Events: Refactor OnroadEventSP structure and add upstream cereal validation (#722)

* Refactor OnroadEventSP structure to contain list of events

Restructures OnroadEventSP to hold a list of 'Event' substructures and updates every file that uses OnroadEventSP accordingly. Grouping events under a single struct makes it simpler to manage multiple concurrent events.
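Illustratively (plain Python dicts stand in for the Cap'n Proto structs; field names are taken from the schema in the diff below), the wire shape moves from a bare list of event structs to a single wrapper struct carrying the list:

```python
# Before the refactor: the message field carried a bare list of event structs.
old_msg = [{"name": "lkasEnable", "enable": True},
           {"name": "lkasEnable", "noEntry": True}]

# After: one OnroadEventSP wrapper holds the list under an 'events' field.
new_msg = {"events": [{"name": "lkasEnable", "enable": True},
                      {"name": "lkasEnable", "noEntry": True}]}

def event_names(msg: dict) -> list[str]:
  # Consumers now iterate msg["events"] instead of the message itself.
  return [e["name"] for e in msg["events"]]
```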

* Rename `OnroadEventSP` to `OnroadEventsSP` across codebase.

Updated all references from `OnroadEventSP` to the renamed struct `OnroadEventsSP`. This keeps naming consistent across modules and improves code clarity.
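A codebase-wide rename like this can be sketched as a small sweep script (a hypothetical helper, not part of this commit; real renames should also cover `.capnp` schemas and be reviewed by hand):

```python
from pathlib import Path

def rename_identifier(root: str, old: str, new: str,
                      suffixes=(".py", ".capnp", ".cc", ".h")) -> int:
  """Replace occurrences of `old` with `new` in files under `root`.

  Returns the number of files touched."""
  touched = 0
  for path in Path(root).rglob("*"):
    if not path.is_file() or path.suffix not in suffixes:
      continue
    text = path.read_text()
    if old in text:
      path.write_text(text.replace(old, new))
      touched += 1
  return touched
```

Note that `str.replace` is substring-based; it is safe here because `OnroadEventSP` is not a substring of `OnroadEventsSP`, but in general a regex with word boundaries is the safer choice.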

* Add optional debug logging to schema validation script

Introduced a `DEBUG` flag and a `print_debug` function to streamline debug output management. This replaces direct `print` calls with conditional logging to control verbosity during execution.
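The pattern is a standard module-level verbosity gate; a minimal sketch mirroring the script's helper:

```python
DEBUG = False  # flip to True (or wire to an env var) for verbose output

def print_debug(string: str) -> None:
  # Only emit when the module-level flag is set; callers never branch themselves.
  if DEBUG:
    print(string)

print_debug("hidden by default")  # produces no output while DEBUG is False
```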

* Refactor structural validation logic in cereal test

Simplified the iteration over `read_instances`, removed redundant comparisons, and improved error handling so unreadable fields are detected reliably. Error messages were also updated for clearer debugging.
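The simplified check boils down to: for each union message read back, ask which field is set and try to access it. A stand-in sketch, with a stub object in place of a real Cap'n Proto reader (the stub's `which()` mirrors pycapnp's union API; the helper name is illustrative):

```python
class StubReader:
  """Mimics the parts of a Cap'n Proto union reader that the check touches."""
  def __init__(self, field: str, readable: bool = True):
    self._field = field
    if readable:
      setattr(self, field, object())  # attribute present -> getattr succeeds
  def which(self) -> str:
    return self._field

def all_fields_readable(instances) -> bool:
  compatible = True
  for struct in instances:
    try:
      getattr(struct, struct.which())  # unreadable fields raise here
    except Exception as e:
      print(f"Structural change detected: {struct.which()} is not readable: {e}")
      compatible = False
  return compatible
```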

* Update build command to include 'cereal' target in CI

Modified the scons build command in the selfdrive_tests workflow to build the 'cereal' target explicitly, ensuring the components the validation steps depend on are present during CI.

* Add workflow jobs to generate cereal validation artifacts and validate them against upstream

Adds two jobs to .github/workflows/selfdrive_tests.yaml: one that generates cereal validation artifacts and one that validates cereal against upstream. Together they generate cereal schema instances, build openpilot, and replay the saved instances against master. A new Python script (validate_sp_cereal_upstream.py) performs the schema instance generation and validation. The goal is to catch schema incompatibilities with upstream before they land.
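The round-trip idea behind the two jobs — serialize instances on the PR branch, read them back under master's schema, compare — can be sketched with a toy length-prefixed binary format standing in for Cap'n Proto's framing (function names here are illustrative, not the script's API):

```python
import struct as binstruct

def persist(blobs: list[bytes], filename: str) -> None:
  # Length-prefix each blob so several instances can share one file.
  with open(filename, "wb") as f:
    for blob in blobs:
      f.write(binstruct.pack("<I", len(blob)))
      f.write(blob)

def read_back(filename: str) -> list[bytes]:
  with open(filename, "rb") as f:
    data = f.read()
  blobs, off = [], 0
  while off < len(data):
    (n,) = binstruct.unpack_from("<I", data, off)
    off += 4
    blobs.append(data[off:off + n])
    off += n
  return blobs
```

For real messages, Cap'n Proto's `to_bytes()` / `read_multiple_bytes()` handle the framing; the toy format just shows why a count mismatch after read-back is a red flag for schema breakage.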

* Relocate cereal validation to a dedicated GitHub workflow

This commit moves cereal validation into a dedicated GitHub workflow, 'cereal_validation.yaml', with two jobs: one generating cereal validation artifacts and one validating cereal against the upstream project. Both previously lived as jobs in 'selfdrive_tests.yaml'; splitting them out keeps the workflows better organized and lets them be configured and triggered independently.

* Rename workflow to "cereal validation" for clarity.

Updated the workflow name in the GitHub Actions configuration to better reflect its purpose. This change improves maintainability and ensures clearer identification of the workflow's function.

* Add LFS configuration and GitLab SSH setup to workflow

Configure GitLab LFS URLs and SSH setup in the workflow, including GitLab's public host keys, so LFS operations run over a trusted connection and large files are fetched reliably.

* rename

* format

---------

Co-authored-by: Jason Wen <haibin.wen3@gmail.com>
This commit is contained in:
DevTekVE
2025-03-29 22:34:31 +01:00
committed by GitHub
parent 245605dc55
commit 4268d7a19c
6 changed files with 319 additions and 16 deletions


@@ -0,0 +1,77 @@
name: cereal validation

on:
  push:
    branches:
      - master
      - master-new
  pull_request:
    paths:
      - 'cereal/**'
  workflow_dispatch:
  workflow_call:
    inputs:
      run_number:
        default: '1'
        required: true
        type: string

concurrency:
  group: cereal-validation-ci-run-${{ inputs.run_number }}-${{ github.event_name == 'push' && (github.ref == 'refs/heads/master' || github.ref == 'refs/heads/master-new') && github.run_id || github.head_ref || github.ref }}-${{ github.workflow }}-${{ github.event_name }}
  cancel-in-progress: true

env:
  PYTHONWARNINGS: error
  BASE_IMAGE: openpilot-base
  BUILD: selfdrive/test/docker_build.sh base
  RUN: docker run --shm-size 2G -v $PWD:/tmp/openpilot -w /tmp/openpilot -e CI=1 -e PYTHONWARNINGS=error -e FILEREADER_CACHE=1 -e PYTHONPATH=/tmp/openpilot -e NUM_JOBS -e JOB_ID -e GITHUB_ACTION -e GITHUB_REF -e GITHUB_HEAD_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_RUN_ID -v $GITHUB_WORKSPACE/.ci_cache/scons_cache:/tmp/scons_cache -v $GITHUB_WORKSPACE/.ci_cache/comma_download_cache:/tmp/comma_download_cache -v $GITHUB_WORKSPACE/.ci_cache/openpilot_cache:/tmp/openpilot_cache $BASE_IMAGE /bin/bash -c

jobs:
  generate_cereal_artifact:
    name: Generate cereal validation artifacts
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true
      - uses: ./.github/workflows/setup-with-retry
      - name: Build openpilot
        run: ${{ env.RUN }} "scons -j$(nproc) cereal"
      - name: Generate the log file
        run: |
          ${{ env.RUN }} "cereal/messaging/tests/validate_sp_cereal_upstream.py -g -f schema_instances.bin" && \
          ls -la
          ls -la cereal/messaging/tests
      - name: 'Prepare artifact'
        run: |
          mkdir -p "cereal/messaging/tests/cereal_validations"
          cp cereal/messaging/tests/validate_sp_cereal_upstream.py "cereal/messaging/tests/cereal_validations/validate_sp_cereal_upstream.py"
          cp schema_instances.bin "cereal/messaging/tests/cereal_validations/schema_instances.bin"
      - name: 'Upload Artifact'
        uses: actions/upload-artifact@v4
        with:
          name: cereal_validations
          path: cereal/messaging/tests/cereal_validations

  validate_cereal_with_upstream:
    name: Validate cereal with Upstream
    runs-on: ubuntu-24.04
    needs: generate_cereal_artifact
    steps:
      - uses: actions/checkout@v4
        with:
          repository: 'commaai/openpilot'
          submodules: true
          ref: "refs/heads/master"
      - uses: ./.github/workflows/setup-with-retry
      - name: Build openpilot
        run: ${{ env.RUN }} "scons -j$(nproc) cereal"
      - name: Download build artifacts
        uses: actions/download-artifact@v4
        with:
          name: cereal_validations
          path: cereal/messaging/tests/cereal_validations
      - name: 'Run the validation'
        run: |
          chmod +x cereal/messaging/tests/cereal_validations/validate_sp_cereal_upstream.py
          ${{ env.RUN }} "cereal/messaging/tests/cereal_validations/validate_sp_cereal_upstream.py -r -f cereal/messaging/tests/cereal_validations/schema_instances.bin"


@@ -102,19 +102,23 @@ struct LongitudinalPlanSP @0xf35cc4560bbf6ec2 {
}
 struct OnroadEventSP @0xda96579883444c35 {
-  name @0 :EventName;
+  events @0 :List(Event);

-  # event types
-  enable @1 :Bool;
-  noEntry @2 :Bool;
-  warning @3 :Bool; # alerts presented only when enabled or soft disabling
-  userDisable @4 :Bool;
-  softDisable @5 :Bool;
-  immediateDisable @6 :Bool;
-  preEnable @7 :Bool;
-  permanent @8 :Bool; # alerts presented regardless of openpilot state
-  overrideLateral @10 :Bool;
-  overrideLongitudinal @9 :Bool;
+  struct Event {
+    name @0 :EventName;
+
+    # event types
+    enable @1 :Bool;
+    noEntry @2 :Bool;
+    warning @3 :Bool; # alerts presented only when enabled or soft disabling
+    userDisable @4 :Bool;
+    softDisable @5 :Bool;
+    immediateDisable @6 :Bool;
+    preEnable @7 :Bool;
+    permanent @8 :Bool; # alerts presented regardless of openpilot state
+    overrideLateral @10 :Bool;
+    overrideLongitudinal @9 :Bool;
+  }

   enum EventName {
     lkasEnable @0;


@@ -2579,7 +2579,7 @@ struct Event {
   selfdriveStateSP @107 :Custom.SelfdriveStateSP;
   modelManagerSP @108 :Custom.ModelManagerSP;
   longitudinalPlanSP @109 :Custom.LongitudinalPlanSP;
-  onroadEventsSP @110 :List(Custom.OnroadEventSP);
+  onroadEventsSP @110 :Custom.OnroadEventSP;
   carParamsSP @111 :Custom.CarParamsSP;
   carControlSP @112 :Custom.CarControlSP;
   backupManagerSP @113 :Custom.BackupManagerSP;


@@ -0,0 +1,222 @@
#!/usr/bin/env python3
import argparse
import sys
from typing import Any, List, Tuple

DEBUG = False


def print_debug(string: str) -> None:
  if DEBUG:
    print(string)


def create_schema_instance(struct: Any, prop: Tuple[str, Any]) -> Any:
  """
  Create a new instance of a schema type, handling different field types.

  Args:
    struct: The Cap'n Proto schema structure
    prop: A tuple containing the field name and field metadata

  Returns:
    A new initialized schema instance
  """
  struct_instance = struct.new_message()
  field_name, field_metadata = prop
  try:
    field_type = field_metadata.proto.slot.type.which()

    # Initialize different types of fields
    if field_type in ('list', 'text', 'data'):
      struct_instance.init(field_name, 1)
      print_debug(f"Initialized list/text/data field: {field_name}")
    elif field_type in ('struct', 'object'):
      struct_instance.init(field_name)
      print_debug(f"Initialized struct/object field: {field_name}")

    return struct_instance
  except Exception as e:
    print(f"Error creating instance for {field_name}: {e}")
    return None


def get_schema_fields(schema_struct: Any) -> List[Tuple[str, Any]]:
  """
  Retrieve all fields from a given schema structure.

  Args:
    schema_struct: The Cap'n Proto schema structure

  Returns:
    A list of field names and their metadata
  """
  try:
    # Get all fields from the schema
    schema_fields = list(schema_struct.schema.fields.items())
    print_debug("Discovered schema fields:")
    for field_name, field_metadata in schema_fields:
      print_debug(f"- {field_name}")
    return schema_fields
  except Exception as e:
    print(f"Error retrieving schema fields: {e}")
    return []


def generate_schema_instances(schema_struct: Any) -> List[Any]:
  """
  Generate instances for all fields in a given schema.

  Args:
    schema_struct: The Cap'n Proto schema structure

  Returns:
    A list of schema instances
  """
  schema_fields = get_schema_fields(schema_struct)
  instances = []
  for field_prop in schema_fields:
    try:
      instance = create_schema_instance(schema_struct, field_prop)
      if instance is not None:
        instances.append(instance)
    except Exception as e:
      print(f"Skipping field due to error: {e}")
  print(f"Generated {len(instances)} schema instances")
  return instances


def persist_instances(instances: List[Any], filename: str) -> None:
  """
  Write schema instances to a binary file.

  Args:
    instances: List of schema instances
    filename: Output file path
  """
  try:
    with open(filename, 'wb') as f:
      for instance in instances:
        f.write(instance.to_bytes())
    print(f"Successfully wrote {len(instances)} instances to {filename}")
  except Exception as e:
    print(f"Error persisting instances: {e}")
    sys.exit(1)


def read_instances(filename: str, schema_type: Any) -> List[Any]:
  """
  Read schema instances from a binary file.

  Args:
    filename: Input file path
    schema_type: The schema type to use for reading

  Returns:
    A list of read schema instances
  """
  try:
    with open(filename, 'rb') as f:
      data = f.read()
    instances = list(schema_type.read_multiple_bytes(data))
    print(f"Read {len(instances)} instances from {filename}")
    return instances
  except Exception as e:
    print(f"Error reading instances: {e}")
    sys.exit(1)


def compare_schemas(original_instances: List[Any], read_instances: List[Any]) -> bool:
  """
  Compare original and read-back instances to detect potential breaking changes.

  Args:
    original_instances: List of originally generated instances
    read_instances: List of instances read back from file

  Returns:
    Boolean indicating whether schemas appear compatible
  """
  if len(original_instances) != len(read_instances):
    print("❌ Schema Compatibility Warning: Instance count mismatch")
    return False

  compatible = True
  for struct in read_instances:
    try:
      getattr(struct, struct.which())  # Attempting to access the field to validate readability
    except Exception as e:
      print(f"❌ Structural change detected: {struct.which()} is not readable.\nFull error: {e}")
      compatible = False
  return compatible


def main():
  """
  CLI entry point for schema compatibility testing.
  """
  # Setup argument parser
  parser = argparse.ArgumentParser(
    description='Cap\'n Proto Schema Compatibility Testing Tool',
    epilog='Test schema compatibility by generating and reading back instances.'
  )

  # Add mutually exclusive group for generation or reading mode
  mode_group = parser.add_mutually_exclusive_group(required=True)
  mode_group.add_argument('-g', '--generate', action='store_true',
                          help='Generate schema instances')
  mode_group.add_argument('-r', '--read', action='store_true',
                          help='Read and validate schema instances')

  # Common arguments
  parser.add_argument('-f', '--file',
                      default='schema_instances.bin',
                      help='Output/input binary file (default: schema_instances.bin)')

  # Parse arguments
  args = parser.parse_args()

  # Import the schema dynamically
  try:
    from cereal import log
    schema_type = log.Event
  except ImportError:
    print("Error: Unable to import schema. Ensure 'cereal' is installed.")
    sys.exit(1)

  # Execute based on mode
  if args.generate:
    print("🔧 Generating Schema Instances")
    instances = generate_schema_instances(schema_type)
    persist_instances(instances, args.file)
    print("✅ Instance generation complete")
  elif args.read:
    print("🔍 Reading and Validating Schema Instances")
    generated_instances = generate_schema_instances(schema_type)
    read_back_instances = read_instances(args.file, schema_type)

    # Compare schemas
    if compare_schemas(generated_instances, read_back_instances):
      print("✅ Schema Compatibility: No breaking changes detected")
      sys.exit(0)
    else:
      print("❌ Potential Schema Breaking Changes Detected")
      sys.exit(1)


if __name__ == "__main__":
  main()


@@ -509,9 +509,9 @@ class SelfdriveD(CruiseHelper):
     # onroadEventsSP - logged every second or on change
     if (self.sm.frame % int(1. / DT_CTRL) == 0) or (self.events_sp.names != self.events_sp_prev):
-      ce_send_sp = messaging.new_message('onroadEventsSP', len(self.events_sp))
+      ce_send_sp = messaging.new_message('onroadEventsSP')
       ce_send_sp.valid = True
-      ce_send_sp.onroadEventsSP = self.events_sp.to_msg()
+      ce_send_sp.onroadEventsSP.events = self.events_sp.to_msg()
       self.pm.send('onroadEventsSP', ce_send_sp)

       self.events_sp_prev = self.events_sp.names.copy()


@@ -26,7 +26,7 @@ class EventsSP(EventsBase):
     return EVENT_NAME_SP[event]

   def get_event_msg_type(self):
-    return custom.OnroadEventSP
+    return custom.OnroadEventSP.Event


 EVENTS_SP: dict[int, dict[str, Alert | AlertCallbackType]] = {