From Prototype to Automotive-grade: Integrating Software Verification (WCET) into Your CI Pipeline


deploy
2026-01-29
11 min read

Integrate WCET and timing analysis into CI to catch real-time regressions early—practical patterns, scripts, and 2026 tooling trends (VectorCAST + RocqStat).

Ship faster without losing real-time guarantees: add WCET to your CI/CD

If you deliver embedded or automotive software, you already know the pain: test automation and CI speed up functional validation, but timing bugs still escape until late-stage integration or road testing. The result is costly rework, delayed releases, and failed safety audits. In 2026 this risk is no longer acceptable—safety standards, multicore complexity, and new regulations demand timing-aware verification early in CI. This article shows how to integrate WCET and timing analysis tools (for example, the newly unified VectorCAST + RocqStat toolchain) into CI/CD so you catch timing regressions before they reach HIL or vehicles.

Why timing verification must live in CI (2026 context)

Over the last two years the industry has moved timing verification left. In January 2026 Vector Informatik announced the acquisition of StatInf’s RocqStat technology and a roadmap to integrate timing-analysis and WCET estimation directly into VectorCAST. The move reflects a broader trend: teams need a single, automated workflow that combines functional testing, coverage analysis and timing analysis to meet ISO 26262, EN 50128 and domain-specific timing requirements.

Modern embedded systems are more complex—multi-core ECUs, deep pipelines, caches, and dynamic scheduling make execution time non-deterministic unless actively analyzed. Waiting for late-stage integration to discover a missed deadline is expensive. Integrating WCET into CI gives teams earlier visibility into timing regressions, enforces timing budgets as part of your pull-request gates and supports repeatable, auditable verification for safety certification.

What WCET in CI actually buys you

  • Automatic timing regression detection: Fail a build when a new commit increases worst-case execution time beyond your budget.
  • Traceability for certification: Artifacts, reports and auditable runs stored alongside test results.
  • Faster feedback: Developers get timing results in the same pipeline they already use for unit tests.
  • Unified toolchain: VectorCAST + RocqStat (post-acquisition) points toward an integrated UX for test & WCET workflows.
  • Operationalized timing policy: Implement organization-wide rules (e.g., margin >= 20%) enforced automatically.

WCET integration patterns for CI/CD

Choose an integration pattern that matches your risk profile, available hardware and certification targets. In practice teams use one of three common patterns—or a hybrid of them.

1) Static WCET analysis in CI (fast, conservative)

Static analysis tools compute a safe upper bound on execution time from the code and target model without running on hardware. This is fast and repeatable and fits well in containerized CI runners.

  • Pros: No hardware required, repeatable, conservative (suitable for early gating).
  • Cons: May be overly pessimistic for complex microarchitectures (caches, pipelines).

When integrating static WCET into CI, run the tool after compilation and before functional test reporting. Save its XML/JSON report and fail the build if the reported WCET exceeds your threshold.

2) Measurement-based WCET in CI (empirical, requires hardware)

Measurement-based approaches execute the instrumented binary on target hardware or a deterministic simulator and record execution times. This produces realistic numbers but requires deterministic hardware setups and careful environment control.

  • Pros: Realistic timing data, useful for micro-architectural effects.
  • Cons: Need hardware in CI runners or HIL setup; less repeatable if environment varies.

3) Hybrid (static + measurement)

Combine static analysis to get safe upper bounds and measurement-based runs to reduce pessimism where justified. Use static WCET to gate merges and measurement runs to optimize code later in a nightly or release pipeline.
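
One way to wire this split in GitHub Actions is to trigger the fast static gate on pull requests and the measurement run on a nightly schedule; the trigger block below is a sketch (job definitions elided, cron time arbitrary):

```yaml
# Sketch: static WCET gate on every PR, measurement-based refinement nightly
on:
  pull_request:           # fast, conservative static gate
  schedule:
    - cron: "0 2 * * *"   # nightly measurement/HIL run
```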

Practical pipeline: stage-by-stage example

Below is a pragmatic CI pipeline that introduces WCET into an embedded CI/CD workflow. It is organized so developers get quick feedback for every commit while deeper timing verification runs in nightly builds.

  1. Build — Cross-compile the code with reproducible toolchain versions (pin compilers and linker flags).
  2. Unit tests — Run VectorCAST or your unit test framework; collect coverage artifacts.
  3. Static WCET — Run RocqStat (or your static WCET tool) to estimate WCET from the binary and target model.
  4. Timing gate — Parse the WCET report; fail on threshold breaches.
  5. Measurement / HIL (nightly) — Run measurement-based WCET on hardware-in-the-loop and update metrics in the dashboard.
  6. Release — Only artifacts that pass the timing gate are candidate releases; store WCET reports along with images for audits.

Example: GitHub Actions snippet (conceptual)

# .github/workflows/embedded-ci.yml
name: Embedded CI with WCET
on: [push, pull_request]
jobs:
  build-and-test:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      - name: Setup toolchain
        run: | 
          sudo apt-get update && sudo apt-get install -y gcc-arm-none-eabi # pin exact versions in real pipelines
      - name: Build
        run: make all CROSS_COMPILE=arm-none-eabi- 
      - name: Run unit tests (VectorCAST)
        run: |
          ./vectorcast_cli --project my_project --run-unit-tests --report-unit-tests=reports/unittests.xml
      - name: Run static WCET (RocqStat)
        run: |
          ./rocqstat_cli --binary build/target.elf --target-model targets/arm-m0.model --output reports/wcet.json
      - name: Fail on WCET breach
        run: |
          wcet=$(jq '.worst_case_time_ms' reports/wcet.json)
          budget=5.0
          echo "WCET=$wcet ms (budget ${budget} ms)"
          if (( $(echo "$wcet > $budget" | bc -l) )); then
            echo "WCET budget exceeded"; exit 1
          fi

The commands above are conceptual: your vendor tool's CLI and flags will differ. The important pattern is: (1) run static WCET after the build, (2) emit a machine-readable report (JSON/XML), and (3) enforce a pass/fail rule in CI.
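
As a concrete illustration of step (2), here is a hypothetical report plus the query a gate would consume. Real tools define their own schemas, so treat every field name as a placeholder:

```shell
# Hypothetical machine-readable WCET report and the query a timing gate consumes.
# Real tools define their own schemas; treat every field name as a placeholder.
cat > wcet.json <<'EOF'
{
  "task": "brake_ctrl_step",
  "analysis": "static",
  "worst_case_time_ms": 4.2
}
EOF
jq '.worst_case_time_ms' wcet.json   # prints 4.2
```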

How to implement a reliable WCET stage

A naive WCET check gives false confidence unless you control the environment and model the target correctly. Follow these practical steps:

  1. Pin toolchain versions: Use the same compiler, linker and optimization flags that you will use for the target image. Small compiler changes can alter control-flow and timing.
  2. Model your hardware: Static WCET depends on an accurate target model (cache sizes, pipeline, timers). Keep models under version control and evolve them in a model-change review process.
  3. Lock runtime conditions: Disable CPU frequency scaling and dynamic power modes on CI hardware; for measurement runs, set deterministic test harnesses.
  4. Automate artefact storage: Save WCET reports, maps, CFGs and VectorCAST coverage reports to an artifact store for audits.
  5. Run nightly deep analysis: Use hybrid or measurement-based approaches overnight to refine WCETs and de-risk borderline static estimates.
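
Step 1 can be backed by a small guard that runs before the build; this sketch assumes a `toolchain.lock` convention (not a standard file) holding the pinned compiler version:

```shell
#!/usr/bin/env bash
# Sketch: fail fast when the CI compiler drifts from the pinned version.
set -euo pipefail

check_toolchain() {              # args: <compiler> <pinned-version>
  local cc=$1 pinned=$2 actual
  actual=$("$cc" -dumpversion)   # gcc and clang both support -dumpversion
  if [ "$actual" != "$pinned" ]; then
    echo "ERROR: toolchain drift (pinned $pinned, found $actual)" >&2
    return 1
  fi
  echo "Toolchain OK: $cc $actual"
}

# In a real pipeline, pin the exact cross-compiler, e.g.:
# check_toolchain arm-none-eabi-gcc 10.3.1
```

Failing here, rather than at the WCET stage, makes it obvious that a timing delta came from the environment rather than the code.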

Handling multicore and interference

In automotive-grade systems, multicore interference (shared caches, buses, memory controllers) often dominates worst-case behaviour. Static WCET must be combined with platform-level analyses:

  • Use platform models that include interference channels or assume maximum interference for safety-critical tasks.
  • Introduce WCET + scheduling analysis: integrate WCET numbers into your response-time analysis tool (RTA) or real-time schedulability checks.
  • Where possible, use time/space partitioning or software isolation to reduce interference and tighten WCET estimates.
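
The second bullet — feeding WCET numbers into response-time analysis — can be sketched for fixed-priority scheduling with the classic recurrence R_i = C_i + Σ_j ⌈R_i/T_j⌉·C_j over higher-priority tasks j. The task set below is invented and deadlines are taken equal to periods:

```shell
#!/usr/bin/env bash
# Sketch: feed per-task WCETs into fixed-priority response-time analysis.
# Recurrence: R_i = C_i + sum over higher-priority j of ceil(R_i/T_j) * C_j.
# Integer milliseconds, deadline = period, highest-priority task first.
set -euo pipefail
printf '%s\n' "steer 1 4" "brake 2 6" "adas 3 12" > tasks.txt   # name wcet_ms period_ms

awk '
{ n++; C[n] = $2; T[n] = $3; name[n] = $1 }
END {
  for (i = 1; i <= n; i++) {
    R = C[i]; prev = 0
    # iterate the recurrence to a fixed point; stop early on a deadline miss
    while (R != prev && R <= T[i]) {
      prev = R; R = C[i]
      for (j = 1; j < i; j++)
        R += int((prev + T[j] - 1) / T[j]) * C[j]   # ceil(prev/T[j]) * C[j]
    }
    printf "%s: response=%d deadline=%d %s\n", name[i], R, T[i], ((R <= T[i]) ? "OK" : "MISS")
  }
}' tasks.txt | tee rta.txt
```

A pipeline can then gate on any MISS line (e.g. `grep -q MISS rta.txt && exit 1`) exactly like the WCET budget check.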

Advanced strategies: data-driven and probabilistic approaches

By 2026, teams are adopting mixed strategies to keep WCET useful and actionable. Two advanced approaches are:

  • Probabilistic WCET (pWCET): Used where absolute worst-case bounds are too pessimistic. pWCET provides probabilistic guarantees (e.g., 10^-9 exceedance probability) and is appropriate when acceptable by your certification path.
  • Statistical validation & ML-assisted models: Use statistical analysis of trace data to refine micro-architectural models. Tool vendors (including the RocqStat team pre-acquisition) have started incorporating analytics to better predict cache/pipeline effects—treat these as supplements, not replacements for formal bounds when you need absolute safety margins.

CI runners, hardware labs, and HIL orchestration

Measurement-based WCET requires controlled hardware. Options in CI include:

  • Dedicated hardware runners: Self-hosted GitLab/GitHub runners attached to benches where the power state and clock are controllable.
  • HIL farms: Use scheduler software to queue tests on shared HIL racks; provide isolation to reproduce tests.
  • Simulation with cycle-accurate models: For early iterations, use a deterministic cycle-accurate simulator in CI (slower, but reproducible).

Integrating VectorCAST + RocqStat: a practical note

Vector's 2026 acquisition of StatInf’s RocqStat unifies two powerful capabilities: unit and integration test automation (VectorCAST) with advanced timing/WCET estimation (RocqStat). This integration simplifies report correlation—coverage vs. WCET—and reduces manual artifact wrangling during audits.

"Timing safety is becoming a critical ..." — Eric Barton, SVP of Code Testing Tools, Vector Informatik (Automotive World, Jan 2026)

Practically, expect these benefits as the integration matures:

  • Single CLI and reporting format for unit test results and WCET estimates.
  • Built-in policies to fail builds when a test increases WCET beyond configured margins.
  • Traceability from test case to timing evidence for certification packs.

Example fail-on-regression script (bash)

Use a small script in your pipeline to gate merges on timing. This example assumes your tool writes a JSON field worst_case_time_ms.

#!/usr/bin/env bash
set -euo pipefail
REPORT=${1:?usage: $0 <wcet-report.json> <budget_ms>}   # e.g. reports/wcet.json
BUDGET_MS=${2:?usage: $0 <wcet-report.json> <budget_ms>}
# jq -e exits non-zero if the field is missing or null, so a malformed
# report fails the gate instead of passing silently
wcet=$(jq -e '.worst_case_time_ms' "$REPORT")
echo "WCET: $wcet ms (budget: $BUDGET_MS ms)"
if (( $(echo "$wcet > $BUDGET_MS" | bc -l) )); then
  echo "ERROR: WCET exceeds budget" >&2
  exit 1
fi
echo "WCET within budget"

Metrics and dashboards to track

Add timing metrics to your CI dashboards so teams can visualize trends and hotspots:

  • WCET value per build and per commit
  • WCET margin = (budget - wcet) / budget
  • Trend of WCET deltas by author or component
  • Correlation of coverage improvements with WCET changes

Common pitfalls and how to avoid them

  • Pitfall: Running WCET only on main branch. Fix: run quick static WCET on PRs to catch regressions early.
  • Pitfall: Using different build flags in CI vs release. Fix: centralize build flags in one file and pin compiler versions; consider a patch orchestration runbook for consistent deployments.
  • Pitfall: Ignoring multicore effects. Fix: include platform-level interference models or partitioning strategies.
  • Pitfall: Storing WCET results only in emails. Fix: archive JSON/XML results into your artifact repository for audits.
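
For the last pitfall, most CI systems can archive reports natively; in GitHub Actions the standard upload-artifact action does this (paths are examples):

```yaml
- name: Archive WCET evidence
  uses: actions/upload-artifact@v4
  with:
    name: wcet-evidence
    path: |
      reports/wcet.json
      reports/unittests.xml
```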

Short runbook: Get WCET into your CI in 6 steps

  1. Choose a WCET approach: static, measurement or hybrid.
  2. Pin compilers, linker flags and tool versions in CI images.
  3. Model your target platform accurately for static tools; version-control models.
  4. Integrate WCET run and report generation into your normal pipeline after build.
  5. Implement automated gating script to fail the build when WCET > budget.
  6. Set up nightly measurement/HIL runs and maintain dashboards for trends and audits.

Looking ahead: 2026 tooling trends

Through 2026, expect the following developments:

  • Unified verification toolchains: Mergers (like Vector + RocqStat) drive unified UX for functional and timing verification, simplifying CI integration.
  • Better modeling for multicore: Tool vendors will ship improved interference models, making static WCET more practical on complex platforms.
  • Automated evidence packs: CI systems will generate pre-formatted certification artifacts (coverage + WCET + trace) to accelerate audits.
  • Edge-AI assisted analysis: Emerging research and vendor features will use ML to suggest hotspots, but regulators will still expect formal or conservative bounds for safety-critical tasks.

Actionable takeaways

  • Start small: add static WCET analysis to PR pipelines today to catch regressions quickly.
  • Automate pass/fail rules: treat timing budgets like tests—fail on violation.
  • Keep artifacts: store WCET reports, models and test traces in your CI artifact storage for traceability.
  • Mix methods: use static analysis for gating and measurement-based analysis in nightly runs to refine margins.
  • Prepare for audits: combine VectorCAST test evidence with RocqStat WCET reports to build certification-friendly evidence packs.

Conclusion — Move timing left, keep releases reliable

Integrating WCET and timing analysis into CI moves timing validation from ad-hoc late-stage validation to an automated, auditable part of your delivery pipeline. In 2026, unified toolchains and better platform models make it practical to enforce timing budgets on every pull request. Start by adding static WCET checks in PR pipelines, then expand to measurement-based HIL runs and full certification evidence automation. That practice reduces risk, shortens feedback loops and makes automotive-grade timing assurance part of normal developer workflows.

Next steps

Ready to add timing gates to your CI? Trial a static WCET analysis step on a representative module this week: pin your toolchain, run a WCET tool after build, and add a simple fail-on-regression script. If you want a template pipeline or an audit-ready evidence pack that links VectorCAST tests with WCET reports, contact us for a hands-on workshop.

Want a ready-made pipeline? Reach out to schedule a consultation or get a sample GitHub Actions/GitLab CI template that wires VectorCAST and WCET analysis into your build and release flow.


Related Topics

#embedded · #software verification · #CI/CD

deploy

Contributor

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
