VectorCAST + RocqStat automation: sample scripts to batch WCET runs and produce compliance reports
Stop re-running manual WCET workflows — automate VectorCAST + RocqStat for repeatable certification reports
If you spend days stitching together WCET analyses for different builds, configurations, and hardware targets, you're paying with time and audit risk. In 2026, with Vector's acquisition of RocqStat and tighter regulatory scrutiny across automotive and avionics, teams need repeatable, auditable automation that runs VectorCAST + RocqStat at scale and emits certification-ready reports.
Why this matters in 2026
Late 2025 and early 2026 saw two trends accelerate: (1) vendors (notably Vector) integrating advanced timing-analysis (RocqStat) into mainstream test toolchains, and (2) regulators demanding more explicit, machine-readable evidence of timing analysis for safety standards like ISO 26262 and avionics equivalents. Vector announced the StatInf/RocqStat acquisition in January 2026, signalling unified toolchains for test+timing. That means teams must adapt their automation so WCET outputs are traceable and repeatable across the consolidated toolchain.
"Timing safety is becoming a critical..." — Vector (acquisition announcement, Jan 2026)
What you'll get from the reusable scripts in this guide
- Sample bash and Python scripts that run VectorCAST builds and invoke RocqStat runs in batch
- Patterns to collect outputs (JSON/XML), aggregate WCET numbers, and generate formatted HTML/CSV reports
- CI/CD pipeline examples (GitHub Actions) to plug into release pipelines and produce certification artifacts
- Compliance checklist and report templates suitable for certification folders
Design principles — what an automation should guarantee
Before code, design to make WCET automation:
- Reproducible: fixed environment (container/VM), pinned tool versions and build flags
- Auditable: every run includes metadata (git commit, toolchain versions, machine ID) and signed artifacts
- Machine-readable: request tool outputs in JSON/XML to avoid brittle parsing of human console logs
- Traceable: map WCET results to requirements and test cases for certification
- Secure: protect credentials, sign reports, control who can run timing tools
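To make the "auditable" principle concrete, here is a minimal Python sketch that gathers run metadata before any tool invocation. The field names (`git_commit`, `machine_id`, and so on) are illustrative, not a fixed schema:

```python
import json
import platform
import subprocess
from datetime import datetime, timezone

def collect_provenance(config: str) -> dict:
    """Gather run metadata for the provenance record (illustrative schema)."""
    try:
        commit = subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"], text=True).strip()
    except Exception:
        commit = "no-git"  # fall back when not run inside a git checkout
    return {
        "config": config,
        "git_commit": commit,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "machine_id": platform.node(),
    }

record = collect_provenance("O2_200MHz")
print(json.dumps(record, indent=2))
```

Write this record next to every run's outputs so an auditor can tie results back to an exact build state.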
How the sample workflow works (summary)
- Checkout code and determine build matrix (configs, CPU frequency, optimization flags)
- For each configuration: run VectorCAST build/test via CLI to generate instrumented binary and unit test mappings
- Invoke RocqStat (or Vector-integrated timing tool) to run WCET estimation; export machine-readable output
- Aggregate outputs into a canonical JSON and generate an HTML/CSV report with timestamps and provenance
- Sign and upload artifacts to an artifact store (S3, Nexus) and publish a release note with summarized WCETs
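The build-matrix step above can be sketched as a simple cross-product of configuration axes. The naming scheme (optimization level plus CPU frequency, e.g. `O2_200MHz`) follows the examples used later in this guide; your actual axes will differ:

```python
from itertools import product

OPT_LEVELS = ["O2", "O3"]       # example axis: compiler optimization
CPU_FREQS = ["200MHz", "400MHz"]  # example axis: target clock frequency

def build_matrix(opts, freqs):
    """Expand axes into config names like 'O2_200MHz'."""
    return [f"{o}_{f}" for o, f in product(opts, freqs)]

configs = build_matrix(OPT_LEVELS, CPU_FREQS)
print(configs)  # ['O2_200MHz', 'O2_400MHz', 'O3_200MHz', 'O3_400MHz']
```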
Prerequisites and environment recommendations
- Licensed installations of VectorCAST and RocqStat, or a VectorCAST release that bundles RocqStat integration (post-acquisition)
- Access to a build machine with the same CPU architecture used for analysis; containerization simplifies reproducibility
- CLI access to VectorCAST/RocqStat — configure tools to export JSON/XML results where possible
- Python 3.10+ and common packages (requests, jinja2) for report generation
Example: Bash driver to run a build matrix
The bash driver below is deliberately generic. Replace CLI calls with the exact VectorCAST & RocqStat commands your licensed tools expose. The script:
- Iterates build configurations
- Invokes VectorCAST build/test and requests machine-readable artifacts
- Invokes RocqStat to estimate WCET and writes JSON
#!/usr/bin/env bash
# run_wcets.sh - example driver
set -euo pipefail
BASE_DIR="$(pwd)"
OUT_DIR="$BASE_DIR/wcet_runs"
mkdir -p "$OUT_DIR"
# matrix: list of configurations (example: optimization levels, CPU frequencies)
CONFIGS=("O2_200MHz" "O3_400MHz")
# Replace these with your actual tool binaries or wrapper scripts
VECTORCAST_CLI="/opt/vectorcast/bin/vcast_cli" # placeholder
ROCQSTAT_CLI="/opt/rocqstat/bin/rocqstat_cli" # placeholder
# metadata helper
git_commit=$(git rev-parse --short HEAD || echo "no-git")
start_ts=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
for cfg in "${CONFIGS[@]}"; do
  run_dir="$OUT_DIR/$cfg-$(date +%s)"
  mkdir -p "$run_dir"
  echo "[INFO] Running configuration: $cfg"

  # Build / prepare VectorCAST project for this config
  # Example: specifying optimization and instrument flags
  # NOTE: replace --project and flags to match your VectorCAST CLI
  echo "[INFO] Invoking VectorCAST for $cfg"
  "$VECTORCAST_CLI" --project my_project --config "$cfg" --export-results "$run_dir/vectorcast_results.xml"

  # Run RocqStat for timing/WCET estimation; request JSON output
  echo "[INFO] Invoking RocqStat for $cfg"
  "$ROCQSTAT_CLI" --input "$run_dir/vectorcast_results.xml" --cpu-frequency "$(echo "$cfg" | awk -F_ '{print $2}')" --output "$run_dir/rocqstat.json"

  # Collect provenance
  cat > "$run_dir/provenance.json" <<EOF
{
  "config": "$cfg",
  "git_commit": "$git_commit",
  "timestamp": "$start_ts",
  "tools": {
    "vectorcast": "$("$VECTORCAST_CLI" --version 2>/dev/null || echo 'unknown')",
    "rocqstat": "$("$ROCQSTAT_CLI" --version 2>/dev/null || echo 'unknown')"
  }
}
EOF

  echo "[INFO] Stored run in $run_dir"
done
# After runs, call Python aggregator
python3 scripts/aggregate_wcet_reports.py --input "$OUT_DIR" --output "$OUT_DIR/summary.html"
Python aggregator: parse outputs, create an HTML/CSV report, and sign artifacts
This Python script reads per-run directories, extracts WCET data (assumed JSON from RocqStat), and writes a human-friendly HTML page and a CSV for CI systems.
#!/usr/bin/env python3
# scripts/aggregate_wcet_reports.py
import argparse
import json
import os
import csv
from datetime import datetime
try:
    from jinja2 import Template
except ImportError:
    print('Please pip install jinja2')
    raise

HTML_TEMPLATE = """
<html>
<head><title>WCET Summary</title></head>
<body>
<h1>WCET Summary (generated {{ ts }})</h1>
<table border="1">
  <tr><th>Config</th><th>WCET (us)</th><th>Path</th><th>Confidence</th><th>Git</th><th>Tools</th></tr>
  {% for r in runs %}
  <tr>
    <td>{{ r.config }}</td>
    <td>{{ r.wcet_us }}</td>
    <td>{{ r.path }}</td>
    <td>{{ r.confidence }}</td>
    <td>{{ r.git_commit }}</td>
    <td>{{ r.tools }}</td>
  </tr>
  {% endfor %}
</table>
</body>
</html>
"""
def parse_rocqjson(path):
    # Expected RocqStat JSON schema (example): {"wcet_us": 123.4, "path": "...", "confidence": 0.999}
    with open(path, 'r') as f:
        data = json.load(f)
    return {
        'wcet_us': data.get('wcet_us'),
        'path': data.get('worst_path', data.get('path', 'n/a')),
        'confidence': data.get('confidence', 'n/a')
    }
def main():
    p = argparse.ArgumentParser()
    p.add_argument('--input', required=True)
    p.add_argument('--output', required=True)
    args = p.parse_args()

    runs = []
    for d in sorted(os.listdir(args.input)):
        run_dir = os.path.join(args.input, d)
        if not os.path.isdir(run_dir):
            continue
        rocjson = os.path.join(run_dir, 'rocqstat.json')
        prov = os.path.join(run_dir, 'provenance.json')
        if not os.path.exists(rocjson):
            print('Skipping', run_dir, 'no rocqstat.json')
            continue
        r = parse_rocqjson(rocjson)
        prov_data = {}
        if os.path.exists(prov):
            with open(prov, 'r') as f:
                prov_data = json.load(f)
        runs.append({
            'config': prov_data.get('config', d),
            'wcet_us': r['wcet_us'],
            'path': r['path'],
            'confidence': r['confidence'],
            'git_commit': prov_data.get('git_commit', 'n/a'),
            'tools': prov_data.get('tools', {})
        })

    # write CSV
    csv_out = args.output.replace('.html', '.csv')
    with open(csv_out, 'w', newline='') as csvf:
        w = csv.writer(csvf)
        w.writerow(['config', 'wcet_us', 'confidence', 'git_commit'])
        for r in runs:
            w.writerow([r['config'], r['wcet_us'], r['confidence'], r['git_commit']])

    # render HTML
    tmpl = Template(HTML_TEMPLATE)
    with open(args.output, 'w') as f:
        f.write(tmpl.render(ts=datetime.utcnow().isoformat() + 'Z', runs=runs))
    print('Wrote', args.output, 'and', csv_out)


if __name__ == '__main__':
    main()
CI/CD integration example (GitHub Actions)
Run the bash driver on a self-hosted runner with licensed tools. The workflow below demonstrates artifact collection and a release-note step.
# .github/workflows/wcet.yml
name: WCET Batch
on:
  workflow_dispatch:
jobs:
  wcet:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - name: Run WCET batch
        run: |
          chmod +x run_wcets.sh
          ./run_wcets.sh
      - name: Upload reports
        uses: actions/upload-artifact@v4
        with:
          name: wcet-reports
          path: wcet_runs
      - name: Create Release Note
        run: |
          python3 scripts/aggregate_wcet_reports.py --input wcet_runs --output wcet_runs/summary.html
          echo "WCET summary generated" > release-note.txt
          # optionally add steps to publish release notes
Making reports certification-ready
Certification bodies expect evidence that ties results to requirements and to a reproducible environment. Include the following in every report bundle:
- Tool versions: VectorCAST, RocqStat, compiler, libc, OS kernel
- Exact CLI commands and configuration files used to produce the WCET
- Provenance: git commit hashes, build timestamps, build machine IDs — and solid file management to preserve those artifacts (see file management best practices).
- Input artifacts: binary, source snapshots or hashes, test vectors used
- WCET data: estimate, worst path, confidence interval/assumptions, measurement method (static/measurement-based/hybrid)
- Limitations and assumptions: e.g., caches disabled, tight loop bounds specified manually
- Signed artifacts: cryptographic signature of reports and hashes for tamper-evidence. For signing and compliance-first deployment approaches, consider serverless and compliance-first patterns.
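The tamper-evidence baseline is a hash manifest over the whole bundle; signatures are then computed over the manifest. A minimal sketch using SHA-256 (the manifest layout is an assumption, not a mandated format):

```python
import hashlib
import json
import tempfile
from pathlib import Path

def hash_bundle(bundle_dir: str) -> dict:
    """Return {relative_path: sha256_hex} for every file in the report bundle."""
    manifest = {}
    root = Path(bundle_dir)
    for p in sorted(root.rglob("*")):
        if p.is_file():
            manifest[str(p.relative_to(root))] = hashlib.sha256(
                p.read_bytes()).hexdigest()
    return manifest

# Example usage against a throwaway bundle directory:
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "summary.html").write_text("<html>WCET</html>")
    m = hash_bundle(d)
    print(json.dumps(m, indent=2))
```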
Sample compliance metadata JSON
{
  "project": "brake_controller",
  "git_commit": "abc123",
  "toolchain": {
    "vectorcast": "2026.1",
    "rocqstat": "1.4.0",
    "compiler": "gcc 12.2"
  },
  "reproducibility": {
    "container": "sha256:...",
    "build_flags": "-O2 -fno-builtin"
  },
  "wcet": {
    "config": "O2_200MHz",
    "wcet_us": 1250.4,
    "confidence": 0.999
  }
}
Advanced strategies and best practices
1) Use containers for reproducibility
Create a base container that includes VectorCAST and RocqStat CLIs (observe licensing). Save the image digest and include it in the provenance so auditors can recreate the environment. For storage and reproducibility, pairing containers with reliable object stores and NAS solutions keeps artifacts available; see reviews of object storage and cloud NAS.
2) Prefer machine-readable exports
Configure VectorCAST and RocqStat to export JSON or XML. If a tool only offers human text outputs, wrap it in a parsing layer and push to the vendor for a stable API.
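Until a stable machine-readable export exists, a thin regex wrapper can normalize console output into the same JSON shape the aggregator expects. The line format below is a made-up example, not real RocqStat output:

```python
import re

# Hypothetical console line; any real tool's output will differ.
WCET_LINE = re.compile(r"WCET\s*=\s*([\d.]+)\s*us\s*\(confidence\s*([\d.]+)\)")

def parse_console(text: str) -> dict:
    """Extract wcet_us/confidence from a human-readable log (fragile fallback)."""
    m = WCET_LINE.search(text)
    if not m:
        raise ValueError("no WCET line found; did the tool's output format change?")
    return {"wcet_us": float(m.group(1)), "confidence": float(m.group(2))}

sample = "Analysis done.\nWCET = 1250.4 us (confidence 0.999)\n"
print(parse_console(sample))  # {'wcet_us': 1250.4, 'confidence': 0.999}
```

Failing loudly when the pattern stops matching is the point: silent misparses are worse than a broken pipeline in a certification context.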
3) Parallelize with care
WCET analyses are CPU and memory intensive. Run parallel jobs only when hardware and licensing capacity permit; otherwise queue runs to protect result integrity.
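Queueing with a hard worker cap, for example matching your license-seat count, can be sketched with `concurrent.futures`; `run_one` here is a stand-in for the per-config tool invocation:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

LICENSE_SEATS = 2  # cap parallelism at available tool licenses (assumed value)

def run_one(cfg: str) -> tuple:
    """Stand-in for a full VectorCAST+RocqStat run for one configuration."""
    return (cfg, "ok")

configs = ["O2_200MHz", "O3_400MHz", "O2_400MHz"]
results = {}
with ThreadPoolExecutor(max_workers=LICENSE_SEATS) as pool:
    futures = {pool.submit(run_one, c): c for c in configs}
    for fut in as_completed(futures):
        cfg, status = fut.result()
        results[cfg] = status
print(results)
```

If runs share timing-sensitive hardware, drop `max_workers` to 1: result integrity beats throughput.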
4) Automate signing and tamper-evidence
Use CI job keys (stored in a key manager) to sign the final report bundle (e.g., using GPG or an internal signing service). Store signatures alongside artifacts in the artifact store.
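Where GPG or an internal signing service is unavailable, even a keyed HMAC over the report bytes gives a verifiable tag. A stdlib sketch; the environment-variable key source is an assumption about your CI secret store:

```python
import hashlib
import hmac
import os

def sign_report(report_bytes: bytes, key: bytes) -> str:
    """Return a hex HMAC-SHA256 tag for the report bundle."""
    return hmac.new(key, report_bytes, hashlib.sha256).hexdigest()

def verify_report(report_bytes: bytes, key: bytes, tag: str) -> bool:
    """Constant-time comparison against a previously issued tag."""
    return hmac.compare_digest(sign_report(report_bytes, key), tag)

# In CI the key would come from a key manager, e.g. os.environ["WCET_SIGNING_KEY"]
key = os.environ.get("WCET_SIGNING_KEY", "dev-only-key").encode()
report = b"<html>WCET Summary</html>"
tag = sign_report(report, key)
print(verify_report(report, key, tag))  # True
```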
5) Traceability to requirements
Link each WCET entry to requirement IDs and test case IDs. Produce a traceability matrix CSV that maps requirement -> test case -> WCET measurement. This dramatically reduces verification review time. For compliance checklists and templates you can adapt, see our compliance checklist reference: Compliance Checklist.
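The requirement -> test case -> WCET mapping can be emitted as a small CSV alongside the summary report; the requirement and test IDs below are placeholders:

```python
import csv
import io

# Placeholder links; in practice these come from your requirements tool export.
LINKS = [
    {"requirement": "REQ-TIM-001", "test_case": "TC_brake_42",
     "config": "O2_200MHz", "wcet_us": 1250.4},
]

def write_matrix(links, fh):
    """Write the traceability matrix as CSV to an open file handle."""
    w = csv.DictWriter(fh, fieldnames=["requirement", "test_case", "config", "wcet_us"])
    w.writeheader()
    w.writerows(links)

buf = io.StringIO()
write_matrix(LINKS, buf)
print(buf.getvalue())
```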
Security and licensing notes
- VectorCAST and RocqStat are licensed tools — automation must respect license servers and token secrets. Do not store licenses in source control.
- Protect access to WCET reports; they contain sensitive design and timing assumptions useful to attackers seeking to craft denial-of-service or timing attacks. See guidance on patch and vulnerability communication in the device space: Patch Communication Playbook.
- Validate and sanitize any third-party plugins or parsers you add to your automation pipeline.
Troubleshooting common issues
- Missing JSON output: check tool CLI flags and ensure export paths exist and are writable by the CI user
- Inconsistent WCETs between runs: check hardware frequency scaling, disable turbo, and ensure deterministic build flags
- Long run-times: profile RocqStat options — some algorithms expose heuristics/timeout parameters that bound execution time
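A quick jitter check across repeated runs helps catch the frequency-scaling problem above before it reaches an auditor; the 1% tolerance is an arbitrary example, tune it per project:

```python
def wcet_jitter(samples_us):
    """Relative spread of repeated WCET estimates for one configuration."""
    lo, hi = min(samples_us), max(samples_us)
    return (hi - lo) / hi

runs = [1250.4, 1251.0, 1249.8]  # example repeated estimates, microseconds
jitter = wcet_jitter(runs)
if jitter > 0.01:  # 1% tolerance (assumption)
    print(f"WARNING: WCET jitter {jitter:.2%}; check CPU frequency scaling")
else:
    print(f"OK: jitter {jitter:.2%}")
```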
Example: What a certification-ready report section looks like
Below is an abridged example entry for a single control loop. Include it as a page in your PDF or HTML bundle.
Control Loop: braking_control_main
Config: O2_200MHz
Git: abc123
Tools: VectorCAST 2026.1, RocqStat 1.4.0, gcc 12.2
WCET: 1250.4 us
Worst Path: init->sensor_read->filter->control_calc->actuate
Confidence: 0.999
Assumptions: cache flush on input, loop bound set to 8.
Reproducibility: container sha256:..., build flags -O2 -fno-builtin
Signed: wcet_summary.html.sig
Future-proofing: what to expect as Vector integrates RocqStat
Vector's acquisition of RocqStat (announced Jan 2026) drives closer integration between timing analysis and unit/integration testing. Expect:
- Tighter APIs between VectorCAST and RocqStat for exporting test mappings and path coverage directly into timing tools
- Unified reporting flows where a single CLI or SDK drives both testing and timing analysis
- Better artifact formats (structured JSON and formal metadata) aimed at certification workflows
Actionable takeaways
- Start by standardizing your build matrix and capturing exact CLI commands used for VectorCAST and RocqStat runs.
- Adopt the bash + Python pattern above: a light driver that runs the tools per-config and a Python aggregator that produces HTML/CSV artifacts.
- Containerize tools where feasible and include container digests in provenance for reproducibility.
- Automate signing and artifact upload so the release bundle is verifiable and audit-ready. For CI/CD and hosted-runner patterns that support secure pipelines, consider hosted tunnels and zero-downtime release patterns: Hosted Tunnels & Zero-Downtime Ops.
Where to go from here (next steps)
- Turn the example scripts into your project's CI job, adapt CLI flags to your licensed VectorCAST/RocqStat setup
- Create a minimal container image with the approved tool versions and pin its digest in the provenance
- Build a traceability matrix generator that links WCET rows to requirement IDs
- Start archiving signed report bundles to your compliance artifact store for auditors — pick a reliable object store or NAS and include SHA digests in the bundle (see object storage reviews and cloud NAS options).
Final thoughts & call-to-action
In 2026, timing analysis is no longer an isolated step — it's part of the verification pipeline. Automating VectorCAST + RocqStat runs and producing auditable, machine-readable reports saves time, reduces risk, and makes certification reviews straightforward. Use the patterns and scripts here as a starting point and adapt them to your toolchain and compliance needs.
Download the starter scripts, CI templates, and compliance JSON schema from our repo, adapt them to your VectorCAST/RocqStat setup, and share your improvements. Want help tailoring the automation to your CI environment or to sign artifacts for DO-178C/ISO 26262 traces? Contact us or open an issue on the project repo to get collaborative support.
Related Reading
- Review: Top Object Storage Providers for AI Workloads — 2026 Field Guide
- Case Study: Using Cloud Pipelines to Scale a Microjob App — Lessons from a 1M Downloads Playbook
- Field Review: Cloud NAS for Creative Studios — 2026 Picks
- Compliance Checklist for Prediction-Market Products Dealing with Payments Data