SodiusWillert Blog

ASPICE Traceability: What Evidence Do Assessors Typically Review?

Written by Célina Simon | Jan 28, 2026 1:45:09 PM

In a previous article, we outlined the main principles of Automotive SPICE, also known as ASPICE. Above all, we explained why traceability is such a central aspect of this framework and how assessors use it to evaluate whether engineering processes are applied consistently and outcomes remain verifiable over time. If you haven’t read it yet, or would like a refresher, you can access it via the link above.

This new article, also dedicated to ASPICE, moves from principles to practice. We explain where ASPICE explicitly expects traceability across the development lifecycle, from requirements to verification and support processes, and what concrete evidence assessors typically review. We also highlight common traceability gaps observed during assessments and how engineering teams can avoid them. We conclude with a few practical recommendations for establishing traceability as a continuous part of engineering activities, not just an assessment requirement.

TABLE OF CONTENTS

Where Does ASPICE Expect Traceability Across the Lifecycle?

Checklist of the Traceability Evidence Assessors Expect

Typical Traceability Gaps in ASPICE Assessments and How to Avoid Them

How to Make Traceability an Ongoing Engineering Practice

How Sodius Can Help You Build ASPICE-Compliant Traceability

Final Thoughts

FAQ

Where Does ASPICE Expect Traceability Across the Lifecycle?

1. Stakeholder, System, and Software Requirements (SYS.1, SYS.2, SWE.1)

ASPICE expects traceability across all requirement levels, including: 

  • Stakeholder needs (OEM specifications, market requirements, etc.) to system requirements. 
  • System requirements to software and hardware requirements. 

Typical links demonstrate how requirements flow from one level to the next. For example, a system requirement references the stakeholder requirement it refines, and a software requirement references the system requirement it decomposes. 

➡️ This makes the refinement process clear and transparent, preventing gaps and missing functionality. 

What evidence is required? 

Here, evidence typically comes from requirements management tools such as IBM DOORS Next or Siemens Polarion, where each requirement has a unique ID and change history, and where trace views or exported matrices display coverage and reveal any “orphan” requirements with no links.
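As an illustration, a minimal orphan check over such an export might look like the sketch below. The CSV columns (`id`, `text`, `links`) and the requirement IDs are hypothetical, not the actual schema of any particular tool:

```python
import csv
import io

# Hypothetical trace-matrix export: one row per requirement, with a
# semicolon-separated list of downstream links (empty for an "orphan").
EXPORT = """id,text,links
SYS-001,The system shall start within 2 seconds,SWR-010;SWR-011
SYS-002,The system shall encrypt stored data,SWR-012
SYS-003,The system shall support OTA updates,
"""

def find_orphans(csv_text):
    """Return IDs of requirements that have no downstream trace links."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["id"] for row in reader if not row["links"].strip()]

print(find_orphans(EXPORT))  # SYS-003 has no links and would be flagged
```

In practice, the same logic would run against a real tool export or API, but the principle is identical: every requirement must appear on at least one link, and anything unlinked is assessment-relevant evidence of a gap.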

 

Fig. 1 –  ASPICE System Engineering Process Structure

 

2. Architecture and Design (SYS.3, SWE.2)

For system and software architecture and design, ASPICE expects traceability between:  

  • Requirements and UML/SysML model elements  
  • Safety or performance constraints and the architecture decisions addressing them 

➡️ The architecture must demonstrate how each requirement (including safety and performance constraints) is allocated to concrete components, interfaces, and functions. This ensures that design decisions fully address the requirements and are justified by them. 

What evidence is required?

  • Model elements tagged with requirements IDs, 
  • Navigation between requirements and model elements,  
  • Design documents that explicitly reference the relevant requirements. 

3. Implementation and Units (SWE.3)

In ASPICE, software units must be traceable to the software requirements they implement, and the software detailed design must be traceable to the unit test specifications. 

➡️ This ensures that each software requirement is properly realized in the code and that the corresponding unit tests verify this implementation. 

What evidence is required? 

  • Links between code units and the requirements and design elements they implement 
  • Naming conventions or metadata allowing automated linking 
  • Unit design documents or comments that reference requirement IDs  
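To show how such a naming convention can enable automated linking, here is a small sketch. The `@req SWR-nnn` comment tag, the file names, and the requirement IDs are assumptions chosen for illustration, not an established standard:

```python
import re

# Assumed convention: unit source files reference the software requirements
# they implement via a comment tag such as "@req SWR-101".
REQ_TAG = re.compile(r"@req\s+(SWR-\d+)")

SOURCES = {
    "brake_ctrl.c": "/* @req SWR-101 @req SWR-102 */\nvoid brake_ctrl(void) {}",
    "diag.c": "/* no requirement tag yet */\nvoid diag(void) {}",
}

def build_trace_map(files):
    """Map each code unit to the requirement IDs found in its comments."""
    return {name: REQ_TAG.findall(text) for name, text in files.items()}

trace = build_trace_map(SOURCES)
print(trace)  # diag.c has an empty list -> missing trace evidence
```

A script like this can run in CI so that every commit reports units with no requirement reference, turning the naming convention into continuously generated evidence.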

4. Integration, Verification, and Validation (SYS.4, SWE.4-SWE.6, VAL.1) 

For verification and validation, ASPICE expects traceability between: 

  • System and software requirements, and the corresponding tests and results at each level 
  • Integration and system tests, and the architecture elements they verify 

➡️ This ensures that every requirement is verified by the appropriate tests, that integration and system tests align with the architecture they validate, and that teams can demonstrate complete coverage. 

What evidence is required? 

  • Test specifications that reference requirements IDs 
  • Test management tools or spreadsheets linking each test to at least one requirement 
  • Coverage reports showing requirements coverage and, at higher maturity levels, code coverage. 
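As a sketch of how a requirements-coverage figure can be derived from test-to-requirement links (the test and requirement IDs below are hypothetical):

```python
# Hypothetical data: test cases and the requirement IDs they verify,
# as a test management tool or spreadsheet export might provide them.
requirements = {"SWR-101", "SWR-102", "SWR-103", "SWR-104"}
tests = {
    "TC-01": ["SWR-101"],
    "TC-02": ["SWR-101", "SWR-102"],
    "TC-03": ["SWR-103"],
}

def coverage_report(requirements, tests):
    """Return (covered percentage, sorted list of uncovered requirement IDs)."""
    covered = {req for linked in tests.values() for req in linked}
    uncovered = sorted(requirements - covered)
    percent = 100.0 * (len(requirements) - len(uncovered)) / len(requirements)
    return percent, uncovered

percent, uncovered = coverage_report(requirements, tests)
print(f"{percent:.0f}% covered, missing: {uncovered}")  # 75% covered, missing: ['SWR-104']
```

The uncovered list is exactly what an assessor will ask about, so generating it routinely, rather than once before the assessment, keeps the gap visible and actionable.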


5. Change, Problem Resolution, and Configuration Management (SUP.8, SUP.9, and SUP.10)

ASPICE support processes stipulate that every change and issue must be requested, analyzed, approved, implemented, and verified, and that each must remain traceable back to the affected work products. 

➡️ This ensures that changes and defects are understood, assessed for impact, and verified after implementation. It also ensures that each baseline or release can be linked to the exact versions of requirements, code, and tests it contains. 

What evidence is required? 

  • Change requests and defect records linked to the impacted requirements, code items, and regression test cases. 
  • Configuration baselines and release records demonstrating which versions of requirements, code, and tests belong to a particular build or release. 
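Conceptually, such a baseline record pins every artifact to an exact version. A minimal data-structure sketch, with purely hypothetical release names, IDs, and version labels:

```python
# Hypothetical baseline record: a release pinned to the exact versions of
# the requirements, code items, and tests it contains.
BASELINE = {
    "release": "SW_1.4.0",
    "requirements": {"SWR-101": "v3", "SWR-102": "v2"},
    "code": {"brake_ctrl.c": "a1b2c3d"},  # e.g. a commit hash
    "tests": {"TC-01": "v5", "TC-02": "v1"},
}

def artifact_version(baseline, kind, artifact_id):
    """Look up the exact version of an artifact within a given baseline."""
    return baseline[kind].get(artifact_id)

print(artifact_version(BASELINE, "requirements", "SWR-101"))  # v3
```

Real configuration management tools store this far more robustly, but the evidence an assessor looks for is the same mapping: release → exact artifact versions.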

 

Checklist of the Traceability Evidence Assessors Expect

Fig. 2 – A practical checklist of evidence types engineers can prepare before an ASPICE assessment. 

 

Typical Traceability Gaps in ASPICE Assessments and How to Avoid Them

Several common issues arise repeatedly during ASPICE assessments: orphan requirements or tests with no links, links created once but never updated after changes, partial coverage where only safety-related items are traced, and the classic case of traceability scattered across multiple, unsynchronized Excel sheets. 

These obstacles can be mitigated by defining a minimal but complete set of mandatory links for each artifact type. This approach should be supported by tools that embed traceability into daily workflows rather than relying on manual matrices, along with regular “traceability health checks” to identify uncovered requirements, broken links, or inconsistent coverage. 
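A "traceability health check" of the kind mentioned above can be sketched as a simple consistency pass over the known artifacts and links; the artifact IDs and link tuples below are hypothetical:

```python
# Hypothetical health check: given the set of known artifact IDs and the
# recorded links, flag links whose source or target no longer exists
# (e.g. a requirement deleted after the link was created).
artifacts = {"SYS-001", "SWR-010", "SWR-011", "TC-01"}
links = [
    ("SWR-010", "SYS-001"),  # software req refines system req
    ("TC-01", "SWR-011"),    # test verifies software req
    ("SWR-011", "SYS-099"),  # SYS-099 was deleted -> broken link
]

def broken_links(artifacts, links):
    """Return links pointing at artifacts that are not (or no longer) known."""
    return [(s, t) for s, t in links if s not in artifacts or t not in artifacts]

print(broken_links(artifacts, links))  # [('SWR-011', 'SYS-099')]
```

Running checks like this on a schedule, instead of once before the assessment, is what turns traceability from an audit artifact into a maintained property of the project data.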

 

How to Make Traceability an Ongoing Engineering Practice

1. Implementing traceability you can sustain over time 

Traceability must be maintained continuously and consistently. Creating links just before an audit is usually a sign of a flawed approach. In practice, effective and, above all, lifecycle-oriented traceability means keeping vertical links to support impact analysis and completeness checks across requirements, design, and implementation. It also means maintaining horizontal links so that every requirement is connected to at least one verification activity. 

Keep in mind that the objective is to trace what brings clear value for safety, quality, compliance, and change impact. Overly complex link types or workflows should be avoided, as they are hard to maintain under real project pressure. 

To sum up, there's a guiding principle you can follow: automate where you can, and keep the rules clear enough so they can be followed consistently. 

 

2. The role of tooling and integrations 

ASPICE is explicitly tool-agnostic. But let’s be realistic: maintaining manual traceability across hundreds or thousands of artifacts is not sustainable. Modern ALM, PLM, requirements management, and MBSE ecosystems therefore rely on integrations that create consistent links across a wide range of tools. 

These include: 

  • Linked Data technology that links artifacts across tools without importing or duplicating data 
  • Open standards such as OSLC (Open Services for Lifecycle Collaboration), based on Linked Data  
  • Integration platforms that aggregate and connect engineering data from multiple tools for a unified view of requirements, models, tests, and change information. Each artifact is kept in its authoritative source system. 

These integrations allow requirements, models, tests, and change records to be linked and traced. Connecting data directly from its native sources, rather than replicating or synchronizing it, prevents divergence between copies and ensures that teams always work with the most current and consistent information. This improves traceability, supports compliance reporting, and preserves data integrity. 

Consistent and end-to-end traceability across tools is what makes ASPICE audits more manageable and enables real, reliable impact analysis during daily engineering work.  

 

How Sodius Can Help You Build ASPICE-Compliant Traceability 

 

1. Building Cross-tool Traceability with SECollab 

Our Digital Thread Platform, SECollab, aggregates engineering data from requirements tools, modeling environments, and document repositories into one web interface, where teams can link, visualize, analyze, and review information. 

For ASPICE, SECollab builds end-to-end traceability across requirements, architecture, tests, and change records. It also lets you define and apply a consistent, ASPICE-aligned data model across tools and artifacts, ensuring that traceability links, attributes, and relationships are interpreted consistently no matter where the data originates. Versions and baselines are managed to maintain an authoritative source of truth, and you can generate traceability and compliance reports directly from live data. 

For instance, when requirements sit in IBM DOORS Next, architecture in Cameo, and test cases in a separate test management tool, SECollab lets you configure views and reports over the connected, linked data. Assessment preparation can then rely on consistent, cross-tool traceability views rather than exporting and manually merging data into spreadsheets. 

SECollab supports ASPICE-mandated review activities

Beyond evidence preparation, SECollab supports ASPICE-mandated review activities by providing a shared, live view of the artifacts under review. ASPICE explicitly mandates reviews through the Joint Review support process (SUP.4) and through the Quality Assurance (SUP.1) and Verification (SUP.2) processes. These review activities can be prepared and supported using linked requirements, models, tests, and change records. Review comments, findings, and decisions remain connected to the underlying engineering data, thus providing traceable and auditable review evidence aligned with ASPICE expectations. 

2. Displaying real-time engineering data in Confluence with OSLC Connect 

OSLC Connect for Confluence embeds live engineering artifacts from IBM ELM, Siemens Polarion, and MagicDraw (via Teamwork Cloud) directly into Atlassian Confluence without copying data.  

For ASPICE traceability, this means: 

  • Project and technical review pages in Confluence can show live requirements and test information 
  • Each link always points to the latest version of the artifact in the source tool 
  • Reviews and decisions in Confluence remain connected to the underlying requirements and tests, improving auditability 

In practice, teams insert links to requirements, Jira issues, test cases, or model elements directly into Confluence pages, where they are displayed as live previews while remaining governed by their source tools. This approach is particularly effective when Jira is used for planning, change management, or testing, as Jira issues and test assets (including Jira Xray) remain traceable to requirements and verification results.

Reviews and decisions captured in Confluence remain connected to the underlying engineering data, allowing for auditable review evidence.  

Other OSLC connectors (e.g., for Jira or Windchill) extend this approach to change and PLM data, supporting SUP.9 and SUP.10 traceability between change requests, requirements, and implementation. 

 

Final Thoughts 

ASPICE often highlights more than process gaps. It reveals where teams struggle to maintain continuity and a shared understanding of what they produce and why. Addressing these issues inevitably leads to examining the role of traceability. 

Whenever we talk with our customers and peers, we observe that traceability goes far beyond linking artifacts: it helps teams sharpen their engineering judgment. In concrete terms, traceability helps them see and understand how a change propagates, spot inconsistencies early, and, above all, make decisions with proper context. 

As systems keep growing and architectures become increasingly distributed, teams that treat traceability as a daily engineering aid rather than an audit task are more likely to adapt faster and face fewer late surprises. 

The real benefit of ASPICE emerges when traceability becomes a routine and not just another deliverable.  

 

FAQ

What traceability granularity does ASPICE expect?

ASPICE does not prescribe granularity. However, assessors expect links at the level where engineering decisions are made. For example, requirements must be traceable to the architectural elements, units, and tests that implement or verify them. Tracing at too fine a level (e.g., individual lines of code) adds unnecessary detail without improving assessment value, while too coarse a level hinders reliable impact analysis. The right granularity is the one that consistently supports completeness checks and change evaluation. 

How do assessors evaluate the “quality” of trace links? 

Assessors look for correctness, consistency, and coverage. A link must be justified (not arbitrary), persistent across baselines, and updated when either side changes. They also check for missing, circular, or outdated links, and whether traceability genuinely supports impact analysis. 

Can Agile development fit ASPICE traceability expectations?  

Yes. Agile changes the cadence, but it does not change expectations. Each increment still requires traceable requirements, design decisions, implementation items, and tests. What matters is that traceability remains current across iterations.  

What is the most common misunderstanding about ASPICE Traceability? 

Teams often assume that traceability exists only “for the audit”. In reality, traceability is evaluated because it improves engineering predictability. Weak traceability usually correlates with unclear responsibilities, inconsistent design decisions, or late defect discovery.