The Real Bottlenecks Are Silos: Why Interoperability Must Start Inside the Engineering Toolchain

By Célina Simon | 3/03/2026 | Reading time: 16 min

Conversation with our experts #3

Today, many engineering organizations invest heavily in modern toolchains. Yet they still experience fragmented data, duplicated artifacts, and late discovery of inconsistencies, including traceability gaps.

The reality is that interoperability is often treated as a tooling challenge. But what if the real bottleneck were not the technology itself? What if it lay in how engineering work is structured across domains, how data ownership is defined, and how configuration flows, or fails to flow, throughout the lifecycle?

As systems grow more complex and regulatory pressures increase, it’s becoming clear that interoperability cannot be seen as just a tool integration effort.

To explore this shift in perspective, we spoke with Thomas Capelle, President of SodiusWillert. In this conversation, he explains why silos persist despite significant investments in tools and processes, how fragmented engineering information flow affects quality and compliance, and why interoperability must be addressed at the organizational level before even becoming a tooling discussion.

This third episode of the "Conversation with our experts" series invites our readers to reconsider interoperability as a structural and strategic challenge. Enjoy the read!

An illustration for our series of articles "Conversation with our experts" episode 3 with Tom Capelle, president of SodiusWillert.

1) Interoperability is often discussed as a tooling challenge. You told me earlier that the real bottleneck was elsewhere.

From your perspective, where do silos appear the most in today’s systems and software engineering programs?

Thomas: Silos first show up in how engineering work is structured and executed across domains, not in the tools themselves. You can find them between systems engineering and software teams, between architecture and verification activities, or between engineering and quality. You can even see them between programs that reuse the same technologies or platforms, but manage their engineering data, processes, and toolchains independently.

At a practical level, these silos translate into disconnected artifacts. Requirements live in one place. Architecture models in another. Test cases somewhere else. Reviews or decisions are documented in yet another system. Each of these domains operates with great rigor, but the problem is that they work in parallel, with really limited visibility into how their work feeds downstream activities.

"When information is copied between tools, you introduce ambiguity. Which version is correct? Which artifact is authoritative? Engineers start double-checking everything, which slows down work and increases the risk of errors."

Communication friction is another clear symptom that we often see. I'm not saying engineers lack commitment or competence. They simply lack shared context. To give you a concrete example, when teams rely on static documents, exports, or screenshots to exchange information, alignment becomes fragile. Why? Because when a requirement changes, but the impact on architecture or test coverage isn't immediately visible, teams compensate with spreadsheets, manual checks, meetings, emails, etc.

I believe there’s also a structural aspect. Over the years, organizations have specialized in order to cope with scale and growing system complexity. Teams are now organized around domains, each with its own expertise, processes, and tools.

This solid expertise by domain is an excellent thing, but it also creates boundaries. It reinforces isolated tools and processes, creating local optimization while the overall system-level performance degrades. And when those boundaries are not bridged by interoperable data and shared visibility, silos become part of the operating model.

 

2) Many organizations invest heavily in modern tools and in formal processes. Why do silos persist despite these investments?

Thomas: They persist for several reasons. First, because tools are often deployed to solve domain-specific problems in isolation. For instance, a requirements team selects a tool to manage complexity. Systems architects select a modeling environment that fits their methods. And V&V teams adopt platforms aligned with certification needs. There are many patterns like these, and each decision makes sense, but again, in isolation.

And the problem is that these tools are rarely selected or deployed as part of a coherent interoperability strategy, with a strong focus on continuity across domains. Engineering data integration approaches are treated as a secondary concern, something to address later, if needed. And by the time teams realize they need end-to-end traceability or impact analysis, the toolchain is already fragmented. I may be overstating it slightly, but the reality is not far off.

Processes contribute as well. Many organizations document workflows, but those workflows essentially depend on manual handoffs. Then, they rely on scheduled reviews, document exports, or milestone checkpoints to realign information, rather than on continuous, shared data. This can work on a small scale. However, it undeniably breaks when systems grow, when teams are distributed, and when regulatory pressure increases.

There’s also a cultural factor. Engineering teams are naturally protective of their tools and practices, and I can easily get that. These tools represent years, if not decades, of investment, training, and domain expertise. So, when you frame interoperability as a requirement for everyone to converge on the same tool or platform, resistance is inevitable. What happens next? Organizations avoid the topic or postpone it.

 

3) What are the concrete consequences of poor interoperability on system quality, compliance, and delivery timelines, especially in regulated industries?

Thomas: The first impact I can think of is the loss of confidence in data. When information is copied between tools, you introduce ambiguity. Which version is correct? Which artifact is authoritative? Engineers start double-checking everything, which slows down work and increases the risk of errors.

From a quality standpoint, disconnected tools make it difficult to reason about completeness. Are requirements fully implemented and verified? Are functions verified? Are safety constraints consistently applied across models and tests? Without reliable traceability, these questions are hard to answer with proper evidence.
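Once trace links exist as data, completeness questions like these can be answered mechanically instead of by manual review. A minimal sketch in Python; the artifact IDs and link data here are invented for illustration, not taken from any specific tool:

```python
# Hypothetical traceability completeness check.
# Requirement and test-case IDs are illustrative only.

requirements = {"REQ-1", "REQ-2", "REQ-3"}

# (requirement, test case) trace links stating "verified by"
verifies = {
    ("REQ-1", "TC-10"),
    ("REQ-2", "TC-11"),
}

# A requirement is covered if at least one link points at it
verified = {req for (req, _tc) in verifies}
unverified = sorted(requirements - verified)

print("Unverified requirements:", unverified)  # REQ-3 has no link
```

The same pattern extends to any link type: implemented-by, satisfied-by, allocated-to. The hard part in practice is not the set arithmetic but getting trustworthy links out of the tools in the first place.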

In the regulated industries we work in (aerospace and defense, automotive, etc.), this becomes critical. We have to understand that compliance was never about producing documents at the end of the development lifecycle. It’s about demonstrating continuous traceability, configuration control, and change management throughout the lifecycle. Auditors expect traceable links, consistent baselines, and clear change histories. If your toolchain does not support that natively, teams spend months reconstructing evidence. And let’s face it, it’s a nightmare.

As a result, delivery timelines are severely impacted. Late discovery of gaps leads to rework, certification issues appear late in the program, and integration problems surface during system-level testing, when they are most expensive to fix. Again, none of this is caused by a lack of engineering competence, but by a fragmented information flow.

"Compliance was never about producing documents at the end of the development lifecycle. It’s about demonstrating continuous traceability, configuration control, and change management throughout the lifecycle."

4) How can organizations foster alignment and trust across domains like requirements, architecture, and testing, without forcing teams to leave their tools, especially their legacy tools?

Thomas: The starting point is to respect domain autonomy. Engineers perform best when they use tools that fit their work. Forcing convergence on a single environment can disrupt established workflows and create resistance.

True and long-term alignment comes from shared visibility. Teams need to see and understand how their work connects to others, in real time, without duplicating data. That requires linking information at the artifact level, not synchronizing copies.

This is where technical approaches like linked data and open interoperability standards play a major role. When requirements, models, and test cases remain in their native repositories, but are connected through explicit links, teams can navigate context without losing ownership. A systems engineer can trace a requirement to a model element to a test case, while each artifact remains managed by its owning tool.
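To make the linked-data idea concrete, here is a small sketch of that navigation: each artifact stays in its owning tool, identified by a URI, and only typed links between artifacts are shared. The URIs, link types, and `trace` helper are all hypothetical, shown only to illustrate the traversal:

```python
# Linked-data sketch: artifacts remain in their native repositories,
# identified by URIs; only typed links are exchanged.
# All URIs and relation names below are invented for illustration.

links = [
    ("https://req-tool.example/REQ-42", "satisfiedBy",
     "https://model-tool.example/Block-7"),
    ("https://model-tool.example/Block-7", "verifiedBy",
     "https://test-tool.example/TC-101"),
]

def trace(start, links):
    """Follow outgoing links transitively from one artifact URI."""
    chain, current = [start], start
    found = True
    while found:
        found = False
        for src, _rel, dst in links:
            if src == current:
                chain.append(dst)
                current = dst
                found = True
                break
    return chain

# Requirement -> model element -> test case, without copying any artifact
print(trace("https://req-tool.example/REQ-42", links))
```

The point of the sketch: the trace crosses three tools, yet no artifact is duplicated; each tool remains the owner of its own data, and only the links travel.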

Trust is built only if the data is consistent and up to date. If teams are assured that what they see reflects the current state of the system, they rely less on manual checks and informal communication. Reviews become more focused, and decisions are better informed.

And in practice, this approach also means defining what is authoritative. For each type of artifact, there must be a clear source of truth. Not one database, but one owner per artifact type. And interoperability should preserve that clarity, not undermine it.

 

5) Interoperability is often reduced to a tooling or integration problem. Why is that, and what mindset shift is needed?

Thomas: It’s mainly because tool-related matters are more tangible and more concrete. It’s easier to discuss connectors, formats, or APIs than organizational alignment. Yet interoperability is not just an IT activity. It’s broader than that. Interoperability is a way of structuring collaboration.

So, the appropriate mindset shift is to consider interoperability as an enabler of engineering strategy. Because such a strategy defines how an organization can manage complexity and risk. It also determines how fast teams can adapt, how safely they can introduce changes to systems, and how confidently they can demonstrate compliance. Interoperability directly influences how teams coordinate, how changes propagate, and how evidence is produced across the lifecycle.

When managers and leaders ask, “Which tool should we choose?”, they usually react to existing fragmentation. And by doing so, they address the symptoms rather than the root causes. The more relevant questions should be, “How does information flow across our lifecycle?” and “At which points do decisions rely on data from other domains?”

Once you see interoperability through that lens, open standards like OSLC (relying on the linked data technology I mentioned earlier) and digital thread architectures are no longer technical details. They become governance mechanisms. And believe it or not, that immediately changes the game. They define how data is shared, how responsibility is preserved, and how traceability is maintained.

 

6) For organizations that are aware of these issues but feel overwhelmed, where should they begin?

Thomas: They need to start small, and above all, start where pain is most visible. Redesigning the entire toolchain at once is not realistic.

A good entry point is to set up traceability around a critical concern: safety requirements, certification objectives, or change impact analysis. They should identify one end-to-end question that's currently hard to answer. Then, they need to map which tools and teams are involved in answering that question.

From there, they must focus on linking artifacts, not migrating them. That distinction is important. They should avoid copying data unless absolutely necessary. If you try to centralize everything on one platform, you risk disrupting existing workflows. And if the objective is interoperability, domains should be connected, not replaced.

"Trust is built only if the data is consistent and up to date. If teams are assured that what they see reflects the current state of the system, they rely less on manual checks and informal communication. Reviews become more focused, and decisions are better informed."

To achieve this, another practical step is to clarify ownership. They need to explicitly define which tool is authoritative for which artifact type. We noticed that many interoperability issues usually stem from conflicting assumptions about ownership and authority that were never formally defined across teams. The result is duplicated data, baseline divergence, and discrepancies discovered late.
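One lightweight way to make ownership explicit is to record it as data and check for conflicting claims, rather than leaving it to tribal knowledge. A minimal sketch; the tool and artifact-type names are hypothetical:

```python
# Hypothetical ownership declaration: one authoritative tool per
# artifact type, with detection of conflicting claims.

claims = [
    ("requirement", "requirements management tool"),
    ("architecture_model", "SysML modeling environment"),
    ("test_case", "test management platform"),
    ("requirement", "shared spreadsheet"),  # a conflicting claim
]

ownership = {}   # artifact type -> authoritative tool
conflicts = []   # (artifact type, established owner, competing claim)
for artifact_type, tool in claims:
    if artifact_type in ownership and ownership[artifact_type] != tool:
        conflicts.append((artifact_type, ownership[artifact_type], tool))
    else:
        ownership[artifact_type] = tool

print("Conflicts:", conflicts)
```

Even this trivial check surfaces the kind of never-formalized assumption Thomas describes: here, two teams each believe they own requirements, which is exactly where duplicated data and baseline divergence begin.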

Finally, organizations should invest in shared views. Engineers, managers, and quality teams don’t need the same interfaces, but they do need consistent information. So, aggregation platforms or dashboards that pull live data from multiple tools can help create that shared understanding while preserving existing workflows.

To sum up, starting small builds momentum. Once teams experience the benefits in a limited scope, it becomes easier to extend interoperability incrementally.

7) Any final thoughts for engineering stakeholders facing increasing complexity and pressure?

Thomas: Well, complexity and regulation are not going away. Same thing for tool diversity. In that context, there's no point in trying to make complexity disappear. I think that the most pragmatic approach is to manage this reality in the most coherent way possible. And interoperability, seen as a real strategic foundation, is the answer to that.

It’s simple: if you reduce silos inside your toolchain, then you reduce friction everywhere else. And that's where measurable and meaningful improvements begin.

 

🔹 As Thomas pointed out, interoperability doesn't mean trying to simplify systems. That would be futile. Above all, interoperability is about structuring information flow across domains.
🔹 By minimizing silos within toolchains, companies reinforce traceability, protect data integrity, and improve coordination at the system level, regardless of how complex systems become.

 

 

Célina Simon

Célina is a Content Marketing Writer at SodiusWillert. Prior to joining the team, she wrote a wide range of content about software technology, IT, cybersecurity, and DevOps. She has worked in agencies for brands such as Dell, Trend Micro, Bitdefender, and Autodesk.
