Why is linking data more valuable than synchronizing?

By Robert Baillargeon | 29/05/2020 | Reading time: 5 min

With products becoming more complex and embedding technology like never before, the use of and need for engineering data have drastically increased over the past decade, creating an unprecedented volume of data to manage securely.

This data needs to move forward, backward, and across the engineering lifecycle through Application Lifecycle Management (ALM) and Product Lifecycle Management (PLM) systems, from conception and design to engineering and customer delivery. Working with such a massive amount of engineering data has accentuated the importance of data accuracy, consistency, and availability across engineering domains.

When working on mission-critical systems, a minor data error can have a major negative impact on the design of the software or product, the decision-making process, and customer retention. A simple error affects every domain of the engineering lifecycle: design, requirements management, test management, and more. Sorting through all the data collected and syncing it with existing tooling while trying to maintain data integrity and quality can be a tedious process.

Fortunately, both tools and engineering processes have changed, opening new ways to achieve complete tool interoperability and data access across engineering domains. Tools now act as shared repositories, and processes are more agile and dynamic. This means interoperability can no longer be founded on the ability to exchange information; it must instead present information dynamically, as it becomes available, by linking data rather than synchronizing it.


Synchronizing vs Linking: what’s the difference?

Synchronization of engineering data

When integrating tools, synchronizing data means that copies of information are transported from one data repository to another on a periodic basis. Synchronization can resolve the problems of manually copying data: it ensures the correct value is entered into the target repository and that the value is up to date as of a known moment in time.

Data synchronization usually involves a third-party application that sits between the two repositories and is configured with the mapping between fields as well as the frequency of synchronization. Synchronization engines are typically tightly coupled to the applications being connected, meaning that a synchronization built between Tool A and Tool B cannot be reused to synchronize data between Tool B and Tool C.
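To make the mechanics concrete, here is a minimal sketch of such a point-to-point synchronization engine. The tool names, record IDs, and field names are hypothetical; real engines map tool-specific schemas:

```python
# Minimal sketch of a point-to-point synchronization engine.
# Tool names, record IDs, and field names are hypothetical.

# Source repository (Tool A) and target repository (Tool B).
tool_a = {"REQ-1": {"title": "Brake latency", "state": "Approved"}}
tool_b = {}

# The field mapping is specific to this tool pair; it cannot be
# reused to connect Tool B with a third tool.
field_map = {"title": "summary", "state": "status"}

def synchronize(source, target, mapping):
    """Copy every mapped field from source to target, creating duplicates."""
    for key, record in source.items():
        target[key] = {mapping[field]: value for field, value in record.items()}

synchronize(tool_a, tool_b, field_map)
print(tool_b["REQ-1"]["summary"])  # Tool B now holds a *copy* of the data

# Any later edit in Tool A leaves Tool B stale until the next sync run:
tool_a["REQ-1"]["title"] = "Brake latency (rev B)"
print(tool_b["REQ-1"]["summary"])  # still shows the old value
```

Note how the copy in Tool B is only correct as of the last synchronization run, which is exactly the staleness problem described below.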

Linking of engineering data

Contrary to data synchronization, data linking does not require a third-party application to make the connection between tools. A link directs the consumer to the source of the information, where the latest and most accurate version lives; linked tools inherently share the most recent data. The target application or repository does not hold a copy of the data, but rather a pointer to the information in the owning repository. With linking technologies, there is no synchronization lag and no outdated information.
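The pointer-instead-of-copy idea can be sketched in a few lines. The repository structures and URIs below are hypothetical, but they show why a link can never go stale:

```python
# Minimal sketch contrasting a link with a copy.
# Repository structures and URIs are hypothetical.

# The owning repository keeps the single authoritative record.
requirements_repo = {
    "https://alm.example.com/req/REQ-1": {"title": "Brake latency"}
}

# The consuming tool stores only a pointer (a URI), never the value itself.
test_case = {"id": "TC-7", "validates": "https://alm.example.com/req/REQ-1"}

def resolve(link):
    """Dereference the link at read time; always returns the current value."""
    return requirements_repo[link]

# An update in the owning repository is visible to the consumer immediately,
# with no sync run in between:
requirements_repo["https://alm.example.com/req/REQ-1"]["title"] = "Brake latency (rev B)"
print(resolve(test_case["validates"])["title"])
```

Because the consumer dereferences the link on every read, there is no second copy that could decay.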

The problems with replicating information

Data Validity

Groups of engineers use different sets of tools, so they often copy information to different repositories. But replicating information in multiple places has very negative effects. Copies get out of date over time, and the quality of the information decays. We no longer know what (or who) was the original source of the information (the golden source). The accuracy of the information also depends on how often it is synchronized: when was the last time that information was synchronized?

Data Security

When we copy data, we lose the ability to enforce our security model on it. Any access controls, such as authentication, that we had on the original data must be replicated in every target system to ensure that only authorized users have access. With the complexity of access control rules and the size of user models, it is impossible to verify that the correct access is granted to all users in all repositories where data is copied, especially if access controls change after the initial copy. In addition, it becomes impossible to audit all access to a piece of data as it moves across repositories, as is frequently required in defense and other high-security environments.

The solution: linking data

With linked data, the true creator of the information remains its owner. Consumers simply link to the up-to-date information, eliminating uncontrolled copies.

Linking keeps the information where it was originally created, and it allows the source repository to validate user permissions before sharing the data, thus respecting audit and security requirements.

Linking also ensures that the data is up to date. When a consumer requests the data, it is read directly from the owning repository, so the consumer always sees the current value for that piece of information.

How to link data: the concept of OSLC for linked data

The internet was built on the ability to link information across servers. By building on the fundamental standards that drove the growth of the internet, and adding structure and semantics that describe how engineering information is linked, we can link engineering data across repositories. The open standard driving this interoperability is called OSLC: Open Services for Lifecycle Collaboration.

> Related article: What is the Open Services Lifecycle Collaboration standard?

Using the OSLC standard, we can define which repositories can connect with each other, the type of engineering data exposed or consumed, and a common format for the exchange of the engineering data.

What OSLC brings us is not only the ability to link data, but to do it in a standardized way that is independent of the endpoints. An application can therefore implement OSLC interfaces once and link data across all other OSLC-compatible applications, without writing a new interface for each pair of tools.
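Concretely, an OSLC link is simply an RDF property on a resource whose value is the URI of the linked resource in its owning repository. Here is a hedged sketch in Turtle notation, using the standard OSLC Quality Management vocabulary but with hypothetical server URIs:

```turtle
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix oslc_qm: <http://open-services.net/ns/qm#> .

# A test case in a QM repository linking to a requirement that lives in a
# separate RM repository: no copy of the requirement, only its URI.
<https://qm.example.com/testcases/TC-7>
    dcterms:title "Brake latency verification" ;
    oslc_qm:validatesRequirement <https://rm.example.com/requirements/REQ-1> .
```

Because the link is a plain URI, the consuming tool dereferences it at read time, and the owning repository enforces its own access control before answering.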

SodiusWillert’s solutions for linked data

At SodiusWillert, linking data has been a core practice of our tools for years. What is unique now is linking across tools, across repositories, and across configurations. It is the natural next step as we evolve away from the practice of synchronization and exchange.

The OSLC Connect product range accurately links your data from your favorite ALM and PLM tools to the engineering lifecycle. The SodiusWillert OSLC Connect offering allows you to connect your ALM and PLM applications to IBM Engineering Lifecycle Management tools (IBM DOORS, DNG, RTC, RQM). With OSLC Connect, it is now possible to easily link your engineering repositories between your favorite engineering tooling and guarantee that everyone is using accurate information across your engineering teams.


Robert Baillargeon

Robert Baillargeon is the Chief Product Officer at SodiusWillert. Before his role at SodiusWillert, Robert led engineering and research teams developing systems and deploying tools in the automotive industry. Robert is a provisional ASPICE assessor and holds a Master of Science degree in Software Engineering from Carnegie Mellon University.


