Team for Capella Guide

Disclaimer: This documentation has been extracted from the Team for Capella Guide, which is available in the Team for Capella client from the menu Help > Help Contents. Some links referencing topics from Capella, Sirius or other components with documentation in the embedded help will not work.

Contents

1. Introduction to Team for Capella
1.1. Overview
2. Release Notes
3. User Guide
3.1. Overview
4. Project Administrator Guide
4.1. Overview
4.2. Jenkins Configuration
5. System Administrator Guide
5.1. Overview
5.3. Server Configuration
5.3.7. Audit Mode
6. Developer Guide
6.1. Overview
7. TEAM FOR CAPELLA Software User Agreement

1. Introduction to Team for Capella


1.1. Overview

Team for Capella is an add-on that allows users to collaborate on remotely shared models and representations. For this collaboration between users to operate smoothly, Team for Capella relies on the following features:


1.2. Roles Differentiation


1.3. Rationale and Concepts

  1. Rationale and Concepts
    1. History: Collaborative Work Based on SCM Capabilities
    2. Team for Capella Solution
    3. Shared Repositories and Configuration Management

History: Collaborative Work Based on SCM Capabilities







Relying on an SCM tool to manage concurrent accesses is possible, but clearly limited.

The main reason is that the needs for managing model versions (the genuine objective of an SCM tool) and concurrent accesses are deeply different:

  • Model versioning: The need is to identify key intermediate baselines (for review, publication, validation, etc.), manage branches allowing several versions to be maintained in parallel (development, maintenance, etc.), identify in which version a PCR is fixed, etc. Fragmentation of models should be limited to what has to be versioned.
  • Concurrent accesses: The need is a granularity as fine as possible. From the end user's point of view, the locking / unlocking mechanisms have to be seamless (i.e. as transparent as possible) so that they do not interfere with the engineering activity. For example, there is often no need to associate each individual model modification with a UCM activity.

Here, fragments are created to manage concurrent accesses, and no longer because their content has to be versioned.

The overall idea of the Team for Capella solution is to separate the management of these two needs:

  • SCM tools are perfect for managing versions.
  • Team for Capella solution only focuses on managing concurrent accesses.



Team for Capella Solution

The Team for Capella solution consists of 3 products:

  • Team for Capella Client: it is a standard Capella client with additional functionalities:
    • to work on a shared remote model,
    • to perform administrative tasks on the Team for Capella Server:
      • Import/Export a model from/to the Team for Capella Server,
      • Manage access rights,
      • Manage locks,
  • Team for Capella Server: manages the repository, the locks and the access rights,
  • Team for Capella Scheduler: a Jenkins server can be used to manage the Team for Capella Server:
    • Start/Stop the Team for Capella Server,
    • Do periodic imports of models and backups of the server’s database.









Shared Repositories and Configuration Management

2. Release Notes


2.1. What's new and API changes

The release note is updated for each new version and contains descriptions of changes visible to users and of new or modified APIs available to developers. The change log can also be found online: Team for Capella Change Log

  1. What's new and API changes
    1. Changes in Team for Capella 6.1.0 (from 6.0.0)
      1. UX enhancement
      2. Packaging, installation and deployment
      3. Server
      4. Tools
    2. Changes in Team for Capella 6.0.0 (from 5.2.0)
      1. UX enhancement
      2. Packaging, installation and deployment
      3. Server changes
      4. Tools
    3. Changes in Team for Capella 5.2.0 (from 5.1.0)
      1. UX enhancement
      2. Locks management
      3. Packaging, installation and deployment
      4. Server
      5. Tools
    4. Changes in Team for Capella 5.1.0 (from 5.0.0)
      1. UX enhancement
      2. Scheduler
      3. Server
      4. Tools
      5. Experimental
    5. Changes in Team for Capella 5.0.0 (from 1.4.2)
      1. UX enhancements
      2. Packaging, installation and deployment
    6. Changes in Team for Capella 1.4.2 (from 1.4.1)
    7. Changes in Team for Capella 1.4.1 (from 1.4.0)
      1. Change Management
      2. Scheduler
      3. Changes in com.thalesgroup.mde.melody.collab.importer
      4. Server / Repository configuration
      5. Compatibility with other add-ons
    8. Changes in Team for Capella 1.4.0 (from 1.3.1)
      1. Partial support for internationalization
      2. Changes in com.thalesgroup.mde.melody.collab.importer
    9. Changes in Team for Capella 1.3.1 (from 1.3.0)
      1. Changes in com.thalesgroup.mde.melody.collab.importer
      2. Changes in the Team4Capella Scheduler
      3. Repository Information Properties Page
    10. Changes in Team for Capella 1.3.0 (from 1.2.1)
      1. Representation lazy loading
      2. xmiids resource usage has been removed
        1. Changes in com.thalesgroup.mde.cdo.emf.transaction
        2. Changes in com.thalesgroup.mde.melody.team.xmisupport
      3. Diff/Merge in Team for Capella
      4. Audit Mode
      5. User Profile
      6. Change Management
    11. Changes in Team for Capella 1.2.1 (from 1.2.0)
      1. Uid can be used instead of xmi:id to identify a representation
      2. Diff/Merge in Team for Capella in case of deactivating (by default) the XMIID synchronization
      3. Durable locking is now disabled by default
    12. Changes in Team for Capella 1.2.0 (from 1.1.x)
      1. Changes in com.thalesgroup.mde.cdo.emf.transaction
      2. Viewpoint native/legacy CDO mode
      3. CDO 4.6

Changes in Team for Capella 6.1.0 (from 6.0.0)

Compatibility with Capella 6.1.0

UX enhancement

Packaging, installation and deployment

Server

Tools

Changes in Team for Capella 6.0.0 (from 5.2.0)

Compatibility with Capella 6.0.0

UX enhancement

Packaging, installation and deployment

Server changes

Tools

Changes in Team for Capella 5.2.0 (from 5.1.0)

Compatibility with Capella 5.2.0

UX enhancement

Locks management

Packaging, installation and deployment

Server

Tools

Changes in Team for Capella 5.1.0 (from 5.0.0)

Compatibility with Capella 5.1.0

UX enhancement

Scheduler

Server

Tools

Experimental

As experimental features:

Changes in Team for Capella 5.0.0 (from 1.4.2)

Compatibility with Capella 5.0.0

UX enhancements

Packaging, installation and deployment

Changes in Team for Capella 1.4.2 (from 1.4.1)

Changes in Team for Capella 1.4.1 (from 1.4.0)

Change Management

Scheduler

Changes in com.thalesgroup.mde.melody.collab.importer

Server / Repository configuration

Compatibility with other add-ons

Changes in Team for Capella 1.4.0 (from 1.3.1)

Please also refer to Sirius Release Notes, Capella Release Notes and Sirius Collaborative Mode Release Notes

Partial support for internationalization

Team for Capella 1.4.0 introduces partial support for internationalization: all literal strings from the runtime part of the Team for Capella add-on are now externalized and can be localized by third parties by providing the appropriate "language packs" as OSGi fragments. Note that this does not concern the server components, the user profile component, the maintenance and importer applications, the administration components or the parts of the UI inherited from Eclipse/EMF/GEF/GMF/Sirius/CDO and other libraries and frameworks used by Team for Capella.

Some API changes were required to enable this. Most breaking changes concern the plug-in/activator classes from each bundle. They are:

Additional non-breaking changes:

Changes in com.thalesgroup.mde.melody.collab.importer

Changes in Team for Capella 1.3.1 (from 1.3.0)

Changes in com.thalesgroup.mde.melody.collab.importer

Arguments:

  • -exportCommitHistory: Whether the Commit History metadata should be exported (default: true). If the value is false, all other options about the commit history will be ignored.
  • -includeCommitHistoryChanges: Imports the detailed changes of the commit history for each commit (default: false). This option is applied to all kinds of export of the commit history (XMI, text or JSON files).
  • -importCommitHistoryAsJson: Imports the commit history in a JSON file format. The file has the same path as the commit history model file, but with json as extension.
  • -overrideExistingProject: If the output folder already contains a project with the same name, this argument allows removing the existing project.
  • -logFolder: Defines the folder where logs are saved (default: the -outputFolder value). Note that this folder needs to exist.
  • -archiveProject: Defines whether the project should be zipped (default: true). Each project will be zipped in a separate archive suffixed with the date.
  • -outputFolder: Defines the folder where projects are imported (default: the workspace). Note that this folder needs to exist.
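
As an illustration, these arguments are passed on the importer application command line. The sketch below is an assumption, not the official invocation: the application id is simply derived from the bundle name above, the connection arguments are left as a placeholder, and all paths and values are hypothetical (refer to the Importer Parameters chapter for the authoritative list):

  capella.exe -nosplash -consoleLog -application com.thalesgroup.mde.melody.collab.importer
    <connection arguments: hostname, port, repository name, credentials (see the Importer Parameters chapter)>
    -outputFolder C:\T4C\imports
    -logFolder C:\T4C\logs
    -archiveProject true
    -exportCommitHistory true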

Changes in the Team4Capella Scheduler

Repository Information Properties Page

The properties page (contextual action) on aird files of shared modeling projects has a tab named Repository Information. It presents the connected repository information (location, port and name) as well as a list of users connected to the same repository.

Changes in Team for Capella 1.3.0 (from 1.2.1)

Please also refer to Sirius Release Notes, Capella Release Notes and Sirius Collaborative Mode Release Notes

Representation lazy loading

A new mode allowing lazy loading of representations is activated for shared modeling projects. It translates into much faster project opening because none of the representation data are loaded. The data of a representation are loaded only when the application requires them. Examples: open representation, copy representation, export representation as image, etc. Warning: Passing from one mode to the other requires cleaning the database. Indeed, the lazy loading of representations is linked to the fact that the representations are split in many resources in the database. Nevertheless, the application will work properly with a mix of split and non-split representations.

Technically, the lazy loading of representations is activated with the preference CDOSiriusPreferenceKeys.PREF_CREATE_SHARED_REP_IN_SEPARATE_RESOURCE set to true by Team for Capella. It can be disabled with the use of a system property: -Dcom.thalesgroup.mde.cdo.emf.transaction.enableRepresentationLazyLoading=false. The representation content is stored in a dedicated srm shared resource. Note that representations in local Capella projects are still stored in the aird resource.
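
For instance, to disable the lazy loading of representations, the system property mentioned above can be added to the capella.ini file after the -vmargs line (a minimal sketch; the other -vmargs entries depend on your installation and are omitted here):

  -vmargs
  ...
  -Dcom.thalesgroup.mde.cdo.emf.transaction.enableRepresentationLazyLoading=false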

xmiids resource usage has been removed

uid is a new attribute on Sirius elements that are serialized in aird (and srm) resources. It is used as the technical id for any element from the Sirius model that is stored in the aird (or srm) resources, except for GMF notation elements. The old xmiids shared resource is no longer used. Its role was to ensure that the xmi:id of elements was kept after export/import on the Team for Capella server.

Changes in com.thalesgroup.mde.cdo.emf.transaction

Changes in com.thalesgroup.mde.melody.team.xmisupport

Diff/Merge in Team for Capella

The limitation introduced in Team for Capella 1.2.x is no longer present. When comparing a local project to a connected project, or two connected projects, no differences will be shown between representations if they are identical.

Please have a look at Capella Model Diff/Merge Documentation for more details.

Audit Mode

The Audit mode is now active by default in the Team for Capella server. This mode aims to keep track of all versions of each object in the server database. It is required, for example, for comparing different versions of the model.

Please have a look at Audit mode for more details.

User Profile

User profile resource permissions can now use a regular expression containing spaces. If you used the %20 encoding to work around this problem, you must replace it with a standard space to make it work with the new version.

Change Management

The Commit History View has been improved to display a list of commits related to the selection and to display the impacted elements of one or several selected commits. See the Commit View section in the user documentation of Sirius Collaborative Mode for more details about those changes: Commit History View.

The commit description dialog box is displayed if there is a warning associated with the commit description. A warning occurs when:

Please have a look at Change Management for more details.

Changes in Team for Capella 1.2.1 (from 1.2.0)

Uid can be used instead of xmi:id to identify a representation

Uid can be used as the technical id for representations when the XmiId synchronization is disabled.

Please have a look at Capella release note for more details about the usage of uid and the migration of models from previous versions to update uids.

Diff/Merge in Team for Capella in case of deactivating (by default) the XMIID synchronization

Since XmiID is no longer used to identify representations and their elements when performing a Diff/Merge operation between 2 Capella projects, the internal graphical elements of two representations can technically no longer be matched. This has an impact when comparing and merging 2 projects in a Team environment:

Please have a look at Capella Model Diff/Merge Documentation for more details.

This XmiidsResource creation during export and its synchronization mechanism are now disabled by default. The system property "-Dcom.thalesgroup.mde.cdo.emf.transaction.disableXmiidsSynchronization=false" allows re-enabling it if needed.

Please have a look at VM Arguments > Disable XmiId synchronization for more details.

Durable locking is now disabled by default

The durable locking mechanism is now disabled by default.

Please have a look at Durable locks management view for more details.

Changes in Team for Capella 1.2.0 (from 1.1.x)

Changes in com.thalesgroup.mde.cdo.emf.transaction

Viewpoint native/legacy CDO mode

Please have a look at Release note for Sirius Collaborative Mode for more details.

CDO 4.6

Team for Capella is now based on CDO 4.6 (previous versions used CDO 4.4).


2.2. Metamodel changes

  1. Metamodel changes
    1. Changes in Team for Capella 6.x (from 5.x)
      1. Metamodel changes in Capella
    2. Changes in Team for Capella 5.x (from 1.4.x)
      1. Metamodel changes in Capella
    3. Changes in Team for Capella 1.4.x (from 1.3.x)
      1. Metamodel changes in Capella
    4. Changes in Team for Capella 1.3.1 (from 1.3.0)
      1. Metamodel changes in Capella
    5. Changes in Team for Capella 1.3.0 (from 1.2.1)
      1. Metamodel changes in Capella
    6. Changes in Team for Capella 1.2.0 (from 1.1.x)
      1. CDO generation mode for feature delegation
      2. Metamodel changes in Capella

Changes in Team for Capella 6.x (from 5.x)

Metamodel changes in Capella

Please have a look at the Capella release notes.

Changes in Team for Capella 5.x (from 1.4.x)

Metamodel changes in Capella

Please have a look at the Capella release notes.

Changes in Team for Capella 1.4.x (from 1.3.x)

Metamodel changes in Capella

Please have a look at the Capella release notes.

Changes in Team for Capella 1.3.1 (from 1.3.0)

Metamodel changes in Capella

Please have a look at the Capella release notes.

Changes in Team for Capella 1.3.0 (from 1.2.1)

Metamodel changes in Capella

Please have a look at the Capella release notes.

Changes in Team for Capella 1.2.0 (from 1.1.x)

CDO generation mode for feature delegation

The default strategy for CDO generation concerning the Capella meta-model has been changed from reflective feature delegation to dynamic feature delegation.

Metamodel changes in Capella

Please have a look at the Capella release notes.

3. User Guide


3.1. User Overview

Team for Capella provides its users with additional functionalities on Capella projects, allowing them to collaborate easily thanks to:


3.2. Export/Import to/from the Team for Capella Server

  1. Export/Import to/from the Team for Capella Server
    1. Export
    2. Import
    3. Dump to local

Export

First, import a file-based model into a workspace. The model can be fragmented or not.

On the Capella Project containing the model, use the contextual menu to launch the Export wizard.

Choose " Capella Project to Remote Repository"

The "Export model to repository" wizard opens. The repository information is initialized with the default settings defined in the Preferences.

Before continuing, the server information has to be verified. To do so, click on "Test connection".

A login dialog pops up. Enter valid login and password (see Server Administration for more information about User management).

If the identification is successful, the " Finish" button becomes active.

If you do not click on " Finish" but on " Next", the following options are available:

If you click " Next" again, you will be able to choose the images you want to export to the repository in this new wizard page.

Refer to Export images to the server when exporting the project for more details.


Then, after having clicked Finish, a progress bar is displayed.


When the export is completed, a dialog shows the result of the process by listing the newly created or overridden resources, as well as the not found, already existing, or non-discovered resources.
Note that the "discover" mode is not yet implemented, but this dialog informs the user about what has been done during the export.



Import

In the Capella Project Explorer, use the contextual menu to launch the Import wizard.

Choose " Capella Project from Remote Repository"

A wizard opens. The repository information is initialized with the settings defined in the Preferences. This information can be overridden. Before continuing, the server information has to be verified. To do so, click on "Test connection". Follow the same login instructions as when exporting the model. When the test is successful, the "Next" button becomes active.





A second wizard page proposes to choose the model to import (a shared repository can hold several models).

Optionally change the name of the Capella Project that is going to be created.

The behavior of the wizard can be configured with the following options:

If you click on Next you will be able to choose options about which images will be imported.

Refer to Import images from the server when importing the project for more details.

Images that already exist on the workspace will be overridden automatically.

A progress bar appears.

When the import is completed, a dialog shows the result of the process by listing the newly created or overridden resources, as well as the not found, already existing, or non-discovered resources.
Note that the "discover" mode is not yet implemented, but this dialog informs the user about what has been done during the import.

Once the import is finished, the imported model is automatically opened.

The model files can then be pushed back to Git if necessary.



Dump to local

This command will dump the connected project into a new local Capella project. The local project will contain only the already loaded representations.

It is available in the contextual menu on the aird file of an opened connected project.

This command is useful if you encounter a save failure issue. You can then use the tool to get a new Capella project, compare it with the project on the server and merge the differences.


3.3. Capella Connected Project

  1. Capella Connected Project
    1. First Connection
    2. Connection Using an Existing Connection Project
    3. Overriding Sirius refresh preferences for a particular connected project
    4. Tips and Tricks
      1. Secure Storage (Remember me) and Roaming User Profiles
      2. How to Clear the Secure Storage

First Connection

Connecting to a remote model is similar to opening a file-based model. The result of a connection is an opened model ready to be modified.

Using the contextual menu on the Capella Project Explorer, click on New / Capella Connected Project

A dialog pops up, asking to specify the information of the remote repository holding the model. By default, these fields are initialized with the values set in the Preferences.

At this stage, the server information has to be verified. To do so, click on "Test connection".

A login dialog pops up. Enter valid login and password (see Server Administration for more information about User management).

By checking "Remember me", you have the option to store your user name and password in Eclipse's Secure Storage. If you do so, your user name and password will not be asked for on future connections.

Once the connection is verified, click on "Next". Select one of the models held in the repository.

The connection will create a new Capella project to hold the local proxy for the remote model. A suffix like ".team" is added by default at the end of the project name, in order to distinguish local and shared projects at first glance.

Click on "Finish". Depending on the size of the model, the duration of the connection may vary.

Warning: it takes longer than opening a file-based version of the same model.

The connection can fail, for example if a Viewpoint used by the remote model is missing on client side. In this specific case, the following error will be displayed:

Known issue: if this error occurs, it is advised to restart Capella before trying to reconnect (even if you want to connect to another model for which there are no missing Viewpoints).

If the connection is successful, the model is opened in the Capella Project Explorer. Note there is no semantic file ".capella". The ".aird" file contains both information about the remote model and the local diagrams on this model.

At the end of a working session, the model can be closed exactly like a file-based model.

Connection Using an Existing Connection Project

When a connected project already exists, connecting again simply requires a double click on the ".aird" file. If necessary, the login dialog will be displayed.

Overriding Sirius refresh preferences for a particular connected project

Both "Automatic Refresh" and "Do refresh at representation opening" can be specified for a given aird. Refer to Sirius documentation: Preference associated to the aird file

For any new local Capella project, the preferences are not overridden for the aird file and the preference values are those displayed in Window/Preferences/Sirius.

For a connected project, in order to define specific refresh preferences, a page has been added in the "Capella Connected Project" wizard to allow users to override the refresh preferences for the local aird of the connected project being created. By default, "Enable project specific settings" is checked and both "Automatic Refresh" and "Do refresh at representation opening" preferences are set to false.

It is nevertheless possible to change the default value using the preference fr.obeo.dsl.viewpoint.collab/PREF_ENABLE_PROJECT_SPECIFIC_SETTINGS_DEFAULT_VALUE. If set to false, then, by default, "Enable project specific settings" is unchecked.
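
For example, assuming the client is configured through the standard Eclipse preference customization mechanism (an assumption of this sketch, not stated above), this default could be changed with a plugin_customization.ini file passed at startup:

  # plugin_customization.ini (hypothetical file)
  fr.obeo.dsl.viewpoint.collab/PREF_ENABLE_PROJECT_SPECIFIC_SETTINGS_DEFAULT_VALUE=false

  capella.exe -pluginCustomization plugin_customization.ini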

Note: The preference values are not shared between two connected users. The preferences are associated to the local aird of the "Connected project" but not with the shared aird.

Tips and Tricks

Secure Storage (Remember me) and Roaming User Profiles

When "Remember me" is used, the login/password couple is stored in an encrypted file (located here: %USERPROFILE%\.eclipse\org.eclipse.equinox.security\secure_storage).

The key used to encrypt this file is generated and depends on the computer, the current Windows account and the Team for Capella architecture (32 bits or 64 bits).

So by default, this file can only be decrypted and used on the same computer, Windows account and Team for Capella architecture (32 bits or 64 bits) as those used to create the file.

Because of this, it is not possible to use the Secure Storage feature with roaming user profiles.

Example: if the file was created using "Computer1"/User Account/Team for Capella 32 bits, it won’t be possible to reuse the Secure Storage with "Computer2" or with another user account or with Team for Capella 64 bits.

In the cases described above, the following error will appear in the "Error Log":

A workaround for this problem is to provide, by configuration, the key to use to encrypt the Secure Storage file. To do that:

  1. Create a text file and put a key in it (you are free to choose any key),
  2. Add the following parameter in the capella.ini file (before -vmargs):
    -eclipse.password <path to your key file>

  3. Then clients must clear their existing Secure Storage (if any) by using the procedure below and restart Team for Capella.
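
For illustration, the relevant part of the capella.ini could then look as follows (a sketch only: the key file path is hypothetical, and each argument and its value are placed on their own line, as is usual for Eclipse .ini files):

  -eclipse.password
  C:\Team4Capella\secure_storage_key.txt
  -vmargs
  ...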

How to Clear the Secure Storage

In the following cases, it could be useful to clear the Secure Storage:

To clear the Secure Storage:

Note: It is not possible to just reset a stored username and/or password for a single repository. By performing these actions, the entire password store will be deleted and you will then have to re-enter your username and password for each repository, the first time you wish to use it.


3.4. Aird Fragments Connection

  1. Aird Fragments Connection
    1. Introduction
    2. Model Preparation
    3. Restrictions
    4. Connect to Airdfragments
    5. Diagrams Moving
    6. Airdfragments Management

Introduction

The purpose of this functionality is to be able to connect to airdfragments in order to work with the whole semantic model but only a subset of representations (diagrams or tables).

It can be useful when working with a big model to shorten connection time and memory consumption.

Model Preparation

The model to prepare must be a local model in file format (do an import if necessary). The session must be open.

2 actions can be used:

It must be added in the project (in the project root or in a directory of the project, "fragments" for example).

Model organization after an execution of this action:

Restrictions

- The .airdfragment file path must not contain spaces.

- The project containing the airdfragments must not host several semantic models (only one semantic model is allowed).

Connect to Airdfragments

When the model is well organized, export it to the server.

You can create connection projects to several .airdfragments thanks to the dedicated wizard:



The second page of the connection wizard allows selecting the .airdfragments to use.

Connection to fragments belonging to different models is not allowed since it does not make sense.



Connections to fragments example:

As before, it is still possible to connect to the .aird; all diagrams will then be accessible.

Diagrams Moving

It can be necessary to move diagrams between the aird and airdfragments, or between two airdfragments.

This can be done on a local model or on a remote model (the source and destination resources must be visible from the same connection project).

To move a diagram to another resource, use the "Move Diagrams" sub menu:

In addition, to ease diagrams management, the "Representations per resource" item can be useful. To display it, uncheck it in the "Customize View…" dialog.

Airdfragments Management

airdfragments can only be managed in a local model (do an import if needed).

Do not use the Eclipse delete command directly; all content would be lost.




3.5. Working on a Remote Model

  1. Working on a Remote Model
    1. Locks and Update on Model Elements
    2. Locks and Updates on Diagrams
    3. Local vs Shared Diagrams
    4. Explicit Locks
    5. Dissociated local Saves and Commits
    6. Commit Descriptions and History
    7. Session Details Properties Pages

Several users access the model held by the Team for Capella Server repository through their Team for Capella Client. The Capella project on the client side only consists of one ".aird" file, which is both a proxy towards the shared repository and a container for the local diagrams.



Fundamental principles

  • The semantic model is always integrally shared
  • Representations (Diagrams, Tables, Trees) can be shared on the repository or can be local to one user
  • Locks are taken automatically as soon as an element or a representation is modified.
  • When a user has a lock (displayed with a green lock decoration) he can edit the element (rename an attribute, add/remove sub-elements). The other users cannot edit this element (displayed with a red lock decoration).
  • Locks are automatically released when committing.
  • By default, any Save action triggers a commit.
  • It is possible for a user to set explicit locks (i.e. force the lock of an element or set of elements before modification). Explicit locks are not released when saving the modifications. The elements stay locked until the user explicitly unlocks them.
  • From a diagram editor, modifying an element property visible on the diagram will lock the diagram.
  • Locking a diagram does not lock the semantic elements presented on this diagram.
  • Locking a diagram prevents others from modifying this diagram, but does not prevent other users from modifying non-locked semantic elements represented on this diagram.
  • Adding an element A in an element B requires a lock on both A and B
  • Newly created elements are not locked



Locks and Update on Model Elements

Red locks indicate another user is currently modifying the element (this modification might be a deletion). The identification of the user holding the lock is added between brackets as a suffix.

Green locks indicate the current user has reserved or modified the current element.

Below is an example of the decorations in the Project Explorer.

When an element is locked by another user, its editor dialog is still accessible but cannot be modified (all fields are disabled).

Lock decorations are visible in any View of Capella, such as the Semantic Browser, the selection dialogs or the delete confirmation window.







On diagrams, the semantic locks are represented on the graphical artifacts (containers, nodes, ports, links) representing the locked model elements.

Updates of modified semantic elements are performed automatically.

Locks and Updates on Diagrams

Two users cannot work simultaneously on the same diagram. As soon as a user modifies a diagram, the whole diagram is locked for the other users.

When creating, cloning or moving a representation, the associated semantic target element is automatically locked. This is useful to avoid a situation where, on a connected project, the current user saves the newly created representation with a null target because another user deleted the target just before the current user saved. Note that a warning is displayed in the dialog box asking the user to save as soon as possible in order to release the lock.
This behavior can be deactivated using the preference CDOSiriusPreferenceKeys.PREF_LOCK_SEMANTIC_TARGET_AT_REPRESENTATION_LOCATION_CHANGE with a false value.


This behavior has a particular impact when using User Profile. If the user has only a read-only right on the semantic element, he cannot create/clone/move a representation on it.

The lock diagram decorations are visible both on the tab bar of the diagram editor and in the Project Explorer.



When a diagram is locked by another user:

However, even though another user locks a diagram, semantic elements appearing on this diagram can still be modified by anyone. This is the case, for example, of the Function "Acquire Images" in the above example. The opposite is true as well: one can have a green lock on a diagram even though some semantic elements appearing on this diagram are locked by other users.

Once the user modifying a diagram saves and commits his modifications, the diagram is not locked anymore. For the other users currently displaying the diagram, there are two alternatives:

After the refresh is performed, the new layout becomes visible.

Note: in the above example, one semantic element ("Acquire Images") was being renamed by the user. The consequence is that the refresh induces a new change (and thus a green lock) on the diagram to reflect the label update.

In Capella, the background of diagrams always represents a semantic element (which is the element under which the diagram is located in the Project Explorer). In case this semantic element is locked (hereunder the Root System Function), a specific decorator is put on the background of the diagram. This means for example that even though the diagram is locked for edition (green lock), adding a new element on the background of the diagram is not possible.



Local vs Shared Diagrams

Diagrams can be local or shared in the repository. Shared diagrams have specific decorators.

When creating a new diagram, a dialog pops up asking the user to choose whether the diagram should be shared (cdo://) or local (platform:/resource…).

It is possible to move diagrams from the repository to the local project and vice versa.

From the local project to the shared repository.


From the repository to the local project.

Note that there is a warning when the selected target is local.



Important note: semantic elements created on a local diagram are instantaneously shared with other users as soon as a commit is performed. Local diagram does not mean local elements.

Explicit Locks

It is possible to explicitly lock an element (or a set of elements) by using the contextual menu.

Note that only semantic elements are locked. Diagrams can also be locked explicitly, but individually.

The behavior of locks set manually is a bit different from that of automatic locks: while automatic locks are systematically released at each commit, explicitly locked elements have to be unlocked explicitly as well.

Consider the following use case



Dissociated local Saves and Commits

Currently not available.



Commit Descriptions and History

A preference allows specifying whether a description is required when committing or not. If this option is enabled, the following dialog is displayed on each commit action.

Dialog buttons:

Another preference allows the user to pre-fill the commit description using various strategies. The default strategy exploits the previous commit description, while the Mylyn strategy relies on the content of the currently-active, non-completed Mylyn task using the template defined in the Mylyn > Team preferences. Below is an example of such a template:

${task.description}
User Information:
Key: ${task.key}
URL: ${task.url}

For more information about these templates, refer to the Mylyn documentation.

A dedicated view allows displaying the commit history. This window can be opened with the contextual menu called on the semantic model.

This view is particularly useful to monitor the current changes on the shared model. The objective of this history is also to be attached as a change log when pushing a file-based version of the model back to Git.

This view is divided into two parts:

The Commit History View contains several buttons to modify the context of the commits list, filter those commits or modify the changes viewer tree layout/content.

In particular, a "Filter" button is present in the Commit History view toolbar and allows the user to filter the content of the impacted elements.

This button is represented by the following icon:

By activating or deactivating this button, the user can apply or remove the selected filters.

Selected filters can be customized via the menu icon > Filters...

A new selection dialog is opened. From this dialog, the user can select the filters to activate for the Commit History view. The filters provided in this selection dialog are the same as the filters available in the Capella Project Explorer.

Session Details Properties Pages

The properties page (contextual action) on aird files of Capella connected projects has a tab named Collaborative Session Details. It presents the repository information (location, port and name) and information about connected users and locked elements for this connected project. For more details, refer to Collaborative Session Details of the Sirius Collaborative Mode user documentation.

The properties page (contextual action) on aird files of local or connected Capella projects has a tab named Sirius Session Details. It provides a lot of useful information about the project (used viewpoints, information about representations and Capella models). For more details, refer to Sirius Session Details of the Sirius user documentation.


3.6. Use Images in Remote Models

  1. Use Images in Remote Models
    1. Manage images on remote repository
      1. Manage images for an existing remote project
        1. Uploading images from file system
        2. Uploading images from the workspace
        3. How to Change an Image Already on the Server
      2. Export images to the server when exporting the project
        1. Export images wizard page
        2. Images used before exporting the project to the server
      3. Import images from the server when importing the project
        1. Import images options
      4. Images on the Team for Capella Server: What to retain in few words
    2. Images used in diagrams
    3. Images used in Capella description editor

Images can be used

In remote models, only images that exist on the repository can be used. Images from the workspace or from a local directory must be uploaded to the server in order to be used in a remote model.

Manage images on remote repository

Manage images for an existing remote project

Once the project is exported, it is still possible to manage images on the server with the Manage Images from Remote Server dialog.
This dialog is available from the contextual menu on a shared aird file or an open connected project.

Uploading images from file system

Uploading images from the workspace

It is also possible to upload whole sets of images by selecting projects, folders or single images from the workspace.

The hierarchy of uploaded images (projects and folders) is identical to the selection in the workspace.

How to Change an Image Already on the Server

An existing image can be overridden on the server. All the diagram elements in shared diagrams that use the replaced image will be automatically updated.

Export images to the server when exporting the project

Export images wizard page

On the Export project wizard, you will be able to choose the images you want to export to the repository in this new wizard page.

The images used by the exported projects will be automatically exported to the repository to keep the consistency of the shared representations. This means that if you explicitly use an image in one of your projects to export, this image will be exported even if you didn't select it.

The left panel shows the existing images in the open workspace projects, and the right panel shows the images you have chosen to export from the left panel. The " Override already existing images" checkbox allows you to override existing images on repository that have the same path as those added to the right panel.

Images in JPEG, JPG, PNG and SVG format are supported.
The maximum size of images uploaded through the export wizard is 10 MB per image. Larger images are not displayed in the selection UI and cannot be exported to the server. This value can be changed by overriding the preference PREF_MAX_KILOBYTES_IMAGE_SIZE.

If the referenced images do not exist when exporting the project to the server, an error appears in the "Error Log" listing all missing images.

Open the error details to see all affected images:

If an image that has been exported to the server is afterwards no longer used in any remote diagram, then this image will not be imported when importing the project if you choose the Import only used images option in the import wizard.

Images used before exporting the project to the server

When a model is exported to the Team for Capella Server, referenced images which are available in the workspace will be exported along with the model. In the local project, it is important to select images in the right project because it will drive the way the image is recreated when importing the project locally (after it has been exported to the server).

Local project where images, image1 and imageLib1, have been used as workspaceImage before exporting:

Projects after exporting then importing the remote project:
Note that only used images have been exported then imported

Import images from the server when importing the project

Importing images is done when importing a remote project in the workspace using the Team for Capella import wizard.

When importing the remote project locally, the imported images will be created in local projects that correspond to their location on the server.

The import wizard allows you to choose from 3 different options for importing images:

Images that already exist on the workspace will be overridden automatically.

Import images options

Starting from a local project, all images in the workspace have been exported to the server with the project.
Suppose that /ImageLibrary/imageLib1.png is referenced by the project, and /In-Flight Entertainment System/image1.png has been exported because explicitly chosen in the export wizard page.

Let's consider that the local workspace is then completely cleaned up to import the remote projects.

The result of the import will be different according to the selected option:

Import all images

Import only used images

Do not import images


  • When importing the project locally, it will also create the projects containing the referenced images. These projects are also zipped by the importer job. See the archiveProject parameter in the Importer Parameters chapter.
  • By default, the importer job uses the Import all images option; this option is not yet configurable with a specific parameter.

Images on the Team for Capella Server: What to retain in a few words

What to retain in a few words:

  • Only images that exist on the repository can be used.
  • To upload images to the server, they must be selected manually when exporting a project from the Select images to export on the repository page.
  • It is also possible to manage images on the server from the Manage Images from Remote Server context menu, available from a shared aird file or an open connected project.

Images used in diagrams

In remote models, only images that exist on the repository can be used. Images from the workspace or from a local directory must be uploaded to the server in order to be used in a remote model.

In a diagram, it is possible to associate an image with a node using "Set style to workspace image".

Select the project or folder where your image is located and select it in the image gallery:

From this dialog it is also possible to manage remote images. Refer to "Manage images on remote repository" documentation

Images used in Capella description editor

It is possible to add a description with images, for any element of a Capella project, using the description tab in the Properties view.

As in remote models, only images that exist on the repository can be used. There are two ways to add an image to the description:

To add an image with the selection dialog, click on the Add image button and choose the image.

Images are then added to the description:


3.7. Working with Libraries in a Multi-user Context

  1. Working with Libraries in a Multi-user Context
    1. Export Procedure
    2. Project/Library Usage
    3. Limitations and Known Issues

Export Procedure

One classical pitfall is to export models (libraries and projects) that are linked by a "reference" relationship one by one. Rather, the export of linked models must be done at the same time, because doing it one by one may lead to re-exporting already exported models. For the sake of illustration, having two projects P1 and P2 referencing library L1 may lead to a re-export of L1 if one tries to export P2 after having exported P1. The following section describes the correct procedure.

We assume in this section that a Team for Capella Client is opened and that its workspace contains a set of models (projects and libraries) that are interconnected by reference links.

In that context, the export procedure is as follows:

  1. Select all AIRD files of the interlinked models,
  2. Right-click on the selection, and click on export,
  3. Choose Export model in the Team for Capella category,
  4. Test the connection, authenticate if required and click on Finish.
  5. You can afterwards connect to the models you want as usual.

The figure below illustrates the four steps described above in the given context:

Project/Library Usage

Libraries can be accessed as classic remote projects with Team for Capella and have almost the same behavior as with Capella standalone:

It is allowed to open, in the same client, a project and some libraries it references. Thus it is possible to have 2 views (or more) of the same semantic elements:

If a library is referenced with a "readAndWrite" access policy, it is allowed to change its semantic model from the project connection, from P1.team in this example:

Even if the user is logged in with the same login to L1 and to P1, if a change is done on one side, there will be a green lock on this side and a red lock on the other (so concurrent changes are forbidden on the library's elements).

Limitations and Known Issues


3.8. Client Configuration

  1. Client Configuration
    1. Preferences
      1. Team Preferences
      2. Other Preferences
      3. Configuration Project
      4. VM Arguments

Preferences

Team Preferences

Team Preferences are available in Window / Preferences / Sirius, section Team Collaboration.

The Registered Repositories section contains all saved server information. There is a default saved repository that can be overridden only in this preference page. Registered repositories can be edited, duplicated or removed and new repository configurations can be added. All these configurations can be retrieved in the Connection / Import / Export wizards.

The check box "Require description for commit actions" specifies whether a dialog allowing the user to enter a description when committing should be displayed systematically or not.

By activating the preference "Pre-fill commit description", any time the user is asked to enter a commit description, the framework will compute one using a list of registered participants (see description below). This description will be presented to the user so he can modify it or simply reuse it for his current commit.

By activating the preference "Automatically use the pre-filled description when none is provided", any time the user commits and does not specifically provide a commit description, the description computed by the mechanism described above will be used.

Other Preferences

Please check the following settings in the other sections of the Preferences.

For better responsiveness of the whole workbench, the synchronization of the Semantic Browser should be disabled. Reminder: when the Semantic Browser is not permanently synchronized, typing F9 focuses the Semantic Browser on the currently selected element.


"Automatic refresh" and "Do refresh on representation opening" are activated by default as it is in Capella.

They can nevertheless be overridden at the project level.


Automatic synchronization of Semantic Browser is deactivated by default.

Configuration Project

A Capella Configuration Project cannot be shared between several users by exporting it to the Server.

To use the Capella Configurability feature in Team for Capella, the Capella Configuration Project needs to be referenced in each Team for Capella connection project.

VM Arguments

The client behavior can also be set using VM arguments added to the capella.ini or in a launch config.
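
As an example, one of the system properties documented in the release notes could be added to capella.ini after the -vmargs line (a sketch only; the property shown here is the XmiId synchronization switch already described above, and the rest of the -vmargs section depends on your installation):

  -vmargs
  ...
  -Dcom.thalesgroup.mde.cdo.emf.transaction.disableXmiidsSynchronization=false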


3.9. Change management

  1. Change management
    1. Introduction
    2. Main documentation
    3. Filling up extra information
      1. Using CDO History
      2. Using Mylyn
    4. Export user activities
    5. Use exported activities
    6. Comparing commits

Introduction

Change management is about adding extra information about users' activities while modeling. This information can be related to any aspect of the modeling session (current tasks, current teams, a more detailed explanation, etc.). Its integration in Team for Capella provides a way to:

This information is attached to a commit. It can be visualized in the Commit History View by selecting each commit. Be aware that some commits are made by the modeler itself; they do not represent commits that users would have made. They are tagged with the property team-technical-commit : true.

Main documentation

The main documentation of the Commit History View is available in the corresponding section of the Sirius Collaborative Mode user documentation.

Note that some actions have been hidden in Team for Capella, such as the Create Branch... and Checkout popup menus. You can enable the CDO Actions capability in the Preferences page to access them.

Filling up extra information

In Team For Capella there are 2 ways to fill up the extra information attached to a commit.

The following sections explain the different facilities used to compute a commit description.

Using CDO History

This strategy uses the history of the Team for Capella Server to guess what information the user wants to enter. Before each commit, it will look for the last commit done by the current user (that is not a technical commit). For example, let's say the current user is user1 and the server has the following history:

  Date               User    Description
  31/08/2017 16:00   User1   Update Xmi Ids
                             team-technical-commit : true
  31/08/2017 15:59   User2   Activity 2
                             Doing some work
  31/08/2017 16:58   User1   Activity 1
                             Doing some other work
  31/08/2017 16:57   User1   Activity 1
                             Doing some other work

If user1 saves the model, the framework would compute the following commit description:

Activity 1

Doing some other work

If he has activated the preference "Require description for commit actions", a dialog will open suggesting this message.

If it is not activated and the preference "Automatically use the pre-filled description when none is provided" is activated, the commit will be made using this message as the commit description.

In order to activate this strategy, go to the preference page Sirius > Team collaboration. Select Pre-fill commit description and select CDO History. Be aware that this mode only works on an authenticated Team for Capella Server.

Using Mylyn

This strategy uses Mylyn tasks to compute a commit description. Using the template defined in "Preference > Mylyn > Team", it computes a commit description from an active and not completed task. This strategy is really handy when using the "Automatically use the pre-filled description when none is provided" preference. Indeed, with this configuration the user only has to activate or deactivate Mylyn tasks to have a clean history filled up with extra information.

In order to activate this strategy, go to the preference page Sirius > Team collaboration. Select Pre-fill commit description and select Mylyn.

Export user activities

Once the history is filled with meaningful information, the user might want to use it. To do so, he can export it to a model format using the "Export Metadata" actions from the Commit History view.

Another way to export metadata is by using the importer.

Use exported activities

Once the information is exported to a file, a model editor can be used to browse the different activities that occurred on the server. Using the "text" tab, the user has access to a textual representation of the current model. He can even query it using AQL requests (more documentation here). Here is a representation of the metamodel:

For example, he might want to request all users that have participated in a given activity. To do so he could use the following AQL request:

aql:self.activities->select(a|a.description.contains('Activity 1'))->collect(a|a.userId)

Using a dedicated format in the commit description (defined here), the user can even create his own custom properties. Each one of them will be transformed into an ActivityProperty. This can be used to create more advanced AQL requests.
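
As another illustration, reusing only the attributes already shown in the example above (activities, userId, description), the following request would list the descriptions of all activities committed by a given user (a sketch; 'user1' is a hypothetical user id):

aql:self.activities->select(a | a.userId = 'user1')->collect(a | a.description)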

Comparing commits

When using a server configured in Audit mode, it is possible to compare commits with each other. To do so, the user should open the Commit History view. From there he can select one or two commits and use the "Compare with each other" or "Compare with previous" menus. The comparison is done using the Diff/Merge framework (see document here).

Limitation: The Commit History View allows merging consecutive commits with the same user and description into only one visible commit. The Diff/Merge actions are not enabled on this kind of commit. You first have to deactivate the "Merge Consecutive Commits" option to enable those actions.

In the picture above, the differences are stored under 2 roots, each representing a resource.

Be aware that at this time the integration between Team for Capella and Diff/Merge does not offer merge functionalities.

4. Project Administrator Guide


4.1. Project Administrator Overview

The Team for Capella installation can be completed with Jenkins, used as a scheduler for various jobs managing the Capella projects shared on a CDO server. Indeed, Project Administrators will find functionalities concerning:


4.2. Jenkins Configuration

  1. Jenkins Configuration
    1. Team for Capella Scheduler
      1. Server Management
        1. Server - List active repositories
        2. Server - List connected projects and locks
        3. Server - Start
        4. Server - Start repository
        5. Server - Stop
        6. Server - Stop repository
        7. License Server - Start
      2. Backup and Restore
        1. Database - Backup
        2. Database - Restore
        3. Projects - Delete
        4. Projects - Export
        5. Projects - Import
        6. User profile - Import model
      3. Diagnostic and Repair
        1. Repository - Diagnostic
        2. Repository - Maintenance
      4. Credentials
        1. Server - Rest Admin - Manage User Tokens
        2. Server - Rest Admin - Manage Users
        3. Tools - Clear credentials
        4. Tools - Store credentials
      5. Templates
    2. How to Start the Team for Capella Scheduler
      1. Windows
      2. Linux
      3. How to start the Server when Scheduler starts
    3. How to change job scheduling
    4. How to Stop the Team for Capella Scheduler
    5. Activate Security in Jenkins
    6. Azure AD authentication for Jenkins
    7. How to Change Backup and Import Files Purge Policy
    8. How to Dissociate Multiple Projects in Jenkins
      1. Purpose
      2. Jobs Creation
      3. Access Rights Definition (whole Jenkins instance level)
      4. Access Rights Definition (job/project level)
      5. Result
      6. Known Limitations
        1. Inter-project Information Sharing
    9. Tips and Tricks
      1. Configure Number of Scheduler Build Processes
      2. Create Scheduler Job Environment Variables
      3. Create a Server - Start Job from Template
      4. Create a Server - Stop Job from Template
      5. Create a Database - Backup Job from Template
      6. Create a Projects - Import Job from Template
    10. Troubleshooting
      1. Jenkins window service is not launched when there are multiple versions of Java installed
      2. Connection timeout is too short

Team for Capella Scheduler

Team for Capella provides many applications (backup, diagnostics...) manageable by Jenkins jobs, in order to have a web interface for managing your shared projects. You can refer to the documentation for the installation of Jenkins.

The full Jenkins documentation can be found at the following address: https://www.jenkins.io/doc/.

By default it is available on port 8036: when logged on to the computer running the Scheduler, type the following address in a web browser:

http://localhost:8036

By default, for all jobs, the results of the last 100 job executions (called "builds" in Jenkins) are kept by Jenkins (build artifacts and logs). Note that all these jobs can be changed with the Jenkins application.

The default view is the "Server Management" one.

Server Management

Server – List active repositories

This job lists the currently active repositories on the server.

The list result is logged in the console output of the job.

These repositories can be stopped by using the Server – Stop repository job.

Server – List connected projects and locks

This job lists :

Server – Start

This job starts the server. By default, this job starts the server every Saturday at 06:00. It never stops (and must not be aborted) unless "Server – Stop" is launched.

Server – Start repository

This job starts a repository on the server that was previously stopped by the «Server – Stop repository» job. When a server starts, all its repositories start as well.

Server – Stop

This job stops the server. By default, this job stops the server every Saturday at 05:00 (and is restarted one hour later by the previous job).

Server – Stop repository

This job stops an active repository on the server.

Use Server – List active repositories to list all active repositories.

The stopped repository cannot be reached and remote projects existing in this repository cannot be modified. Using the Database – Backup job will not back up the stopped repository.

The server will still be running and the other non-stopped repositories will still be reachable.

License Server – Start

This job is only present in the commercial versions of Team for Capella.

It allows managing the license server directly from the Scheduler. It is disabled by default.

Backup and Restore

Database – Backup

This job does a dump of the database into a zip file and keeps it as an artifact of the build. By default, it is launched automatically 3 times a day (07:30, 12:30 and 20:30) from Monday to Friday.

Note that this job will perform a backup of the whole server. If several repositories are started, it creates one zip file per repository.

We strongly recommend having one database path per repository. See How to Add a New Repository.

Database – Restore

This job is intended to restore the database from a previously backed up database.

The backup folder is a result of the "Database – Backup" job.

If you want to restore only one repository, move all other archives out of the backup folder and keep only the one specific to your repository.

Projects – Delete

It executes the exporter application to delete a project from the given repository without any user interaction.

This job will delete a project according to its name on the server, given as parameter.

Projects – Export

It executes the exporter application to export projects automatically from a local folder (or archive) on the server without any user interaction.
This job will export the projects from a specific source. This source can be

This job needs to be configured to specify the folder.

If the job fails, you may have a wrong folder path, or no representation files were found in the folder.

Projects – Import

It executes the importer application to import projects automatically from a server without any user interaction and archives them as Job’s artifacts. By default, it is launched automatically every hour from 07:00 to 21:00 Monday to Friday.

This job will import the projects for a specific repository. It needs to be configured to specify the repository and, optionally, a specific list of projects to import. If you have several repositories, you will have as many "Projects – Import" jobs, which may start at the same time, so you need to configure the number of job executors accordingly. Go to the Manage Jenkins > Configure System menu if the number of T4C repositories has been extended: # of executors ≥ number of repositories + 3.

This job is by default configured to use the Snapshot import strategy. Refer to the Importer strategies documentation for more details.

If the job fails, you may have corrupted data in your database that could prevent projects from being imported. This could lead to data loss if one day you really need those imported projects. In that case, you may:

User profile – Import model

This job extracts the user profile model from the database and saves it locally in the archiveFolder.

It is disabled by default and must be enabled only if the repository is configured to use the "User Profiles" access control mode.

Diagnostic and Repair

These jobs cannot be started if the authentication is based on OpenID Connect. You must start the server with another authentication mode or with no authentication.

Repository – Diagnostic

This maintenance job needs to be manually launched. This job runs a diagnostic in order to detect inconsistencies described in Server Administration / Administration Tools / Repository maintenance application.

The diagnostic result is logged in the console output of the job. It is kept as an artifact of the job result.

The diagnostic is run for a specific repository and needs to be configured according to your repository name.

Repository – Maintenance

This maintenance job needs to be manually launched. It is recommended to launch the Repository – diagnostic job first.

It runs a diagnostic in order to detect inconsistencies described in Server Administration / Administration Tools / Repository maintenance application. Then, it launches the maintenance tasks if some managed issues are detected: it will back up the server with the capella_db command, perform the required changes on the database and close the server. The steps are logged in the console output of the job and the corresponding log file is kept as an artifact of the job result.

The maintenance is run for a specific repository and needs to be configured according to your repository name.

Credentials

Server – Rest Admin – Manage User Tokens

This job executes the Tools Credentials Application to manage the access tokens to the Rest API, for a specific user.

Launching a build requires setting values for four parameters:

Server – Rest Admin – Manage Users

This job executes the Tools Credentials Application to manage the Rest API registered users.

Launching a build requires setting values for five parameters:

Tools – Clear credentials

This job executes the credentials application to clear, from the Eclipse Secure Storage, the credentials used by the importer application to connect to the rest admin server or to a CDO repository.

As credentials need to be associated with a repository, when this job is executed it will start by asking you to fill in the following parameters:

Note that credentials are required only with the Connected import strategy. See Importer strategies for more details.

Tools – Store credentials

This job is the opposite of the previous one: it stores the credentials in Eclipse Secure Storage, allowing either to connect to the rest admin server or to connect to a CDO repository.

As credentials need to be associated with a repository, when this job is executed it will start by asking you to fill in the following parameters:

Note that credentials are required only with the Connected import strategy. See Importer strategies for more details.

Templates

This view contains templates of jobs which are disabled by default. They are provided as an example to show how to create backup jobs whose result is pushed to a Git repository.

See each job description in the Scheduler to see how to use them.

How to Start the Team for Capella Scheduler

The Jenkins installation should have included the creation of a new service (named Jenkins) that automatically starts Jenkins with the system.

Windows

If you do not have the Jenkins service, go to Jenkins (or start it manually from its installation folder), go to the Manage Jenkins configuration page and select Install as a Windows service.

Linux

The Jenkins service can be started or stopped by using the systemctl command:

systemctl start jenkins
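
Similarly, the service can be stopped, or its status checked, with the same command:

systemctl stop jenkins
systemctl status jenkins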

How to start the Server when Scheduler starts

To start the Team for Capella Server automatically when the scheduler starts (i.e. launch the Start server job), go to the configuration page of the Start server job and check the box "Build when job nodes start"; the "Quiet period" parameter allows delaying the start:

How to change job scheduling

Every job contains in its configuration page a text field called "Schedule". Use this field to change the Job’s scheduling configuration. It is visible on the previous screenshot.
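
For illustration, the "Schedule" field uses Jenkins' cron-like syntax (minute, hour, day of month, month, day of week). The default Saturday 06:00 start of the "Server – Start" job, for instance, would correspond to an entry such as the following (a sketch; refer to the field's inline help for the complete syntax, including the H token used to spread load):

0 6 * * 6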

How to Stop the Team for Capella Scheduler

To stop the Jenkins scheduler, go to the Manage Jenkins page and select Prepare for Shutdown

This sends a warning to anyone currently connected to the scheduler and lets the jobs currently running or queued finish. After that, you can simply go to the Windows services and stop the Jenkins service.

Activate Security in Jenkins

By default in the scheduler, the security checks are disabled. This means that Jenkins is available to anyone who can access the Jenkins web UI, without being asked for a login and password.

It is possible to configure security within Jenkins in order to define a group of users who are allowed to log in to Jenkins, and to check user passwords against LDAP or Jenkins' own user database. To do that, the procedure is the following:

  1. Connect to Jenkins as a user with administration rights.
  2. Select Manage Jenkins
  3. Select Configure Global Security .
  4. Select the Jenkins' own user database security realm radio button to register users in Jenkins or select the LDAP radio button to register configurations for the LDAP servers that Jenkins should search.
  5. To configure an LDAP server, select the corresponding radio button and then the Advanced... button underneath the Server text field.
  6. Enter the LDAP settings as shown in the following diagram:
  7. Note: The group specified in Group search base and the username specified in Manager DN may need to be changed. The password specified in Manager Password is the password for the user in the Manager DN field.
  8. To ensure that only logged-in users can perform actions, select Authorization -> Logged-in users can do anything.
  9. Save the configuration changes.
  10. Log in to Jenkins via the log in link in the top right-hand corner of the screen.

You can also decide to use the Jenkins' own user database:

  1. Connect to Jenkins as a user with administration rights.
  2. Select Manage Jenkins .
  3. Select Configure Global Security .
  4. Select the Enable security checkbox, the Jenkins' own user database security realm radio button and then place a check mark next to Allow users to sign up .
  5. Save
  6. Create a user (menu in top right corner)
  7. Log in to Jenkins via the log in link in the top right-hand corner of the screen and go back to http://localhost:8036/configure (or select Manage Jenkins and then Configure Global Security ).
  8. In the security realm section, remove the check mark next to Allow users to sign up
  9. In the Authorization section, select the Matrix-based security mode,
  10. In the text box below the matrix, type your user name and click Add
  11. Give yourself full access by checking the entire row for your user name
  12. Configure other users
  13. Click Save at the bottom of the page. You will be taken back to the top page.
  14. Restart Jenkins

More details can be found in https://www.jenkins.io/doc/book/system-administration/security/ .

Azure AD authentication for Jenkins

A Jenkins plugin allows the authentication to be handled by MS Azure AD. This plugin is automatically installed by the Jenkins plugins for Team for Capella installation script, but if you have installed Jenkins by other means, it can be installed as follows:
First, go to Manage Jenkins > Manage Plugins. On the Available tab, look for Azure AD Plugin. Before installing it, hover your mouse over the label and open the link on a new tab. This will open a documentation page useful later. Now, check the plugin and press the download and install button. Restart Jenkins.
Once restarted, Jenkins is ready to be configured for an authentication with Azure AD. For that, go to the tab that was opened previously and follow the documentation. There are two parts for this configuration, one in Azure AD and one in Jenkins.
Note that on the Jenkins setting part, when asked to fill in the Tenant, this corresponds to the Directory (tenant) ID of your Azure AD application. It is not necessarily the same value as in the CDO server configuration files (for instance, the value "organizations" can be used instead of the Tenant ID for the purpose of the OpenID discovery mechanism). Also, a test user is asked for in order to verify the authentication parameters. It is not the name that is needed here, but the User Principal Name or the Object ID of this user. Note that, if you want to have a different list of users having access to Jenkins (compared to the users that have access to the CDO server), you can create a new application on Azure dedicated to the scheduler access (Jenkins).

How to Change Backup and Import Files Purge Policy

How to Dissociate Multiple Projects in Jenkins

Purpose

I have 2 modeling projects (or more) working with Team for Capella and I want to isolate them in Jenkins (a person logged in to Jenkins must see only the Jenkins jobs dedicated to their project).

The proposed solution uses the internal Jenkins user database but is applicable, with some changes, to an LDAP server.

Note that this section can be adapted for different situations: multiple projects, multiple repositories or even multiple servers managed by the same Scheduler.

Jobs Creation

When Jenkins is started for the first time, it contains all necessary jobs:

Let’s say the "Projects – Import" job will be used for Project 1. So, rename it to "Project 1 – Import":

Now we will create jobs for Project 2. Click on the "New Item" in the "Backup and Restore" tab.

Then select "Copy existing Job"). Copy the "Project 1 – Import" job and rename it into "Project 2 – Import".

The result is the following:

Project 1 and Project 2 jobs have to be configured correctly to be used (their build step must be modified to add -projectName ProjectXName, as sketched below) and the number of executors has to be increased.
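
As a sketch only (the repository name, output folder and project name below are illustrative, based on the default import command shown later in this chapter), the build step of "Project 1 – Import" could end up looking like this:

    cd %TEAMFORCAPELLA_APP_HOME%/tools
    importer.bat -data "%WORKSPACE%/importer-workspace" -outputFolder "%WORKSPACE%" -repoName repoCapella -projectName Project1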

Access Rights Definition (whole Jenkins instance level)

Go to "Manage Jenkins" / "Configure Global Security", set parameters as shown in the screenshot:

Do the following changes in the table:

The table must be as follows:

Click on "Save".

Access rights are now activated:

Create the "SuperAdmin" account and use it to log in Jenkins.

Access Rights Definition (job/project level)

Go to the "Configuration" page of a job dedicated to Project 1 and check "Enable project-based security":

Do the following changes in the table:

Do the same work on all jobs linked to Project1.

Repeat all above actions with "Project2Admin" and all jobs linked to Project2.

Result

Known Limitations

Inter-project Information Sharing

An admin/user dedicated to a project will not be allowed to see information on jobs of other projects.

For example, when logged in as Project2Admin with Project1's server running, Project2Admin will see:

Tips and Tricks

Configure Number of Scheduler Build Processes

The Team for Capella scheduler (Jenkins) can be configured for a maximum number of build processes that can execute concurrently.

In order to ensure the correct operation of all Team for Capella server jobs it is vital to set this maximum number of build processes correctly!

  1. Select Manage Jenkins .
  2. Select Configure System .
  3. Locate the setting # of executors and set the value according to the following rule:

For example, if the server machine is to run 5 Team for Capella server processes, then the value of # of executors would need to be set to 6 .

WARNING: setting this configuration parameter incorrectly can lead to complete system hangs, no Capella backups, etc!

Create Scheduler Job Environment Variables

Each Team for Capella server process relies on two network ports – a server port and a console port. In order to avoid confusion by using "magic" numbers for the ports within the scheduler jobs, it is best to create environment variables for these.

  1. Select Manage Jenkins .
  2. Select Configure System .
  3. Within the section Global properties -> Environment variables , press the Add button in order to add a new variable.
  4. Enter the server port environment variable name and value as follows: set the name to TEAMFORCAPELLA_SERVER_PORT_<repoName>, where <repoName> is replaced by the name of the repository, e.g. TEST_01; set the value to the configured server port value, e.g. 2036.
  5. Press the Add button in order to add a new variable.
  6. Enter the console port environment variable name and value as follows: set the name to TEAMFORCAPELLA_CONSOLE_PORT_<repoName>, where <repoName> is replaced by the name of the repository, e.g. TEST_01; set the value to the configured console port value, e.g. 12036.

Note: the hyphen character is not allowed within the names of environment variables. Therefore, in the above example, although the repository name is test-01, within the environment variable name the hyphen is replaced by an underscore, i.e. TEST_01.
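
With the example values above, the two global environment variables would be defined as follows:

TEAMFORCAPELLA_SERVER_PORT_TEST_01 = 2036
TEAMFORCAPELLA_CONSOLE_PORT_TEST_01 = 12036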

Create a Server – Start Job from Template

  1. From the main page of the Team for Capella scheduler, select the New Item link from the menu on the left-hand side of the screen.
  2. Enter the job name and source job template as follows: set the Job name to "Start server <serverPort> (<repoName>)", where <serverPort> is replaced by the configured server port number, e.g. 2036, and <repoName> is replaced by the repository name, e.g. TEST-01. Activate the Copy existing job radio button. In the Copy from text field, start typing the word "TEMPLATE" and then, from the drop-down list that appears, select the entry "TEMPLATE – Start server <serverPort> (<repoName>)". Press OK.
  3. In the job configuration screen, amend the Description text by replacing the placeholders <serverPort> and <repoName> with the actual server port and repository name respectively.
  4. Activate the job by de-selecting the Disable this project checkbox.
  5. Modify the Team for Capella server path within the Command field of the Build section, replacing serverPort and repoName within the path name with the configured server port and repository name respectively, for example:
  6. Upon saving the changes to the job the main screen for the new job appears.

Create a Server – Stop Job from Template

  1. From the main page of the Team for Capella scheduler, select the New Item link from the menu on the left-hand side of the screen.
  2. Enter the job name and source job template as follows: set the Job name to "Server – Stop <serverPort> (<repoName>)", where <serverPort> is replaced by the configured server port number, e.g. 2036, and <repoName> is replaced by the repository name, e.g. TEST-01. Activate the Copy existing job radio button. In the Copy from text field, start typing the word "TEMPLATE" and then, from the drop-down list that appears, select the entry "TEMPLATE – Server – Stop <serverPort> (<repoName>)". Press OK.
  3. In the job configuration screen, amend the Description text by replacing the placeholders <serverPort> and <repoName> with the actual server port and repository name respectively.
  4. Activate the job by de-selecting the Disable this project checkbox.
  5. Modify the Team for Capella console port environment variable within the Command field of the Build section, replacing TEAMFORCAPELLA_CONSOLE_PORT_repoName with the appropriate console port environment variable for this Team for Capella server/repo, for example:

    cd %TEAMFORCAPELLA_APP_HOME%/tools
    command.bat -consoleLog localhost %TEAMFORCAPELLA_CONSOLE_PORT_TEST_01% cdo stopserver

  6. Upon saving the changes to the job the main screen for the new job appears.

Create a Database – Backup Job from Template

  1. From the main page of the Team for Capella scheduler, select the New Item link from the menu on the left-hand side of the screen.
  2. Enter the job name and source job template as follows: set the Job name to "Database – Backup <serverPort> (<repoName>)", where <serverPort> is replaced by the configured server port number, e.g. 2036, and <repoName> is replaced by the repository name, e.g. TEST-01. Activate the Copy existing job radio button. In the Copy from text field, start typing the word "TEMPLATE" and then, from the drop-down list that appears, select the entry "TEMPLATE – Database – Backup <serverPort> (<repoName>)". Press OK.
  3. In the job configuration screen, amend the Description text by replacing the placeholders <serverPort> and <repoName> with the actual server port and repository name respectively.
  4. Activate the job by de-selecting the Disable this project checkbox.
  5. Modify the Team for Capella console port environment variable within the Command field of the Build section, replacing TEAMFORCAPELLA_CONSOLE_PORT_repoName with the appropriate console port environment variable for this Team for Capella server/repo, for example:

    del *-sql.zip
    cd %TEAMFORCAPELLA_APP_HOME%/tools
    command.bat -consoleLog localhost %TEAMFORCAPELLA_CONSOLE_PORT_TEST_01% capella_db backup '%WORKSPACE%'

  6. Upon saving the changes to the job the main screen for the new job appears.

Create a Projects – Import Job from Template

  1. From the main page of the Team for Capella scheduler, select the New Item link from the menu on the left-hand side of the screen.
  2. Enter the job name and source job template as follows: set the Job name to "Projects – Import <serverPort> (<repoName>)", where <serverPort> is replaced by the configured server port number, e.g. 2036, and <repoName> is replaced by the repository name, e.g. TEST-01. Activate the Copy existing job radio button. In the Copy from text field, start typing the word "TEMPLATE" and then, from the drop-down list that appears, select the entry "TEMPLATE – Projects – Import <serverPort> (<repoName>)". Press OK.
  3. In the job configuration screen, amend the Description text by replacing the placeholders <serverPort> and <repoName> with the actual server port and repository name respectively.
  4. Activate the job by de-selecting the Disable this project checkbox.
  5. It is not recommended to have multiple Import jobs launched at the same time. Each Import job must be shifted in time by at least 30 minutes. In the job configuration, in the Build Triggers section, modify the minutes and hours values within the schedule (first and second numeric cron fields) if needed.
  6. Within the Command field of the Build section, modify the Team for Capella server and console port environment variables and the Team for Capella repository name as follows: replace TEAMFORCAPELLA_SERVER_PORT_repoName with the appropriate server port environment variable for this Team for Capella server/repo, replace TEAMFORCAPELLA_CONSOLE_PORT_repoName with the appropriate console port environment variable for this Team for Capella server/repo, and replace <repoName> with the name of this Team for Capella repository, for example:

    del *.zip
    del *.txt
    del *.activitymetadata
    rd /s /q importer-workspace
    cd %TEAMFORCAPELLA_APP_HOME%/tools
    importer.bat -data "%WORKSPACE%/importer-workspace" -archivefolder "%WORKSPACE%" -stopRepositoryOnFailure true -checksize 5 -importCommitHistoryAsText -port %TEAMFORCAPELLA_SERVER_PORT_TEST_01% -consoleport %TEAMFORCAPELLA_CONSOLE_PORT_TEST_01% -repoName TEST_01
  7. Upon saving the changes to the job the main screen for the new job appears.

Troubleshooting

Jenkins Windows service is not launched when there are multiple versions of Java installed

By default, Jenkins will be launched using the java executable found in Windows\System. If the Java version of this executable is different from the key Java Runtime Environment\CurrentVersion in the registry, the service cannot be installed. If this problem is encountered, there are 2 solutions:

Connection timeout is too short

By default, the connection used to launch commands from jobs has a timeout of two minutes. However, in specific cases (like saving a large volume of modifications) you may want to increase this timeout value. If you launch an importer or maintenance job (which refers to the importer or maintenance application), you can increase this timeout by defining the parameter -consoleTimeout (see the Importer parameters documentation). If you launch another job (which refers to the command application), you can specify the connection timeout with a value in milliseconds just after the port number argument.
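
As an illustration only (assuming the command application invocation pattern shown in the job templates earlier in this chapter; the port value is an example), a five-minute timeout would be inserted right after the port number:

command.bat -consoleLog localhost 12036 300000 cdo stopserver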

14. Importer Configuration

4.3. Importer Configuration

  1. Importer Configuration
    1. Importer strategies
    2. Importer parameters
      1. Jenkins Text Finder configuration
      2. Add e-mail notification on failed backup
      3. How to set the password in secure storage
    3. Examples

The importer is an application used to extract the projects from the CDO server database to a local folder. It produces as many zip files as there are modeling projects. It can also be used to import the user profiles model.

The importer also extracts information from the CDO commit history in order to produce a representation of the activity made on the repository. This information is called Activity Metadata. See the help chapters The commit history view and Commit description preferences for a complete explanation. By default, the importer will extract Activity Metadata for every commit on the repository. Be aware that the parameter -projectName has no impact on this feature: it will also export commits that do not impact the selected project. Still, it is possible to specify a range of commits using the parameters -to and -from.

Importer strategies

Several import strategies are supported by the Importer application:

See also Projects - Import job documentation.

Importer parameters

Important: the importer.bat file uses -vmargs as a standard Eclipse parameter. Eclipse parameters used by importer.bat override the values defined in the capella.ini file. So if you want to change a system property existing in capella.ini (-vmargs -Xmx3000m, for example), do not forget to make the same change in importer.bat.
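
For example (a sketch based on the invocation patterns shown in the Examples section below; the output folder is illustrative), the heap setting would appear after -vmargs at the end of the importer command line:

importer.bat -nosplash -data importer-workspace -outputFolder C:/TeamForCapella/capella/result -vmargs -Xmx3000m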

The importer needs credentials to connect to the CDO server if the server has been started with authentication or user profile. Credentials can be provided using either -repositoryCredentials or -repositoryLogin and -repositoryPassword parameters. Credentials are required only for Connected import (see Importer strategies section above for more details). Here is a list of arguments that can be set to the Importer (in importer.bat or in a launch config):

Arguments Description
-repositoryCredentials Login and password can be provided using a credentials file. It is the recommended way for confidentiality reasons. If the credentials file does not contain any password, the password will be searched in the Eclipse secure storage. See how to set the password in the secure storage.

This parameter must not be used with the -repositoryLogin or -repositoryPassword parameters, otherwise the importer will fail.

To use this property file

  • Add the following program argument: -repositoryCredentials <path_to_credentials_file>
  • Fill the specified file using the following format (only one line allowed):
aLogin:aPassword

Note: Credentials are required only for Connected import (see Importer strategies section above for more details).

-repositoryLogin The importer needs a login in order to connect to the CDO server if the server has been started with authentication or user profile.

-repositoryLogin must not be used with -repositoryCredentials, otherwise the application will fail.

Note: Credentials are required only for Connected import (see Importer strategies section above for more details).

-repositoryPassword This parameter is used to provide the password corresponding to the login.

If -repositoryPassword is not used, the password will be searched in the Eclipse secure storage. See how to set the password in the secure storage. -repositoryPassword must not be used with -repositoryCredentials, otherwise the application will fail.

Warning: some special characters like double quotes might not be properly handled when passed as arguments to the importer. The recommended way to provide credentials is through the repositoryCredentials file or the secure storage.

Note: Credentials are required only for Connected import (see Importer strategies section above for more details).

-hostname Define the team server hostname (default: localhost).
-port Define the team server port (default: 2036).
-consolePort Define the team server console port (default: 12036).
-consoleTimeout Define the connection timeout in milliseconds (default: 120000 ms).
-connectionType The connection kind can be set to tcp or ssl (keep it in lower case) (default: tcp).
-httpLogin The Importer application will trigger an HTTP request. This argument allows giving a login to identify with on the Jetty server.
-httpPassword The Importer application will trigger an HTTP request. This argument allows giving a password to authenticate with on the Jetty server.
-httpPort The Importer application will trigger an HTTP request. This argument allows giving the port to communicate with on the Jetty server.
-httpsConnection The Importer application will trigger an HTTP request. This boolean argument specifies whether the connection should be Https or Http.
-importType The backup is available in three different modes:
PROJECT_ONLY to only export the shared modeling projects from the CDO repository to local;
SECURITY_ONLY to only export the shared user profile project from the CDO repository to local;
ALL to export both.

(default: PROJECT_ONLY)

-repoName Define the team server repository name (default: repoCapella).
-projectName By default, all projects are imported (with the right -importType parameter). The argument "-projectName X" can be used to import only project X (default: *).
-runEvery Import every x minutes (default -1: disabled).
-archiveFolder (deprecated) Define the folder where to zip projects (default: workspace). This argument is deprecated; instead you should use -outputFolder (and -archiveProject=true, but true is its default value).
-outputFolder Define the folder where to import projects (default : workspace).
-logFolder Define the folder where to save logs (default : -outputFolder).
-archiveProject Define if the project should be zipped (default: true). Each project will be zipped in a separate archive suffixed with the date. Some additional archives can also be created:
  • For projects containing images referenced by the current project: If the current project being managed by the importer process contains a diagram element that has a reference to an image which is located in another project, then this other project will be added in another zip file. See more information about image management
  • For Capella libraries: If the current project being managed by the importer process has a dependency to a library, then the resource of the library used by the current project will be part of another zip file.

Note: Some library resources may not be referenced by the current project and so are not included in the zip.

-overrideExistingProject If the output folder already contains a project with the same name, this argument allows removing this existing project.
-closeServerOnFailure Ask to close the server on project import failure (default: false). If the server hosts several repositories, it is better to use the parameter -stopRepositoryOnFailure.
-stopRepositoryOnFailure Ask to stop the repository on project import failure (default: false).
Note: it is currently not possible to restart a single repository, if defined in cdo-server.xml. To restart the stopped repository, stop and restart the server.
-backupDBOnFailure Backup the server database on project import failure (default: true).
-checkSize Check the project zip file size, in KB, under which the import of this project fails (default: -1 (no check)).
-checkSession Do some checks and log information about each imported project (default: true).
  • It checks that the project session can be opened and closed and that it contains no resource with an URI with the scheme cdo.
  • It also logs a lot of useful information about the project: used viewpoints, information about representations and capella models. For more details, refer to Sirius Session Details of the Sirius user documentation.
-errorOnInvalidCDOUri Raise an error on cdo uri consistency check (default: true).
-addTimestampToResultFile Add a time stamp to result files name (.zip, logs, commit history) (default: true).
-optimizedImportPolicy This option is no longer available since 1.1.2.
-maxRefreshAttemptBeforeFailure The max number of refresh attempts before failing (default: 10). If the number of attempts is reached, the import of a project will fail, but as this is due to the activity of remote users on the model, this specific failure will not close the repository or the server, even with "-stopRepositoryOnFailure" or "-closeServerOnFailure" set to true.
-timeout Session timeout used in ms (default: 60000).
-exportCommitHistory Whether the Commit History metadata should be exported (default: true). If the value is false, all other options about the commit history will be ignored. You should also update the "Jenkins Text Finder" configuration to avoid unstable build. See Jenkins Text Finder configuration section
-from The timestamp specifying the date from which the metadata will be exported. If omitted, it exports from the first commit of the repository. The timestamp should use the following format: yyyy-MM-dd'T'hh-mm-ss.SSSZ. For example, for the date 03/08/2017 10h14m28s453ms in the time zone +0100, use the argument "2017-08-03T10:14:28.453+0100". The timezone may be omitted (format without the Z part); in this case, the time zone of the system is used. The timestamp can also be computed from an Activity Metadata model. In that case, this parameter can either be a URL or a path in the file system to the location of the model. If the date corresponds to a commit, this commit is included. Otherwise the framework selects the closest commit following this date. In case a previous activity metadata is used, the last commit of the previous export is also included.
-to The timestamp specifying the latest commit used to export metadata. If omitted, it exports up to the last commit of the repository. The timestamp should use the following format: yyyy-MM-dd'T'hh-mm-ss.SSSZ. For example, for the date 03/08/2017 10h14m28s453ms in the time zone +0100, use the argument "2017-08-03T10:14:28.453+0100". The timezone may be omitted (format without the Z part); in this case, the time zone of the system is used. The framework selects the closest commit preceding this date. Be careful: due to technical restrictions, this parameter only impacts the range of commits for exporting activity metadata from the CDO server. Using this parameter will not export the version of the model defined by the given date.
-importCommitHistoryAsText Import commit history in a text file using a textual syntax (default: false). The file has the same path as the commit history model file, but with txt as extension.
-importCommitHistoryAsJson Import commit history in a json file format (default: false). The file has the same path as the commit history model file, but with json as extension.
-includeCommitHistoryChanges Import the commit history detailed changes for each commit done by a user with one of the save actions (default: false). The changes of commits done by wizards, actions and command line tools are not computed, those commits have a description which begins by specific tags like [Export], [Delete], [Maintenance], [User Profile], [Import], [Dump]. This option is applied for all kinds of export of the commit history (xmi, text or json files). Warning about the importer performance: if this parameter is set to true the importer might take more time particularly if the history of commits is long.
-computeImpactedRepresentationsForCommitHistoryChanges Compute the impacted representations while exporting changes (default: false). Warning about the importer performance: if this parameter is set to true the importer might take more time particularly if the history of commits is long. For each commit with changes to export, it will compute the impacted representations.
-XMLImportFilePath This option allows performing the import based on an XML extraction of the repository. It is mandatory for Offline and Snapshot imports; see the Importer strategies section for more details. It is recommended to provide an absolute path. Some arguments related to the server connection will be ignored. Only the arguments -outputFolder and -repoName are mandatory.
-cdoExport This option allows to send a snapshot creation command to the server before performing the import as described in Importer strategies section. (default: false). The -XMLImportFilePath argument is mandatory since the path is used to create and consume the snapshot. Note: The cdo export command takes the lock on projects aird resources. This strategy makes it possible to prevent a concurrency save from connected users. If the lock cannot be acquired after several attempts, an error message is logged and the import is cancelled.
-archiveCdoExportResult This option defines whether the XML file resulting from the cdo export command launched by the importer as an intermediate step (if -cdoExport is true) should be zipped (default: false). If the option is true, the XML file zip is created in the "Output folder" (see -outputFolder documentation) and the XML file is then deleted. -archiveCdoExportResult must not be used without the -cdoExport argument set to true, otherwise the application will fail: the application will only archive the XML file if it has produced it.
-help Print help message.


If the server has been started with user profile, the Importer needs to have write access to the whole repository (including the user profiles model). See Resource permission pattern examples section.

If this recommendation is not followed, the Importer might not be able to correctly prepare the model (proxies and dangling references cleaning, ...). This may lead to a failed import.


The importer uses the default configuration area of Capella and does not need its own configuration area. For this to work properly, the importer needs read/write permission on the configuration area of Capella, otherwise it can end up with errors about access being denied. A common situation where the importer runs into this issue is when the Scheduler is launched as a Windows service: in this case, the user account executing the service is not necessarily configured to have read/write permission on Capella's configuration area. If you cannot give the read/write permission to the importer, a workaround is to provide it with a dedicated configuration area by adding the following argument at the end of the importer.bat file: -Dosgi.configuration.area="path/to/importer/configuration/area" and, if necessary, update the existing argument -data importer-workspace to point to a location with read/write permission.
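
A minimal sketch of such a modified invocation (the paths are hypothetical and must point to locations writable by the account running the Scheduler; the system property is appended after -vmargs since it is a JVM-level argument):

importer.bat -nosplash -data D:/t4c-writable/importer-workspace -outputFolder "%WORKSPACE%" -vmargs -Dosgi.configuration.area="D:/t4c-writable/importer-configuration"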

Jenkins Text Finder configuration

The job contains a post-build action that verifies that the commit history metadata text file is generated, since the parameter exportCommitHistory is set to true by default:

If you change the parameter exportCommitHistory to false, the build will become unstable because of this configuration. So you should deactivate the option "Unstable if found" to avoid this warning, which does not make sense with this parameter set to false. Don't forget to set it back if you set the value to true again.

Add e-mail notification on failed backup

Thanks to the Jenkins Text Finder post-build action, if the logs of a build contain the text Warning, the build is marked as unstable (with a yellow icon). You can go further and be notified by email in that case. In the Projects – Import configuration page, scroll down or select the Post-build Actions tab. There, click on the Add post-build action button and choose E-mail notification.

On this new action, you just need to add the e-mail addresses to be notified in case of an unstable build.

How to set the password in secure storage

The importer does not use the same credentials as the user. They are stored in a different entry in the Eclipse 'Secure Storage'. Storing and clearing the credentials requires a dedicated application that can be executed as an Eclipse Application or using a Jenkins job.

Examples

example1: import project

importer.bat -nosplash -data importer-workspace
-closeServerOnFailure true
-backupDbOnFailure true
-outputFolder C:/TeamForCapella/capella/result
-connectionType ssl
-checkSize 10

example2: import user profile model

importer.bat -nosplash -data importer-workspace
-closeServerOnFailure false
-backupDbOnFailure false
-outputFolder C:/TeamForCapella/capella/result
-connectionType ssl
-checkSize -1
-importType SECURITY_ONLY
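
A third example, not part of the original job set but built only from the parameters documented above, sketches a snapshot import driven by a server-side CDO export (the paths are illustrative):

example3: snapshot import (sketch)

importer.bat -nosplash -data importer-workspace
-outputFolder C:/TeamForCapella/capella/result
-repoName repoCapella
-XMLImportFilePath C:/TeamForCapella/capella/result/repoCapella-export.xml
-cdoExport true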

19. Exporter Configuration

4.4. Exporter Configuration

The exporter is an application used to export all projects from a given local folder into a remote repository. It can also be used to export the user profiles model.

Exporter strategy

The Exporter application supports one strategy:

See also Projects - Export job documentation.

Exporter parameters

Important: the exporter.bat file uses -vmargs as a standard Eclipse parameter. Eclipse parameters used by exporter.bat override the values defined in the capella.ini file. So if you want to change a system property existing in capella.ini (-vmargs -Xmx3000m, for example), do not forget to make the same change in exporter.bat.

The exporter needs credentials to connect to the CDO server if the server has been started with authentication or user profile. Credentials can be provided using either -repositoryCredentials or -repositoryLogin and -repositoryPassword parameters. Here is a list of arguments that can be set to the Exporter (in exporter.bat or in a launch config):

Arguments Description
-repositoryCredentials Login and password can be provided using a credentials file. It is the recommended way for confidentiality reasons. If the credentials file does not contain any password, the password will be searched in the Eclipse secure storage. See how to set the password in the secure storage.

This parameter must not be used with the -repositoryLogin or -repositoryPassword parameters, otherwise the exporter will fail.

To use this property file

  • Add the following program argument: -repositoryCredentials <path_to_credentials_file>
  • Fill the specified file using the following format (only one line allowed):
aLogin:aPassword
-repositoryLogin The exporter needs a login in order to connect to the CDO server if the server has been started with authentication or user profile.

-repositoryLogin must not be used with -repositoryCredentials else the application will fail.

-repositoryPassword This parameter is used to provide the password corresponding to the login.

If -repositoryPassword is not used, the password will be searched in the Eclipse secure storage. See how to set the password in the secure storage. -repositoryPassword must not be used with -repositoryCredentials, otherwise the application will fail.

Warning: some special characters like double quotes might not be properly handled when passed as arguments to the exporter. The recommended way to provide credentials is through the repositoryCredentials file or the secure storage.

-hostname Define the team server hostname (default: localhost).
-port Define the team server port (default: 2036).
-consolePort Define the team server console port (default: 12036).
-consoleTimeout Define the connection timeout in milliseconds (default: 120000 ms).
-connectionType The connection kind can be set to tcp or ssl (keep it in lower case) (default: tcp).
-repoName Define the team server repository name (default: repoCapella).
-sourceToExport Define the path of folder containing projects to export.

This folder can be:

  • a folder that contains one or more projects to export,
  • a zip file containing one or more Sirius projects (i.e. containing an aird file),
  • a folder that contains one or more zip files.
-logFolder Define the folder where to save logs (default : -outputFolder).
-overrideExistingProject If the remote repository already contains a project with the same name as a project to export, this argument allows removing this existing project (default: false).
-mergeDifferenceOnExistingProjects If -overrideExistingProject is set to true (default: false), this argument allows selecting one of the two following override strategies:
  • Replace: Delete remote resources content and replace by local content (commit history is lost) (default)
  • Merge: Use Diff/Merge to compare local and existing resources and commit only the differences.
-overrideExistingImage If the remote repository already contains an image with the same name, this argument allows ignoring and overriding it.
-closeServerOnFailure Ask to close the server on project export failure (default: false). If the server hosts several repositories, it is better to use the parameter -stopRepositoryOnFailure.
-stopRepositoryOnFailure Ask to close the repository on project export failure (default: false).
Note: it is currently not possible to restart a single repository, if defined in cdo-server.xml. To restart the stopped repository, stop and restart the server.
-addTimestampToResultFile Add a time stamp to result files name (.zip, logs, commit history) (default: true).
-timeout Session timeout used in ms (default: 60000).
-httpLogin The Exporter application will trigger an HTTP request. This argument allows giving a login to identify with on the Jetty server.
-httpPassword The Exporter application will trigger an HTTP request. This argument allows giving a password to authenticate with on the Jetty server.
-httpPort The Exporter application will trigger an HTTP request. This argument allows giving the port to communicate with on the Jetty server.
-httpsConnection The Exporter application will trigger an HTTP request. This boolean argument specifies whether the connection should be Https or Http.
-help Print help message.


If the server has been started with user profile, the Exporter needs to have write access to the whole repository (including the user profiles model). See Resource permission pattern examples section.

If this recommendation is not followed, the Exporter might not be able to override existing projects on the remote repository, for example. This may lead to a failed export.


The exporter uses the default configuration area of Capella and does not need its own configuration area. For this to work properly, the exporter needs read/write permission on the configuration area of Capella, otherwise it can end up with errors about access being denied. A common situation where the exporter runs into this issue is when the Scheduler is launched as a Windows service: in this case, the user account executing the service is not necessarily configured to have read/write permission on Capella's configuration area. If you cannot give the read/write permission to the exporter, a workaround is to provide it with a dedicated configuration area by adding the following argument at the end of the exporter.bat file: -Dosgi.configuration.area="path/to/exporter/configuration/area" and, if necessary, update the existing argument -data exporter-workspace to point to a location with read/write permission.

How to set the password in secure storage

The exporter does not use the same credentials as the user. They are stored in a different entry in the Eclipse 'Secure Storage'. Storing and clearing the credentials requires a dedicated application that can be executed as an Eclipse Application or using a Jenkins job.

Examples

example1: export project

exporter.bat -nosplash -data exporter-workspace
-closeServerOnFailure true
-connectionType ssl
-sourceToExport C:\Users\me\Documents\runtime-T4C
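
A second example, not part of the original guide, sketches an export that replaces a project already present in the repository (the source path is illustrative, and only parameters documented above are used):

example2: export project, overriding an existing remote project (sketch)

exporter.bat -nosplash -data exporter-workspace
-connectionType ssl
-sourceToExport C:\TeamForCapella\projectsToExport
-overrideExistingProject true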

17. Client Preferences Initialization

4.5. Client preferences initialization

  1. Client preferences initialization
    1. Introduction
    2. Setting the default preference values (recommended)
    3. Preference keys
      1. How to discover the preference value
    4. Setting the preferences value for the workspace

Introduction

Like any Eclipse application, Team for Capella uses preferences to manage the behavior of the application.

There are many preference scopes, including the default and the instance scope. The instance scope, if set, takes priority over the default scope. The default scope holds the default values provided by the application. The instance scope corresponds to the preferences a user can change with the Preferences dialog box, accessible with the menu Windows/Preferences. These preferences are stored in the user's workspace. For more details, refer to the Eclipse Preferences documentation.

For more information about the preferences used for Team For Capella, refer to the client preferences documentation.

The Administrator, in charge of customizing the product functionalities, may want to

Setting the default preference values (recommended)

To initialize the default preferences without having to provide a plug-in, you can use the pluginCustomization Eclipse parameter. Refer to Eclipse Runtime documentation for more information.

The principle is to declare a property file which contains key/value pairs. The key is the qualified name of the preference and the value is the value of the preference.
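
A minimal sketch of such a property file, reusing preference keys from the tables below (the host name is a placeholder); the file is typically referenced with the -pluginCustomization <file> program argument described in the Eclipse Runtime documentation:

org.eclipse.sirius/PREF_AUTO_REFRESH=false
fr.obeo.dsl.viewpoint.collab/PREF_DEFAULT_REPOSITORY_LOCATION=teamserver.example.org
fr.obeo.dsl.viewpoint.collab/PREF_DEFAULT_REPOSITORY_PORT=2036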

Preference keys

Preferences have a default value that is associated with the Team for Capella application. This chapter explains how to change their default value. Nevertheless, the user has the ability to use a different value than the default one, using the Preferences dialog box. This sets a value for the scope corresponding to the user workspace. The workspace scope has a higher priority than the default scope.

Sirius Preferences

Preference keys

Default value if not set

Sirius "Automatic Refresh" and "Do refresh on representation opening"

org.eclipse.sirius.ui/PREF_REFRESH_ON_REPRESENTATION_OPENING=<boolean value>
org.eclipse.sirius/PREF_AUTO_REFRESH=<boolean value>

true


Team collaboration Preferences

Preference keys

Default value if not set

Check by default the checkbox in the "Capella Connected Project" wizard to have the Sirius refresh preferences specific to the connected project that is being created.

fr.obeo.dsl.viewpoint.collab/PREF_ENABLE_PROJECT_SPECIFIC_SETTINGS_DEFAULT_VALUE=<boolean value>

true

Connection Url
1- Alias
2- Server IP address
3- Server port
4- Connection type
5- Repository name


1- fr.obeo.dsl.viewpoint.collab/PREF_DEFAULT_REPOSITORY_ALIAS=<string value>
2- fr.obeo.dsl.viewpoint.collab/PREF_DEFAULT_REPOSITORY_LOCATION=<string value>
3- fr.obeo.dsl.viewpoint.collab/PREF_DEFAULT_REPOSITORY_PORT=<integer value>
4- fr.obeo.dsl.viewpoint.collab/PREF_DEFAULT_CONNECTION_TYPE= enumeration [TCP, SSL]
5- fr.obeo.dsl.viewpoint.collab/PREF_DEFAULT_REPOSITORY_NAME=<string value>


1- "Default"
2- localhost
3- 2036
4- TCP
5- repoCapella

Commit history view
1- Require description for commit actions
2- Pre-fill commit description
3- Commit description provider
4- Automatically use the pre-filled description when none is provided


1- fr.obeo.dsl.viewpoint.collab/PREF_ENABLE_DESCRIPTION_ON_COMMIT=<boolean value>
2- fr.obeo.dsl.viewpoint.collab/PREF_COMPUTE_COMMIT_DESCRITION=<boolean value>
3- fr.obeo.dsl.viewpoint.collab/PREF_PREFERRED_DESC_PARTICIPANT= complex value
4- fr.obeo.dsl.viewpoint.collab/PREF_AUTO_USE_PRE_FILLED_COMMIT_DESC=<boolean value>


1- false
2- false
3- Default
4- false

Release all explicit locks after committing

fr.obeo.dsl.viewpoint.collab/PREF_RELEASE_EXPLICIT_LOCK_ON_COMMIT=<boolean value>

false

Display Write Permission Decorator

fr.obeo.dsl.viewpoint.collab/PREF_DISPLAY_WRITE_PERMISSION_DECORATOR=<boolean value>

true

Ability to lock the semantic element at representation creation or move

fr.obeo.dsl.viewpoint.collab/PREF_LOCK_SEMANTIC_TARGET_AT_REPRESENTATION_LOCATION_CHANGE=<boolean value>

true

How to discover the preference value

Sometimes, the value of a preference is complex. This is the case for some preferences visible in the Preferences dialog box. To know the value of a particular preference:

Setting the preferences value for the workspace

Once you have configured the preferences using the Preference dialog box, you have to export the preferences to a text file:

Then each user will have to import the preference file to set the preference values for their workspace.

  • The import process has to be done for each workspace.
  • Using the Preferences dialog box allows you to configure the preferences without knowing their technical names, but some preferences are not available in the Preferences dialog box. You then have to add them manually in the exported preferences file; refer to the Preference keys section to know what to add (see also the sketch below).
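
For illustration, assuming the standard Eclipse preference export format, an exported file with a manually added entry might look like the following sketch (the keys come from the tables above; the /instance/ prefix denotes the workspace scope):

file_export_version=3.0
/instance/org.eclipse.sirius/PREF_AUTO_REFRESH=false
/instance/fr.obeo.dsl.viewpoint.collab/PREF_RELEASE_EXPLICIT_LOCK_ON_COMMIT=true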

5. System Administrator Guide

00. System administrator overview

5.1. System Administrator Overview

System administrators handle the installation, configuration and authentication on the CDO server that is used for sharing Capella projects. For these activities, Team for Capella provides the following functionalities in Eclipse or as jobs which can be installed in a Jenkins used as a scheduler:

Team for Capella bundles and installation guide are available at https://www.obeosoft.com/en/team-for-capella-download.

18. Jenkins Installation

5.2. Jenkins scheduler for Team for Capella installation guide

The documentation of Team for Capella presents many applications (Backups, diagnostics...) that can be scheduled with Jenkins in order to have a centralized platform to manage your shared projects.

  1. Jenkins scheduler for Team for Capella installation guide
    1. Download and install Jenkins
      1. Windows
      2. Linux
      3. End of the installation
    2. Install Jenkins plugins and jobs required for Team for Capella
      1. Automatic installation
      2. Manual installation
    3. Miscellaneous settings
      1. Executors
      2. Locale
      3. Default view
      4. Display Job Description
      5. Change the Port Used by Jenkins
        1. Windows
        2. Linux
      6. Set specific folders for Jenkins
        1. Windows
        2. Linux
    4. Updates
    5. Uninstall Jenkins

Download and install Jenkins

It is recommended to install a 2.375.x LTS release. Team for Capella 6.1.0 has been tested with Jenkins 2.375.3 LTS release.

If you choose to deploy a more recent version, we strongly recommend to use a release from the LTS (Long Term Support) stable releases stream available at Jenkins.io.

The default Jenkins port is 8080, but it is recommended to set the port to 8036 (in the previous Team for Capella installation, the embedded Jenkins was deployed on port 8036). Otherwise, there will be a conflict with the REST admin server, whose default port is 8080.

The port can be chosen in the Jenkins installation wizard. The following documentation will often reference the port 8036.

Windows

The Jenkins 2.375.3 LTS Windows installer can be downloaded from this link.

If you choose to deploy a more recent version, we strongly recommend to use a release from the LTS (Long Term Support) stable releases stream available at Jenkins.io.

Once downloaded, proceed to the installation.
It is recommended to install the Jenkins service (automatic loading on restart) and the suggested plugins.

Linux

The Jenkins 2.375.3 LTS packages for Linux can be downloaded from the LTS Releases package repository corresponding to the targeted distribution; see this link.

The scheduler has been tested on RedHat and Debian based distributions. The Jenkins installation instructions are available at Installing Jenkins: Linux

The Server and Importer applications require a display to be executed properly. An Xvnc server needs to be installed on the Linux server.

On Debian based distributions, you can install either tigerVNC or TightVNC:

sudo apt install tightvncserver
sudo apt install tigervnc-standalone-server

On RedHat based distributions:

dnf install tigervnc-server

In addition, make sure that the Xvnc Jenkins plugin is installed in Jenkins (it is installed by install-TeamForCapellaAppsOnJenkins.sh).

Note: Make sure that the jenkins user has read, write and execution permission on the TeamForCapella root folder.

End of the installation

At the end of the installation, your web browser should be displaying Jenkins.

Install Jenkins plugins and jobs required for Team for Capella

Automatic installation

Once Jenkins is installed, you can run our installation script that will install all the jobs allowing the Jenkins scheduler to manage the different Team for Capella applications. This script also downloads all the Jenkins plugins required for the different jobs.

In your Team for Capella installation folder, go to the tools/resources/scheduler folder. In this folder, you will find a script install-TeamForCapellaAppsOnJenkins.bat (or install-TeamForCapellaAppsOnJenkins.sh for Linux), edit this file in a text editor.

Not only does it contain all the required commands to download and install the plugins, but there are also some parameters for accessing Jenkins that you need to fill in. These parameters are:

As documented in https://www.jenkins.io/doc/book/managing/cli/, you can get your API token from the /me/configure page of your Jenkins. The script will automatically download the Jenkins CLI client and use it to install the plugins. Then it will create all the Team for Capella jobs and sort them into different views. Finally, once the script has finished, you only need to restart Jenkins. The simplest way is to use the /restart page of your Jenkins. On Windows, if you have installed Jenkins as a service, you can also restart it from the system Services window.

The dashboard will present all the Team for Capella applications.

Note that the plugin versions were chosen at the time of the release of the Team for Capella version you are working on. Once the script has executed, it is recommended to keep Jenkins up to date and to check for new updates of the installed plugins. Go to Manage Jenkins > Manage Plugins. On the Updates tab, select all plugins and then click on Download now and install after restart.

These jobs execute Team for Capella applications, therefore Jenkins requires a global environment variable referencing the location of your Team for Capella installation:

  • Go to Manage Jenkins > Configure System and scroll down to the Global properties section.
  • Check Environment variables and add a new one named TEAMFORCAPELLA_APP_HOME with the path to your Team for Capella installation folder as the value (it is the top folder that contains the subfolders capella, tools, ...).
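For example (the path is hypothetical and must point to your own installation folder):

TEAMFORCAPELLA_APP_HOME=C:\TeamForCapella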

Note that the development team is working on improving the installation script to add this variable automatically, but some Jenkins APIs have been removed for security reasons, as this was seen as code injection.

Additional configuration steps are recommended, see Executors, Locale, Default view and Display Job Description in miscellaneous settings section.

Restart Jenkins or its service after this configuration phase.

Manual installation

If you do not wish to install the Team for Capella applications with the script, you can still proceed manually.
The first step is to install the required plugins. In your Team for Capella installation folder, go to the tools/resources/scheduler folder, you will find two files with names starting with RequiredPlugins.

They contain the same list of plugins: one lists them by name, the other lists them by the URL of their .hpi file.
You need to install all of them. Go to Manage Jenkins > Manage Plugins to install them from the plugin manager.
Then restart Jenkins.

Now that the required plugins have been installed, the Team for Capella jobs can be deployed as well:

Restart Jenkins and now the dashboard will present all the Team for Capella applications.

These jobs execute Team for Capella applications, therefore Jenkins requires a global environment variable referencing the location of your Team for Capella installation:

  • Go to Manage Jenkins > Configure System and scroll down to the Global properties section.
  • Check Environment variables and add a new one named TEAMFORCAPELLA_APP_HOME with the path to your Team for Capella installation folder as the value (it is the top folder that contains the subfolders capella, tools, ...).

Finally, as there are many jobs, it will be easier to manage them by grouping these applications into tabs:

As an example, you can order your tabs as follows:

Additional configuration steps are recommended, see Executors, Locale, Default view and Display Job Description in miscellaneous settings section.

Miscellaneous settings

Executors

Locale

Default view

Display Job Description

Change the Port Used by Jenkins

Windows

Go to the directory where you installed Jenkins (by default, it’s under Program Files/Jenkins), edit jenkins.xml, then update the value of --httpPort in the <arguments> tag of the service definition:

<executable>java</executable>
<arguments> -some -arguments --httpPort=8036 -some -other -arguments</arguments>

Finally, go to Windows service, and restart the Jenkins service (or restart the Jenkins server if you launched it manually).

Change the name and id of the Jenkins service

Go to the directory where you installed Jenkins (by default, it’s under Program Files/Jenkins), edit jenkins.xml, then update the value of the <id> and <name> tags of the service definition:

  <id>TeamForCapellaScheduler</id>
  <name>Team For Capella Scheduler</name>

Open a Command Prompt as administrator in this folder and execute the following commands

  sc stop jenkins
  sc delete jenkins
  jenkins.exe install
  jenkins.exe start

Finally, go to the Windows services, and check that the Jenkins service is now registered with its new id and name.

Linux

The configuration file after a standard installation is located in:

By default, the port is 8080:

HTTP_PORT=8080
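To use the port recommended earlier in this guide, set for example:

HTTP_PORT=8036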

The service has to be restarted after the port modification:

systemctl restart jenkins

Set specific folders for Jenkins

Windows

It is possible to force Jenkins to use some specific folders. Go to the directory where you installed Jenkins (by default, it’s under Program Files/Jenkins), edit jenkins.xml, then complete the <arguments> tag of the service definition:
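A minimal hedged sketch (the path is hypothetical): one common way is to pass the Jenkins home directory as a Java system property among the existing arguments, before the -jar option:

<arguments> -DJENKINS_HOME=D:/Jenkins/jenkins_home -jar jenkins.war --httpPort=8036</arguments>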

Finally, go to Windows service, and restart the Jenkins service (or restart the Jenkins server if you launched it manually).

Linux

Open the Jenkins configuration file (see the previous Change the Port Used by Jenkins paragraph for the configuration file location).

Updates

It is recommended to check for updates. In the top-right area, Jenkins shows notifications if there are updates or issues identified. Furthermore, when you select the Manage Jenkins menu, the top area will present updates or corrections that can be applied to Jenkins or its plugins. Depending on their importance, they are presented in different colors (red > yellow > blue). Most of the time these are notifications about new updates, but in any case it is good practice to check this page once in a while and follow what is presented.

Uninstall Jenkins

The Jenkins service can be stopped and deleted using the following commands in a Windows Command Prompt:

  sc stop jenkins
  sc delete jenkins

The id of the service is jenkins by default but you might have changed it as described in Change the name and id of the Jenkins service section.

Jenkins can be completely removed from your system with the use of its Windows Installer.

10. Server Configuration

5.3. Server Configuration

In this document you will discover how to manage a Server supporting Collaborative Modeling features.

  1. Server Configuration
    1. Cdo-server.xml File
    2. Authenticated Configuration
    3. User Profiles Configuration
    4. Not Authenticated Configuration
    5. Activate LDAP authentication
      1. Activate LDAP authenticator
      2. Configure LDAP authenticator
      3. Configure LDAP with Active Directory
      4. Configure LDAP with a manager
        1. Example of LDAP configuration with a manager
        2. Example of LDAP configuration with a manager and Active Directory
      5. Use a self-signed or non CA-authentified certificate
    6. Activate OpenID Connect authentication
      1. Configure Team for Capella server
        1. Activate OpenID Connect authenticator
        2. Configure OpenID Connect authenticator
        3. Configure embedded web server for OpenID Connect authentication
      2. Configure the application on the OpenID Connect platform
        1. Configure OpenID Connect authenticator with MS Azure AD
    7. Audit mode
    8. Activate WebSocket connection
      1. Client configuration
      2. Tools configuration
      3. Server configuration
      4. Optional configuration
    9. Activate SSL connection
      1. Client configuration
      2. Tools configuration
      3. Server configuration
    10. Managing certificate
      1. Generate a keystore
      2. Sign your certificate from a certificate authority(optional)
      3. Export certificate from a keystore
      4. Create a truststore from a certificate
    11. Team for Capella Server: the REST Admin Server
      1. REST Admin API
      2. Credentials_Management
    12. Team for Capella Server Installation Types
      1. Quick Installation (1 Server, 1 Repository)
      2. Configuration with 1 Server, n Repositories, N Models
        1. Introduction
        2. How to Add a New Repository
      3. Configuration with N Servers, N Repositories, N Models (1 Scheduler)
        1. Introduction
        2. How to Add a New Server
    13. How to stop the server
    14. How to reset the server
    15. How to Improve Export Performances
    16. Reinitialize database
      1. Restore database from database backup
        1. How to manually restore a DB backup
      2. Restore database from projects backup
    17. How to externalize configuration in a specific folder
    18. How to Change Ports Values
    19. How to Increase the Size of Description and Documentation Columns

Cdo-server.xml File

The main configuration file used by the Team for Capella Server is the cdo-server.xml file.

The Team for Capella Server bundle comes as a standard Eclipse application. In the installed package, locate the Configuration folder and open it.

In this folder, locate the cdo-server.xml file and open it.

Here is a commented extract of the cdo-server.xml file delivered with Team for Capella:
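The exact content depends on your installation; the following is only a simplified sketch, reconstructed from the elements discussed in this chapter (repository name, ports, access control mode and paths may differ in your file):

<!-- connection type and port -->
<acceptor type="tcp" listenAddr="0.0.0.0" port="2036"/>
<!-- repository name -->
<repository name="repoCapella">
	<!-- audit mode -->
	<property name="supportingAudits" value="true"/>
	<!-- access control: userManager, securityManager, authenticator or none -->
	<userManager type="auth" description="usermanager-config.properties"/>
	<store type="db">
		<mappingStrategy type="horizontalAuditing">
			<property name="withRanges" value="false"/>
		</mappingStrategy>
		<!-- database adapter and location -->
		<dbAdapter name="h2-capella"/>
		<dataSource class="org.h2.jdbcx.JdbcDataSource" uRL="jdbc:h2:db/h2/capella;LOG=1;CACHE_SIZE=65536;LOCK_MODE=0;UNDO_LOG=0"/>
	</store>
</repository>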

Highlighted elements can be changed to customize the Team for Capella Server.

Note that many repository configuration options cannot be changed anymore once the repository has been started for the first time or once some data have been exported to the server. If you need to change something in this configuration afterwards, you should first delete the database files (files with the db extension). A typical example is changing the name of the repository. The only elements you can still change afterwards are the type of access control (userManager, securityManager, ldap or none) and the acceptor.

Authenticated Configuration

To activate the authenticated server, you have to set the line below in the cdo-server.xml file before the <store> tag.
<userManager type="auth" description="usermanager-config.properties"/>

usermanager-config.properties is the path to the authenticated server configuration file. The path can be absolute or relative to the cdo-server.xml file.

users.file.path=users.properties
# ldap configuration
auth.type=ldap
auth.ldap.url=ldap://127.0.0.1:10389
auth.ldap.dn.pattern=cn={user},ou=people,o=sevenSeas
auth.ldap.filter=
auth.ldap.tls.enabled=false
auth.ldap.truststore.path=
auth.ldap.truststore.passphrase=
# openID Connect configuration
#auth.type=openidconnect
#auth.openIDConnect.discoveryURL=https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration
#auth.openIDConnect.tenant=organizations
#auth.openIDConnect.clientID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
#auth.openIDConnect.technicalUsers.file.path=technicalUsers.properties

The file users.properties contains entries whose keys are the logins and whose values are the passwords. Note that spaces must be escaped with \ otherwise they will be considered as key-value separators.
Examples:

admin=admin
John\ Doe=secret

Note:
This is the default mode: when Team for Capella is installed, the server is set up with a file-based authentication configuration.
You must not escape spaces in the login field required to connect to remote model (see Connect to remote model section).
The same applies when you create a new user through the "security model" (see Access Control section).

As access control modes are exclusive, other modes must be commented in the cdo-server.xml file:
<!-- <securityManager type="collab" .../> -->
<!-- <authenticator type="ldap" .../> -->

The server must be restarted to take into account the modifications done in the cdo-server.xml file.

On the client side, use the User Management view available in all Team for Capella clients. When using this view, the server does not need to be restarted after changes in the user accounts.

User Profiles Configuration

To activate the user profile server, you have to set the line below in the cdo-server.xml file before the <store> tag. The user profiles model is created at the first server launch.
Once activated, you should see a confirmation when the Team for Capella Server starts:

<securityManager type="collab" realmPath="userprofile-config.properties" />

userprofile-config.properties is a path to the user profile configuration file. The path can be absolute or relative to the cdo-server.xml file.

realm.users.path=users.userprofile
administrators.file.path=administrator.properties
# ldap configuration
auth.type=ldap
auth.ldap.url=ldap://127.0.0.1:10389
auth.ldap.dn.pattern=cn={user},ou=people,o=sevenSeas
auth.ldap.filter=
auth.ldap.tls.enabled=false
auth.ldap.truststore.path=
auth.ldap.truststore.passphrase=
# openID Connect configuration
#auth.type=openidconnect
#auth.openIDConnect.discoveryURL=https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration
#auth.openIDConnect.tenant=organizations
#auth.openIDConnect.clientID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
#auth.openIDConnect.technicalUsers.file.path=technicalUsers.properties

Be aware that once the server has been launched with the User Profile mode enabled, modifications on this file will have no effect. If you want to manage the list of administrators, please have a look at the User Profiles documentation and especially at the Promote a User to Super User section if you want to promote an existing user to administrator. Alternatively, you can also make backups (shared projects and User Profiles model), stop the server, delete the database, modify the administrators file, restart the server and re-export your data.

As access control modes are exclusive, other modes must be commented in the cdo-server.xml file:
<!-- <userManager type="auth" .../> -->
<!-- <authenticator type="ldap" .../> -->

The server must be restarted to take into account the modifications done in the cdo-server.xml file.

Not Authenticated Configuration

This configuration allows to work with a CDO server without authenticating from a client.
Just comment securityManager, userManager and authenticator tags in the cdo-server.xml file:
<!-- <securityManager type="collab" .../> -->
<!-- <userManager type="auth" .../> -->
<!-- <authenticator type="ldap" .../> -->

The server must be restarted to take into account the modifications done in the cdo-server.xml file.

Activate LDAP authentication

Activate LDAP authenticator

You can activate LDAP authentication in three different ways:

The server must be restarted to take into account the modifications done in the cdo-server.xml file.

These ways are mutually exclusive.

To activate LDAP authentication, as exclusive authenticator, the following authenticator tag must be added to the repository configuration in cdo-server.xml.

<authenticator type="ldap" description="ldap-config.properties" />

ldap-config.properties is a path to a properties file containing the LDAP authenticator configuration. This path may be relative to the CDO server configuration file or absolute.

As access control modes are exclusive, other modes must be commented in the cdo-server.xml file:
<!-- <userManager type="auth" .../> -->
<!-- <securityManager type="collab" .../> -->

Configure LDAP authenticator

The LDAP authenticator’s configuration file is a properties file whose content could look like the following one:

ldap.url=ldap://127.0.0.1:10389
#ldap.url=ldaps://127.0.0.1:10389
ldap.dn.pattern=cn={user},ou=people,o=sevenSeas
ldap.filter=
ldap.tls.enabled=true
ldap.truststore.path=trusted.ks
ldap.truststore.passphrase=secret

where :

When the LDAP authenticator is used in User Profile or Authenticated configurations, those property keys must be prefixed with auth. and auth.type=ldap is needed to activate the LDAP authentication.

Important!

Unlike the other two configuration ways (with «user profile server» and «authenticated server»), in the «exclusive authenticator configuration», the properties are not prefixed by auth.

If the LDAP certificate has been signed by an official Certificate Authority it is not required to set the trust store path as the JVM already trusts the CA.

If you need to generate a self-signed certificate or need to create a trust store from an existing certificate please refer to the following section.

Configure LDAP with Active Directory

An LDAP using Active Directory provides a field sAMAccountName that is usually used as a key (like the «cn» field). Users can be identified using this field associated with a domain name after an «@» as separator. This leads to this pattern: sAMAccountName@DomainName. As the user identifies himself by providing only his identifier, not the domain name, the corresponding pattern is: {user}@DomainName.
For instance, if the domain name is «MyCompanyDomain» then the LDAP pattern will be: auth.ldap.dn.pattern={user}@MyCompanyDomain

Configure LDAP with a manager

Some LDAP servers do not support anonymous binding (your LDAP server may not even allow a query without authentication). In that case, Capella has to first authenticate itself against the LDAP server, and it does so by sending the «manager» DN and password. Using this specific connection, the user credentials (given by the user in the authentication popup) can be looked up in the LDAP tree.

These manager credentials need to be provided in the properties file as they will not be asked from the user. They are provided with the following properties:

The search for the user himself in the LDAP is provided with the following properties:

Example of LDAP configuration with a manager

# ldap configuration
ldap.url=ldap://ldap.myCompany.com:389
ldap.user.search.base=dc=myCompany,dc=com
ldap.user.search.filter=(&(objectClass=account)(cn={user}))

# The manager credentials are useful for LDAP requiring authentication to run search filters
ldap.manager.dn=uid=manager,ou=People,dc=myCompany,dc=com
ldap.manager.password=DerfOcDoocs6

ldap.tls.enabled=false

Example of LDAP configuration with a manager and Active Directory

# ldap configuration
ldap.url=ldap://ldap.myCompany.com:389
ldap.user.search.base=dc=myCompany,dc=com
ldap.user.search.filter=(&(objectClass=organizationalPerson)(name={user}))

# The manager credentials are useful for LDAP requiring authentication to run search filters
ldap.manager.dn=manager@myCompany.com
ldap.manager.password=managerPassword

ldap.tls.enabled=false

Use a self-signed or non CA-authentified certificate

In case the certificate is self-signed or the CA used in your certificate is not managed by the JVM, you will need to generate a truststore and reference this truststore from the configuration file.

Follow the Export and TrustStore creation steps to create the trust store.

Activate OpenID Connect authentication

With a server set up with OpenID Connect authentication, the user will be able to authenticate using the UI provided by the OpenID Connect platform. Instead of having the default dialog where the user enters his login and password, the embedded T4C web server will display a popup web browser interacting with the OpenID Connect platform.

For instance, for a server set up with MS Azure AD, here is the user experience when the user clicks on the «Test Connection» button of the Connection wizard. A web browser is displayed and presents a sign-in interface provided by MS Azure AD.

Then, the user follows the authentication process through the different web pages provided by the OpenID Connect platform depending on how it is configured.

Finally, the user is presented with a web page indicating whether the authentication was successful or not. The user can close the browser and continue as usual. On this page, a «Logout» hyperlink allows the current user to log out; the end user is then redirected to the sign-in page and may sign in with another login.

Technical views such as CDO views or Administration views still authenticate with basic login/password credentials. See Configure OpenID Connect authenticator to know how to configure these credentials.

Configure Team for Capella server

Activate OpenID Connect authenticator

You can activate the OpenID Connect authentication:

Note: For the combination with both «user profile server» and «authenticated server», the user name to configure in Team For Capella must correspond to the attribute "Name" of the user in the OpenID Connect authentication platform.

The server must be restarted to take into account the modifications done in the cdo-server.xml file.

To activate the OpenID Connect authentication, as exclusive authenticator, the following authenticator tag must be added to the repository configuration in cdo-server.xml. Make sure the other tags are commented.

<authenticator type="openidconnect" description="openid-config.properties" />

openid-config.properties is a path to a properties file containing the OpenID Connect authenticator configuration. This path may be relative to the CDO server configuration file or absolute.

As access control modes are exclusive, other modes must be commented in the cdo-server.xml file:
<!-- <userManager type="auth" .../> -->
<!-- <securityManager type="collab" .../> -->

Finally, the OpenID Connect authentication requires a web server in order to securely communicate with the OpenID Connect platform. If the CDO server is configured with the OpenID Connect authentication mode, the embedded web server must therefore be activated for this secure communication.

Configure OpenID Connect authenticator

<installation folder>/server/configuration/openid-config.properties is the OpenID Connect authenticator’s configuration file. It is a properties file whose content could look like the following one:

openIDConnect.discoveryURL=https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration
openIDConnect.tenant=organizations
openIDConnect.clientID=79bce8de-7542-4b90-bf18-XXXXXXXXXXXX
openIDConnect.technicalUsers.file.path=technicalUsers.properties

where :

Configure embedded web server for OpenID Connect authentication

As presented before, the OpenID Connect Authentication requires a web server in order to authenticate securely.
This is the same web server as the one providing the web services (REST API) for repository management. See the dedicated section to learn how to install and activate this experimental feature.

To activate the OpenID Connect support, you then need to set the value of the admin.server.jetty.auth.openidconnect.enabled property to true in <installation folder>/server/configuration/fr.obeo.dsl.viewpoint.collab.server.admin/admin-server.properties.
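That is, add or update the following line in that file:

admin.server.jetty.auth.openidconnect.enabled=true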

Note that if the Team for Capella server and all the Team for Capella clients are not installed on the same machine, you will need to configure the web server in https mode. Indeed, this is a security requirement from the OpenID Connect platform.

To configure the admin server with https, do the following changes in <installation folder>/server/configuration/fr.obeo.dsl.viewpoint.collab.server.admin/admin-server.properties

# Jetty configuration
admin.server.jetty.https.enabled=true

# The following lines are needed if the admin.server.jetty.https.enabled option is set to true.
admin.server.jetty.ssl.host=0.0.0.0
admin.server.jetty.ssl.port=8443
admin.server.jetty.ssl.keystore.path=${currentDir}/<keystoreFile>
admin.server.jetty.ssl.keystore.passphrase=<password>

Configure the application on the OpenID Connect platform

On the OpenID Connect platform, there is one property that needs to be set properly: the redirect URI. Indeed, the embedded web server expects the redirect URI to be the page /auth/redirect.
This means that the redirect URI must be set to
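For example, assuming the embedded web server is reachable at teamforcapella.example.com and runs in https mode on port 8443 (hypothetical values), the redirect URI would be:

https://teamforcapella.example.com:8443/auth/redirect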

Configure OpenID Connect authenticator with MS Azure AD

If your OpenID Connect platform is MS Azure AD, here is a quick way to find how to configure the OpenID Connect authenticator in Team for Capella.

First, the openIDConnect.discoveryURL is provided by the OpenID Connect platform itself, not by your application. For MS Azure AD, this protocol is presented in the online documentation. On the same page, there is a list of the different possible values for openIDConnect.tenant.

For the openIDConnect.clientID, you will need to look for it in the application you created in MS Azure AD in order to use it for authentication from Team for Capella. From the MS Azure AD home page, you can select App registration. Select your application for Team for Capella. From the overview, you can see the Application ID.

Note that from this menu, you must set the redirect URI from the menu Authentication. In Platform configuration add a Web platform and set the redirect URI.

The last property, openIDConnect.domainURL, depends on the location/address of the web server and is not linked with the OpenID Connect configuration.

On your application, do not forget to add the users that will be able to authenticate to the application:

It is also recommended to create a conditional access policy (Security/Conditional Access) so you can set a timeout for the session once users are authenticated. You can also define how users are granted access (for instance with multi-factor authentication).

Note that to be able to add conditional access policies, you need to disable the security defaults.

Note that the following options must be activated because the authentication uses the implicit grant

Audit mode

The Audit mode aims to configure the server so it keeps track of all versions of each object in the CDO Server database. It is required, for example, for comparing different versions of the model.
There are two different auditing configurations: Audit and Audit with ranges.

The Audit with ranges mode was the default mode between Team for Capella 1.3.0 and Team for Capella 5.0.0.

The Audit mode has been the default mode since Team for Capella 5.1.0 to improve user-side performance (export, export with override, semantic browser refresh, ...).

The difference between the two modes is in the storage of lists: when the with ranges variant is used, the database stores only the deltas between successive versions of a list. This implies loading all preceding revisions of a list to compute a given state, but in some situations it can slow the growth of the database. An analysis of the project can lead to a recommendation to switch to this mode.

When using the auditing modes, the size of the database might need to be monitored. If the database grows bigger than 4 GB and performance issues are encountered, it might need to be cleared: that is to say, importing the models from the server, clearing the database and then importing the models back into the new database. Be aware that after this operation it is no longer possible to compare new commits against the commits done before the clearance. Some benchmarks have been done: after 10 000 commits modifying semantic and graphical elements, this size has never been reached. In this context, model modification and save times increase slightly compared to a server that does not have audit mode enabled, but both operations still feel smooth for the user.

Be aware that it is not possible to switch between «Audit», «Audit with ranges» or non-«Audit» modes on a CDO server that holds models. The switch has to be done on an empty CDO server database.

In order to disable the Audit mode you have to change cdo-server.xml to:

<property name="supportingAudits" value="true"/>
<mappingStrategy type="horizontalNonAuditing"> 
<mappingStrategy type="horizontalNonAuditing">
	...
	<!-- property name="withRanges" value="false"/ -->
</mappingStrategy>

In order to (re-)activate the Audit mode you have to change cdo-server.xml to:

<property name="supportingAudits" value="true"/>
<mappingStrategy type="horizontalAuditing"> 
<mappingStrategy type="horizontalAuditing">
	...
	<property name="withRanges" value="false"/>
</mappingStrategy>

In order to activate the Audit with ranges mode you have to change cdo-server.xml to:

<property name="supportingAudits" value="true"/>
<mappingStrategy type="horizontalAuditing"> 
<mappingStrategy type="horizontalAuditing">
	...
	<property name="withRanges" value="false"/>
</mappingStrategy>

Activate WebSocket connection

It is possible to activate a WebSocket connection between the client and the CDO server.
Both client and server have to be configured accordingly.

Client configuration

On the client side, users will have to use the WS or WSS connection types depending on the configuration of the server.

The client side configuration will depend on the global deployment of the current server and the use of the WS and WSS connection types.

Then a user will have to use the following parameters to connect to the repository:

When the REST Admin server runs in HTTPS mode, it will be configured with a certificate.
If this certificate is self-signed or untrusted, the following system properties can be added in the client capella.ini file in order to configure the security checks:

Those properties are used to configure Jetty’s org.eclipse.jetty.util.ssl.SslContextFactory.

Additional properties might be needed, see server configuration section.

Tools configuration

When WebSocket transport is activated on the server, the importer and other tools must be configured accordingly to work properly.
The same configuration as for the client needs to be done in the -vmargs section of the tools scripts (importer.bat, maintenance.bat, exporter.bat, ...).

Server configuration

The REST Admin Server and the CDO Server need to be configured to enable the Net4j WebSocket-based transport:

The move from a WebSocket-based transport to a secured WebSocket-based transport can be done through the Jetty configuration by enabling HTTPS, or with the use of an HTTPS reverse proxy server (Nginx or Apache for example).

Optional configuration

Here is a list of optional settings which will impact both server and clients configurations:

Activate SSL connection

It is possible to activate an SSL connection between the client and the CDO server.
Both client and server have to be configured accordingly.
On the server side a keystore has to be set up and, on the client side, a truststore containing the keystore’s public key has to be set up. See the Managing certificate chapter to generate the keystore and truststore.

Client configuration

Add the following lines in the client capella.ini file:

-Dorg.eclipse.net4j.tcp.ssl.passphrase=secret
-Dorg.eclipse.net4j.tcp.ssl.trust=file:///<trusted.ks absolute path>

Tools configuration

When SSL is activated on the server, the importer and other tools must be configured accordingly to work properly.
Add the following lines in the script files (importer.bat, maintenance.bat, exporter.bat):

-Dorg.eclipse.net4j.tcp.ssl.passphrase=secret ^
-Dorg.eclipse.net4j.tcp.ssl.trust=file:///<trusted.ks absolute path> ^

Server configuration

In the cdo-server.xml configuration file, the acceptor has to be configured to accept SSL connections:
<acceptor type="ssl" listenAddr="0.0.0.0" port="2036"/>
Set the acceptor type to ssl.

Add the following lines in the server ini file:

-Dorg.eclipse.net4j.tcp.ssl.passphrase=secret
-Dorg.eclipse.net4j.tcp.ssl.key=file:///<server.ks absolute path>

Managing certificate

Keytool can be used to create and manage certificates and stores. This tool is provided with the JDK and its documentation is available here.

Generate a keystore

The keystore contains the certificate information and the private and public keys. To generate it, use the following command:

keytool -genkey -ext SAN=IP:<server IP> -keyalg "RSA" -dname o=sevenSeas -alias keystore_alias -keystore server.ks -storepass secret -validity 730 -keysize 4096

-ext: for example, <server IP> may be the LDAP server's IP for an SSL connection between the CDO server and the LDAP server, or the CDO server's IP for an SSL connection between the client and the CDO server.
-dname: optional. It initializes the metadata of your organization.

Sign your certificate from a certificate authority(optional)

This step is optional; you may skip it and proceed directly with Export certificate from a keystore.
For this step, you have to give your certificate signing request (server.csr) to your certificate authority (CA), which in return will provide a signed certificate (server.crt).

keytool -certreq  -alias keystore_alias -file server.csr -keystore "server.ks"

The two steps below allow importing the root certificate and the intermediate certificate.

keytool -import -alias Root_CA -keystore server.ks -file Root_CA.cer
keytool -import -alias Server_CA -keystore server.ks -file Server_CA.cer

Then, import the signed certificate into the server.ks keystore.

keytool -import -alias keystore-signed -keystore server.ks -file server.crt

Export certificate from a keystore

To export a certificate from an existing keystore the following command can be used :

keytool -export -keystore server.ks -alias keystore_alias -file server.cer

This command asks for the store’s passphrase and then creates a server.cer file containing the certificate previously created.

Create a truststore from a certificate

It is advised not to export the whole keystore to clients. Creating a truststore containing only the certificate and public key is recommended. This truststore is intended to be deployed on the clients which need to connect to the server.

keytool -import -file server.cer -alias keystore_alias -keystore trusted.ks -storepass secret

This command creates a new truststore in the file trusted.ks. This truststore contains the server’s public key; it can be copied to client machines and referenced via the truststore.path configuration key.

The truststore is protected with secret as a passphrase.

Team for Capella Server: the REST Admin Server

The Team For Capella server is composed of the CDO repositories server and an HTTP Jetty server.

By default, the Jetty admin server is automatically started with the CDO server on port 8080.
The admin server is used:

You can find more information in the file <TeamForCapella installation folder>/server/configuration/fr.obeo.dsl.viewpoint.collab.server.admin/admin-server.properties : it contains all the admin server configuration information.

REST Admin API

The REST Admin Server provides a whole set of services to manage the projects, the models and the users.
The documentation is available at the URL http(s)://<admin server IP>:<admin server port>/doc

A Swagger documentation is available at the URL http(s)://<admin server IP>:<admin server port>/openapi. It can be enabled or disabled with the admin.server.jetty.servlets.admin.docandopenapi.enabled property.
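As an illustration, the OpenAPI description can be fetched with any HTTP client; a hedged example with curl (host, port and credentials are hypothetical, and the admin server is assumed here to accept basic authentication with the admin user and its token):

curl -u admin:<token> http://localhost:8080/openapi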

Credentials_Management

The first time the server is launched, a default «admin» user and its associated default token are created in the Eclipse secure-storage of the user that started the CDO server.
The «admin» credentials are stored in a dedicated node used by the server. The token is hashed and encrypted.
A secret.txt file, containing the token, is created in the same folder as the admin-server.properties file. It can be used in third-party applications to authenticate with the admin server. Do not forget to remove this file as soon as you can.

Moreover, the admin credentials are also added in the secure storage for the application needs (importer, exporter, etc) in a dedicated node. The credentials are encrypted.

This way, once the server has been started for the first time, there is no additional step: the applications can automatically be used, authenticated with the admin server as the «admin» user.

Nevertheless, it is possible to manage the user and the user token with the Credentials application.

By default, the secure storage is created or retrieved from the home of the system user currently executing the application:

It is also possible to change the location of the secure storage with the use of the -eclipse.keyring program argument in both TeamForCapella/server/server.ini and TeamForCapella/capella/capella.ini. The secure storage must be shared between server-side client, tools and server in order to be able to use it from the Scheduler jobs. For example to use a fixed secure storage located in TeamforCapella/.eclipse/secure_storage:

-eclipse.keyring
../.eclipse/secure_storage

Team for Capella Server Installation Types

Quick Installation (1 Server, 1 Repository)

Installation process and details are described in the Installation Guide for Team for Capella.

Moreover, do not install any viewpoint except PROPERTIES KEY/VALUES-typed viewpoints. Ask viewpoint providers whether their viewpoint is compatible with Team for Capella.

If the viewpoint is compatible with Team for Capella, deploy the viewpoint on every Team for Capella client and on the importer used by the server. Clean and export the models again after a viewpoint installation.

Configuration with 1 Server, n Repositories, N Models

Introduction

This is the recommended configuration to work with several projects.

How to Add a New Repository

Hypothesis: the repository is added to a freshly installed version.

Add a new repository to the Team for Capella Server:

Note the 2 default repositories (content is collapsed in this screenshot).
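As an illustration, adding a repository essentially consists in declaring an additional <repository> element in cdo-server.xml with its own name and database. A simplified, hedged sketch (the repository name and database path are hypothetical; copy the full template from an existing repository entry):

<repository name="repoProjectB">
	<property name="supportingAudits" value="true"/>
	<userManager type="auth" description="usermanager-config.properties"/>
	<store type="db">
		<mappingStrategy type="horizontalAuditing">
			<property name="withRanges" value="false"/>
		</mappingStrategy>
		<dbAdapter name="h2-capella"/>
		<dataSource class="org.h2.jdbcx.JdbcDataSource" uRL="jdbc:h2:db/h2/repoProjectB;LOG=1;CACHE_SIZE=65536;LOCK_MODE=0;UNDO_LOG=0"/>
	</store>
</repository>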

Notes:

Add a new job to Team for Capella Scheduler (Jenkins) to manage the new repository:

Check that the configuration is working: start the Team for Capella Server using the "Server – Start" job and open the TeamForCapella\server\ folder.

db and workspace folders should have been created:

Configuration with N Servers, N Repositories, N Models (1 Scheduler)

Introduction

How to Add a New Server

Hypothesis: the server is added to a freshly installed version; by default it will only contain the default repository "repoCapella".

  1. Create a new Team for Capella server instance,
    1. Copy the TeamForCapella\server folder to newServer (for example),
    2. Change the CDO server port in TeamForCapella\newServer\configuration\cdo-server.xml (for example 2037), as shown in the sketch after this list,
    3. Change the HTTP server port in TeamForCapella\newServer\configuration\admin-server.properties (for example admin.server.jetty.port=8081), also shown in the sketch after this list,
    4. (deprecated, telnet only) Change the telnet server port in TeamForCapella\newServer\server.ini (for example -console 12037),
  2. Add new jobs to Team for Capella Scheduler (Jenkins),
    1. Launch Jenkins,
    2. Using a web browser, connect to "http://localhost:8036",
    3. Duplicate all the jobs you need (in Jenkins, use the «New item» button and fill in the «Copy from» field),
    4. For every job, in the build part of the job, add the -httpPort <admin server port> parameter to refer to the right instance of the admin server (for example -httpPort 8081),
    5. For the «Server - Start» job, in the build part of the job, change the path of the server,
    6. For the «Backup and restore» and «Diagnostic and repair» jobs, in the build part of the job, add the -port <cdo repository port> parameter to refer to the right instance of the CDO server (for example -port 2037),
    7. (deprecated telnet) For the «Backup and restore» and «Diagnostic and repair» jobs, in the build part of the job, add the -consolePort <telnet port> parameter to refer to the right instance of the CDO server (for example -consolePort 12037),
    8. (deprecated telnet) In the build part of the job, if the job uses a command script, add the <telnet port> parameter.
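As referenced in steps 1.2 and 1.3 above, here is a hedged sketch of the two port changes for the new server instance (using the example values 2037 and 8081; adjust them to your own setup):

In TeamForCapella\newServer\configuration\cdo-server.xml:

<acceptor type="tcp" listenAddr="0.0.0.0" port="2037"/>

In TeamForCapella\newServer\configuration\admin-server.properties:

admin.server.jetty.port=8081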

How to stop the server

The main methods to close the server are the following:

To avoid database corruption, the server must never be closed in the following ways:
- Using the “Abort” button on the Server – Start job of the Scheduler,
- Especially on Windows 2008 Server 64 bits platforms:

- Closing the command prompt running the server (if any) by clicking on the Windows close button,
- Letting the server be closed when the user logs out or the computer stops (to avoid this problem, it is advised to launch the Scheduler as a service so the server is not closed on log out).

How to reset the server

To restart with a clean server or after a database corruption, it can be useful to reset the server:

Note that it is also possible to restore the database from the result artifacts of the Database – Backup job, refer to the Capella client Help Contents in chapter Team for Capella Guide > System Administrator Guide > Server Configuration > Reinitialize database.

How to Improve Export Performances

The following line is used to configure the database (in cdo-server.xml):

To improve performance when exporting big models to the repository, change LOG=1 to LOG=0. When the exports are done, return to the original value (LOG=1 is useful to avoid database corruption when the server process is killed).
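For reference, the dataSource line in cdo-server.xml looks like the following sketch (the database path may differ on your installation); here LOG is set to 0 for the export and should be set back to 1 afterwards:

<dataSource class="org.h2.jdbcx.JdbcDataSource" uRL="jdbc:h2:db/h2/capella;LOG=0;CACHE_SIZE=65536;LOCK_MODE=0;UNDO_LOG=0"/>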

Reinitialize database

You have three ways to reinitialize data in a database.

Restore database from database backup

The use of the Database – Restore job should be preferred, but it is still possible to do the same operation manually.

This operation should be used to restore a database from the file generated by the Database – Backup job (this file has a pattern like: repoCapella.20151105.171109-sql.zip).

The database will be restored in exactly the same state as it was when the backup was performed:

How to manually restore a DB backup

  1. Edit "server.ini" file
  2. Change the vmarg property collab.db.restore to true as follows: -Dcollab.db.restore=true
  3. Specify the backup file location with the -Dcollab.db.restoreFolder parameter (default value is db.restore in the server)
  4. Put the .zip backup file in the specified directory. Example with db.restore:
  1. Stop the server using the Server – Stop job
  2. Start the server using the Server – Start job
  3. If everything went well, you will get a log like the following one in the server’s console:
!ENTRY com.thalesgroup.mde.melody.collab.server.repository.h2 1 0 2020-04-22 18:39:32.409
!MESSAGE Restore repoCapella processing starts.

!ENTRY com.thalesgroup.mde.melody.collab.server.repository.h2 1 0 2020-04-22 18:39:33.977
!MESSAGE Restore repoCapella restored database from : C:\TeamForCapella\server\..\scheduler\jenkins_home\jobs\Database - Backup\builds\7\archive\repoCapella.20200422.182742-sql.zip

!ENTRY com.thalesgroup.mde.melody.collab.server.repository.h2 1 0 2020-04-22 18:39:33.980
!MESSAGE Restore repoCapella processing ends. The file has been moved to C:\TeamForCapella\server\..\scheduler\jenkins_home\jobs\Database - Backup\builds\7\archive\repoCapella.20200422.182742-sql.zip.restored

!ENTRY org.eclipse.emf.cdo.server.db 2 0 2020-04-22 18:39:35.537
!MESSAGE Detected crash of repository repoCapella

!ENTRY org.eclipse.emf.cdo.server.db 1 0 2020-04-22 18:39:35.614
!MESSAGE Repaired crash of repository repoCapella: lastObjectID=OID248, nextLocalObjectID=OID9223372036854775807, lastBranchID=0, lastCommitTime=1 586 948 133 861, lastNonLocalCommitTime=1 586 948 133 86

The .zip backup file will be suffixed with .restored, or with .error if the restore failed. This behavior can be disabled with the use of -Dcollab.db.restore.rename.source.file=false.

NOTE: The restore process only supports textual script backups whose name ends with -sql.zip.

If you want to remove restored locking sessions from the database, use the Durable Locks Management view (see the Server Administration part of this documentation).

Restore database from projects backup

This way gives more control over the restoration, as you may delete the repository and restore it project by project.
To restore projects in a repository:

How to externalize configuration in a specific folder

Example:

server/server.exe -data C:/data/TeamForCapella/server/workspace

capella/importer.bat -data C:/data/TeamForCapella/server/importer-workspace

capella/command.bat -data C:/data/TeamForCapella/server/command-workspace

Example:

server/server.exe -configuration C:/data/TeamForCapella/server/configuration

tools/importer.bat -configuration C:/data/TeamForCapella/server/configuration

tools/command.bat -configuration C:/data/TeamForCapella/server/configuration

Example:

-vmargs -Dnet4j.config=C:/data/TeamForCapella/server/configuration/cdo-server.xml

Example:

Line 18 : <userManager type="auth" description="C:/data/TeamForCapella/server/usermanager-config.properties" />

Example:

Line 37 : <dataSource uRL="jdbc:h2:C:/data/TeamForCapella/server/db/h2/capella;LOG=0;CACHE_SIZE=65536;LOCK_MODE=0;UNDO_LOG=0" (…)

Update scheduler/conf/context.xml to change the JENKINS_HOME Environment attribute with the path of the jenkins_home folder:

Example:

-vmargs -Dcollab.db.backupFolder=C:/data/TeamForCapella/server/db.backup

-Dcollab.db.restoreFolder=C:/data/TeamForCapella/server/db.restore

To directly externalize all the previous files, you can edit the server.ini file.

Example: To externalize all files in the folder C:\data\TeamForCapella\server

1) Update server.ini

-console
-data
C:/data/TeamForCapella/server/workspace
-configuration
C:/data/TeamForCapella/server/configuration
-vmargs
-Dnet4j.config=C:/data/TeamForCapella/server/configuration
-Dcollab.db.backup=false
-Dcollab.db.restore=false
-Dcollab.db.backupFolder=C:/data/TeamForCapella/server/db.backup
-Dcollab.db.restoreFolder=C:/data/TeamForCapella/server/db.restore
-Dcollab.db.backupFolderMaxSize=1G
-Dcollab.db.backupFrequencyInSeconds=900
-Dosgi.requiredJavaVersion=11
-Xms128m
-Xmx2000m
-XX:PermSize=128m

How to Change Ports Values

See Server configuration section → Cdo-server.xml File

See Jenkins installation section → Change the Port Used by Jenkins.

See Team For Capella Web server section → Change the Port of the admin server

(deprecated telnet) Change telnet port

This is deprecated because by default telnet is not used anymore. It has been replaced by the admin server.

By convention we could use 12036 for a server that listens to the port 2036 (defined in cdo-server.xml), 12037 for the server that listens to 2037, 12038 for 2038 etc…

Ex: command.bat localhost 12036 capella_db backup

Ex: command.bat localhost 12036 close

Ex: importer.bat -consoleport 12036 -archivefolder

NOTE: If you have several jobs using the OSGI port value, you can create an environment variable to store it in a single place.

How to Increase the Size of Description and Documentation Columns

When very long texts are written in Description or Documentation fields, an error of the following type can occur when saving a remote project or exporting a local project to the server:

[ERROR] org.h2.jdbc.JdbcSQLException: Value too long for column DESCRIPTION VARCHAR

To avoid this problem, change the file server/configuration/cdo-server.xml to use:

<dbAdapter name="h2-capella" /> instead of <dbAdapter name="h2" />

The description and documentation fields will then be stored as CLOB instead of VARCHAR.

h2-capella is the default value in cdo-server.xml.

09. Server Administration

5.4. Server Administration

  1. Server Administration
    1. Administration Views
      1. Durable Locks Management View
        1. Activate the durable locking
        2. Use the View
        3. Additional information on Locking Sessions
        4. Remove Locking Sessions
      2. User Management View
    2. Administration Tools
      1. Repository maintenance application
      2. Job configuration
      3. REST Admin Server

Administration Views

The Team for Capella client comes with two views useful to perform some administrative tasks: the Durable Locks Management view and the User Management view. To access these features, you must install the Team for Capella - Administration Views feature from the Team for Capella update site.

After restarting your T4C client, go to Preferences > General > Capabilities to enable the Administration Views capability.

Durable Locks Management View

Important: The durable locking is deactivated by default since Team For Capella 1.1.4 and 1.2.1.

Activate the durable locking

The durable locking mechanism allows configuring the explicit locks manually taken by a user as persistent locks. If a user takes explicit locks and then terminates his connection to the remote model (by closing his shared project or exiting the Team for Capella client), his explicit locks are not released and he will retrieve them on his next connection to the repository.

The durable locking can be activated by a client by adding the following option in the plugin_customization.ini file:

fr.obeo.dsl.viewpoint.collab/PREF_ENABLE_DURABLE_LOCKING=true 

If the plugin_customization.ini file is not present, you need to create it and to reference it from the capella.ini file with the following arguments:

-pluginCustomization 
plugin_customization.ini

Note that the activation or deactivation of durable locking will have no effect on existing connection projects. The client has to remove the local connection project and connect to the remote project again.

The following sections describe the case where the durable locking is activated.

Use the View

Team for Capella provides the Durable Locks Management view to list existing locking sessions and delete them if needed.

When doing the first operation with this view, you will be asked to log on with the following dialog:

It is allowed to remove Locking Sessions only if the corresponding user is not connected.

Additional information on Locking Sessions

The Durable Locks Management view displays all locking sessions existing on the repository and the locks created by these locking sessions (if any).

A locking session is created whenever a team project is created on a client (Capella Connected Project). So if a user creates several team projects, he can have several locking sessions (as user1 in the screenshot above). Each locking session has a unique ID stored in the local .aird file.

Locks are owned by a locking session, so if the same user has two locking sessions (<=> 2 team projects) and he locks an element in the first locking session, this element will appear with a red lock in the second locking session.

Remove Locking Sessions

As explained above, using the Durable Locks Management view, locking sessions can be removed (this action is available to all users but should be done by the administrator only). A locking session can be removed only if nobody is connected using it.

All locks held by the locking session are removed with it.

If a user tries to connect to the repository using an existing connection project referencing a removed Locking Session ID, an error dialog is displayed (see below) and a new locking session is created. The ID of this new locking session will replace the old one in the local .aird file on the next save action.

User Management View

Team for Capella provides the User Management view to manage users on the Team for Capella Server.

The User Management view is useful only if the Team for Capella Server is configured to work with the "Identification" access control.


When doing the first operation with this view, you will be asked to log on with the following dialog:



Administration Tools

Repository maintenance application

The repository might have some inconsistent data and might need to be maintained.

The Repository maintenance application will look for the following inconsistencies:

This link might be broken if the representation has been deleted or if the internal index of the Representation Descriptor list is incorrect. That can cause some trouble for the different users connected to the project.

The application aims to delete orphan Representation Descriptors and stale references in the repository (both graphical and semantic models).

Once done, the application will close the server.

Note: This application requires that no user is connected to the repository.

Job configuration

There are two jobs available for maintenance in the Scheduler:

The application needs credentials to connect to the CDO server if the server has been started with authentication or user profile. Credentials can be provided using the -repositoryCredentials parameter. Here is a list of arguments that can be passed to the application or to the job (in maintenance.bat or the job configuration):

  • -repositoryCredentials: Login and password can be provided using a credentials file. To use this property file:
    • Add the following program argument: -repositoryCredentials <path_to_credentials_file>
    • Fill the specified file using the following format (only one line allowed): aLogin:aPassword
  • -hostname: defines the team server hostname (default: localhost).
  • -port: defines the team server port (default: 2036).
  • -repoName: defines the team server repository name (default: repoCapella).
  • -connectionType: the connection kind can be set to tcp or ssl (keep it in lower case) (default: tcp).
  • -consolePort: the port to access the OSGi console (default: 2036). This value has to be equal to the console eclipse parameter of the server.ini.
  • -diagnosticOnly: allowed values are true or false. If true, only the diagnostic is done and the database will be unchanged (default: false).
  • -launchBackup: allowed values are true or false. If true, the capella_db backup is done before any change is done on the database (default: true).
  • -archiveFolder: indicates where the backup zip will be stored.
  • -httpLogin: Backup and Maintenance are triggered by an HTTP request. This argument allows giving a login to identify with on the Jetty server.
  • -httpPassword: Backup and Maintenance are triggered by an HTTP request. This argument allows giving a password to authenticate with on the Jetty server.
  • -httpPort: Backup and Maintenance are triggered by an HTTP request. This argument allows giving a port to communicate with on the Jetty server.
  • -httpsConnection: Backup and Maintenance are triggered by an HTTP request. This boolean argument specifies whether the connection should be HTTPS or HTTP.
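As an illustration, a hedged example of a diagnostic-only run against a local, default server (the values are only examples; depending on your configuration, additional -http* arguments may be required):

maintenance.bat -hostname localhost -port 2036 -repoName repoCapella -repositoryCredentials credentials.properties -diagnosticOnly true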

REST Admin Server

An administration feature through web services is available for the Team for Capella Server: it brings user and repository management capabilities through a REST API and exposes an OpenAPI description.

Refer to the documentation available in the folder server/dynamic to discover how to install and enable it.

11. Access Control (User Profiles)

5.5. Access Control

  1. Access Control
    1. Available Access Control Modes
    2. Notices when configuring Access Control mode
      1. Switching between different access control modes
    3. User Profiles
      1. Configuration
      2. Connection to the User Profiles Model
      3. Default configuration for Team for Capella
        1. Representation Creation/Move Special Case
      4. User Creation
      5. Role Creation and Association with Users
      6. Resource Permission Pattern Examples
      7. Promote a User to Super User
      8. Import/Export User Profiles Model
      9. How to change user login/password
      10. Troubleshooting
        1. Administrator Password Forgotten
      11. Known issues

Available Access Control Modes

Several modes of access control can be used for each repository on the server:

Notices when configuring Access Control mode

Switching between different access control modes

When switching between different access control modes, the server must be restarted. Otherwise, the configuration update will not be taken into account.

User Profiles

Configuration

In Team for Capella, when using the User Profiles feature, user names and access rights are stored in the repository (i.e. in the database). Note that, when passwords are stored in the user profiles model (when LDAP is not used), they are not encrypted. That is why the user name management part of this feature must be considered as a simple identification feature.

If the server has been started with user profile, the Importer needs to have write access to the whole repository (including the user profiles model). See Resource permission pattern examples section.

If this recommendation is not followed, the Importer might not be able to correctly prepare the model (proxies and dangling references cleaning, ...). This may lead to a failed import.

To use the User Profiles feature in T4C, you first need to install the associated Team for Capella User Profiles UI feature from the Team for Capella update site.

After restarting your T4C client, go to Preferences > General > Capabilities to enable the User Profiles capability.

Connection to the User Profiles Model

You can connect to the user profiles model of a repository thanks to the dedicated wizard:

The accounts created by default in the user profiles model are those defined in the administrators file. Refer to Server Configuration/User Profiles Configuration.

To be able to change the user profiles model, the Administrator account should be used.

Here is the default user profiles model with its table opened:

By default, the userprofile resource is hidden. To make it appear under the userprofile project, the EMF Resources filter must be deactivated via the Customize View... dialog.

Default configuration for Team for Capella

When the server is configured with the User Profiles functionality, the following roles are automatically created:

These default roles are required:

Note that users created as administrators (in the administrators properties file presented in the previous part) have full access and do not need to be assigned to any role. Trying to assign roles to administrators will be prevented and a dialog will appear explaining that administrators already have full access.

Representation Creation/Move Special Case

If the user only has read access to the semantic element, they cannot create, clone or move a representation on it. If they try, a pop-up will be displayed indicating that the operation failed. More information can be found in Locks and Updates on Diagrams.

User Creation

To add a user:

Then complete the login information:

Role Creation and Association with Users

Use the dedicated tool to add a role:

A name can be given to the created role using the Properties view (attribute ID).

Once the new role is created, right-click it to add a resource permission.

Complete the text box with the path of the authorized resource:


Finally, associate users with the role in the Properties view of the role:



  • By default, users have read access to all resources.
  • The Administrator has write access to all resources; you do not have to assign write permissions for each project to this account.
  • You can give write or read access to a resource, but an empty permission is not supported.
  • A user can export a project to a repository only if they have write access to "/".


Elements that are inaccessible to a user are displayed with a gray padlock.

Resource Permission Pattern Examples

Since only resource permissions are currently available, a model has to be split into several fragments in order to define fine-grained permissions.

Here is an example project:

Write access to the whole repository (including the user profiles model): .* or /.*

Write access to the whole TestModel project: /TestModel/.*

Write access to OA fragments of TestModel: /TestModel/fragments/OA.* or /TestModel/.*OA.*

Write access to OA and SA fragments of TestModel: /TestModel/fragments/(OA|SA).* or /TestModel/.*(OA|SA).*

Write access to the semantic part of TestModel: /TestModel/.*(capella|melodyfragment)

Write access to the representation part of TestModel (diagrams and tables): /TestModel/.*(aird|airdfragment|srm)

Write access to TestModel but not its fragments: /TestModel/.*(aird|capella|srm) or /TestModel/[^/]*


When dealing with aird and airdfragment files, do not forget to give the same rights to srm files (the files used to store representation data when lazy loading is enabled; lazy loading is enabled by default).

Note that the project name in a resource permission pattern must be the name coming from the server repository. This is not necessarily the same name as the locally imported project (e.g. if TestModel.team is the name of the locally imported project, putting TestModel.team in the permission pattern will not work).
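
Resource permission patterns are regular expressions matched against resource paths in the repository. The following minimal Java sketch, assuming plain java.util.regex full-string matching (the exact matching semantics used by the server are not restated here) and illustrative resource paths, shows how a pattern from the examples above can be checked before being entered in the permission dialog.

  import java.util.regex.Pattern;

  public class PermissionPatternCheck {
      public static void main(String[] args) {
          // Pattern taken from the examples above:
          // write access to OA and SA fragments of TestModel.
          Pattern oaAndSaFragments = Pattern.compile("/TestModel/fragments/(OA|SA).*");

          // Illustrative paths (assumptions); real paths come from the server repository.
          String[] candidates = {
              "/TestModel/fragments/OA.airdfragment",
              "/TestModel/fragments/SA.capellafragment",
              "/TestModel/TestModel.aird"
          };

          for (String path : candidates) {
              boolean covered = oaAndSaFragments.matcher(path).matches();
              System.out.println(path + " -> " + (covered ? "covered" : "not covered"));
          }
      }
  }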

Promote a User to Super User

At startup, there is only one superuser: Administrator.

A basic user can be promoted to super user. To do that:

Import/Export User Profiles Model

You can import a user profiles model; this is the same mechanism as for a Capella project.

In Team for Capella, you need to enable the Sirius Collaborative Mode – Default UI > User Profiles capability to access the import/export User Profiles functionalities.

Then, you need to create a general project which will contain the imported User Profile model.

Import User Profiles model:

Enter a local URI starting with platform:/resource/

Example: platform:/resource/LocalUserProfilesProject/users.userprofile

To export, create a general project (or reuse the general project created earlier) and put a User Profile model into it, then right-click the User Profile model and choose Export:

How to reuse the user profiles model

It is recommended that you back up your user profiles model (refer to Server Administration/Team for Capella Scheduler/Import user profiles model).

  • You can reuse the user profiles model using the export wizard. You can export it to another repository, either on the same server or on another server.
  • In case of a database crash, start your server in the standard configuration (refer to Server Configuration/Not Authenticated Configuration) with a clean database. That configuration does not initialize the user profiles model. Then export the user profiles model to the CDO repository. You can now restart the server with User Profiles; as the user profiles model is found, it will not be reinitialized.
  • The user profiles model can be reused from one Team for Capella version to another. It does not need to be migrated.

How to change user login/password

User login/password can be modified via the Update User Information contextual menu. This contextual menu can be accessed by right-clicking the column corresponding to the user being modified. Note that this action only works when right-clicking one of the cells of the column; clicking elsewhere (e.g. on the column title) should be avoided.

Once the User Update dialog appears, you can modify either the user login or the password.

Notes:

Troubleshooting

Administrator Password Forgotten

If the administrator password has been forgotten, it will no longer be possible to change the user profiles model or to export a model to the server.

To give a new password to the Administrator account:

Known issues

Please note the following known issues:

Re-connection to a user profiles model raises an error

6. Developer Guide

00. Developer overview

6.1. Developer Overview

Team for Capella is a collaborative MBSE tool and methodology that relies on the Sirius framework. Both provide extension points and APIs that allow developers to customize and extend Team for Capella. Some of these developments are available as open-source add-ons. This documentation references some pointers to get started:

15. Developer Guidelines

6.2. Developer Guidelines

To avoid performance issues, some guidelines must be followed.

  1. Developer Guidelines
    1. Viewpoint Generation
    2. CDO Native Vs CDO Legacy mode
    3. Diagram extensions
      1. Mapping accesses
      2. Interpreter access

Viewpoint Generation

It is recommended to generate viewpoints with CDO Native support.

Please refer to the Capella Studio Documentation to see how to generate this part of the Viewpoint.

CDO Native Vs CDO Legacy mode

Viewpoints (as described in Capella Guide > User Manual > Overview > Capella Ecosystem) must be generated for CDO.

Nevertheless, if you decide to use the Legacy mode, you can enable it by setting the non-UI preference CDOSiriusPreferenceKeys.PREF_SUPPORT_LEGACY_MODE to true, even though it is neither a recommended nor a supported mode in Team for Capella. For more information, refer to Activate Legacy mode support.
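
Since this is a non-UI preference, it can be set programmatically from plug-in code. The sketch below is an assumption-laden illustration only: the preference qualifier ("org.eclipse.sirius") and the literal key are placeholders; in real code, read the actual key from the CDOSiriusPreferenceKeys constant and use the qualifier of the plug-in that declares it.

  import org.eclipse.core.runtime.preferences.IEclipsePreferences;
  import org.eclipse.core.runtime.preferences.InstanceScope;

  public class LegacyModeConfiguration {
      public static void enableLegacyModeSupport() {
          // Qualifier and key are assumptions for illustration only; use
          // CDOSiriusPreferenceKeys.PREF_SUPPORT_LEGACY_MODE and the declaring
          // plug-in's id in real code.
          IEclipsePreferences prefs = InstanceScope.INSTANCE.getNode("org.eclipse.sirius");
          prefs.putBoolean("PREF_SUPPORT_LEGACY_MODE", true);
      }
  }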

Diagram extensions

Mapping accesses

Repeated calls to the following methods must be avoided:

For remote models, these methods do not simply access a reference, as the target objects are not shared; it is therefore recommended to store the result in a local variable instead of repeating those calls, as in the sketch below.
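
The sketch below only illustrates the local-variable recommendation; DiagramElementLike and getMapping() are illustrative stand-ins for a real Sirius diagram element and the mapping-access methods mentioned above, not actual API.

  public class MappingAccessExample {

      // Illustrative stand-in for a Sirius diagram element.
      interface DiagramElementLike {
          Object getMapping(); // potentially expensive on remote (CDO) models
      }

      // Discouraged: the mapping accessor is called several times.
      static boolean isNodeMappingDiscouraged(DiagramElementLike element) {
          return element.getMapping() != null
                  && element.getMapping().toString().startsWith("Node");
      }

      // Recommended: resolve the mapping once and reuse the local variable.
      static boolean isNodeMappingRecommended(DiagramElementLike element) {
          Object mapping = element.getMapping();
          return mapping != null && mapping.toString().startsWith("Node");
      }
  }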

Interpreter access

Repeated calls to org.eclipse.sirius.tools.api.interpreter.InterpreterRegistry.getInterpreter(object) must be avoided. Note that the IInterpreter is the same for the whole ResourceSet and corresponding Sirius Session. If you already have this Session, you can use org.eclipse.sirius.business.api.session.Session.getInterpreter().
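
As a minimal sketch, assuming a Sirius Session is already available and that expressions are evaluated through the standard IInterpreter.evaluate(EObject, String) API, the interpreter can be retrieved once and reused; the helper name and expression handling below are illustrative.

  import org.eclipse.emf.ecore.EObject;
  import org.eclipse.sirius.business.api.session.Session;
  import org.eclipse.sirius.common.tools.api.interpreter.EvaluationException;
  import org.eclipse.sirius.common.tools.api.interpreter.IInterpreter;

  public class InterpreterAccessExample {

      // Retrieve the interpreter once from the Session and reuse it for every
      // expression evaluated on elements of that ResourceSet, instead of calling
      // InterpreterRegistry.getInterpreter(element) on each iteration.
      public static void evaluateOnAll(Session session, Iterable<EObject> elements, String expression) {
          IInterpreter interpreter = session.getInterpreter();
          for (EObject element : elements) {
              try {
                  Object result = interpreter.evaluate(element, expression);
                  System.out.println(element + " -> " + result);
              } catch (EvaluationException e) {
                  // Handle or log the failed evaluation as appropriate for the add-on.
              }
          }
      }
  }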

7. TEAM FOR CAPELLA Software User Agreement

OBEO S.A.S. is a French company, headquartered at 7 Boulevard Ampere, BP 20773, 44470 CARQUEFOU, FRANCE, and registered with the Business Number: 485 129 860 RCS Nantes.

THALES GLOBAL SERVICES S.A.S. is a French company, headquartered at 19-21 avenue Morane Saulnier, 78 140 Velizy Villacoublay, FRANCE, and registered with the Business Number 424 704 963 R.C.S. VERSAILLES.

The SOFTWARE is the TEAM FOR CAPELLA software.

The USER is the recipient of the SOFTWARE license (the licensee).

I. Intellectual property rights

  1. The company THALES GLOBAL SERVICES possesses intellectual property rights over the SOFTWARE and OBEO hereby confirms that it holds a concession for distribution and technical support & maintenance rights for said SOFTWARE.

  2. The user license for the SOFTWARE does not result in any transfer of the ownership of property rights, and entails solely the user rights stipulated herein.

  3. The USER receives a non-exclusive and non-transferable right to use the SOFTWARE in a form that runs on one machine, provided payment of the agreed price is received in accordance with the terms of the agreement.

  4. The USER undertakes not to directly or indirectly infringe the rights held by THALES GLOBAL SERVICES and OBEO. The USER undertakes to take all measures necessary relative to its authorised users to ensure the confidentiality and respect of property rights over said SOFTWARE. The USER undertakes in particular to ensure that its personnel do not keep any documentation or any copies or reproductions of the SOFTWARE.

II. Scope of rights granted under the license

  1. The SOFTWARE will be used solely for the USER's internal requirements and the requirements of users authorised by the USER, up to the maximum number of authorised users, and for a perpetual or limited duration of use as described and approved by both parties in the Technical and Financial Proposal issued by OBEO or in the USER purchase order. Third parties outside the USER's company are excluded from the license.

    The USER must ensure that only authorised users have access to the SOFTWARE. Any additional license requested by the USER will incur an additional charge based on the current schedule of charges.

  2. The USER is permitted to:
    1. Install and use the SOFTWARE on a computer or virtual machine, provided the user has a user license;
    2. Transfer the SOFTWARE from one computer to another;
  3. The USER will refrain from assigning, leasing, supplying, distributing or lending the SOFTWARE, and from granting sub-licenses or any other rights, without prior written agreement from OBEO.

    More generally, the USER undertakes not to disclose all or part of the SOFTWARE to any third party by electronic methods, over the internet, or by any other means.

  4. The USER undertakes not to make any amendment, modification, correction, arrangement, adaptation, transcription, combination or translation of all or part of the SOFTWARE without express, prior, written permission from OBEO, for which OBEO itself will first obtain express permission from THALES GLOBAL SERVICES.

  5. The USER is permitted to make and keep a single copy of the SOFTWARE for backup and archiving purposes and for use in recovery in the event of an incident.

    The USER is not permitted to reverse engineer, decompile or translate the SOFTWARE.

  6. The USER acquires no rights over the SOFTWARE source code, and OBEO alone reserves the right to make modifications, under supervision from THALES GLOBAL SERVICES, in order to correct any faults or development enhancements to the SOFTWARE.

    Only the owner of the intellectual property rights is in fact permitted to modify the SOFTWARE, change versions, amend the functionality, specifications, options and all other features, without providing notice to the USER and without the USER being able to derive any advantage whatsoever therefrom.

  7. In the event the USER wishes to obtain indispensable information for the implementation of interoperability between the SOFTWARE and some other software developed independently by the USER, for a use that is consistent with the SOFTWARE's intended purpose, the USER undertakes to consult OBEO before starting any work to this end, and OBEO can provide the USER with the information needed to provide this interoperability, which OBEO itself obtains from THALES GLOBAL SERVICES. The parties will negotiate a reasonable fee in exchange for this service.

    If THALES GLOBAL SERVICES is unable to provide the information required to provide interoperability of the SOFTWARE, OBEO will be entitled to authorise the USER to decompile or reproduce the SOFTWARE, strictly within the stipulations of Article L.122-6-1 IV of the French Intellectual Property Code.

  8. Pursuant to Article L.122-6-1 III of the French Intellectual Property Code, the USER is permitted to observe, study or test the functioning or security of the SOFTWARE, in order to determine the ideas and principles which underlie any element of the SOFTWARE if this is done while loading, displaying, running, transmitting or storing the SOFTWARE as the USER is permitted to do by virtue hereof.

    THALES GLOBAL SERVICES must be informed of any activity of this kind performed pursuant hereto.

  9. The USER will refrain from reproducing the documentation about this SOFTWARE without prior written permission from OBEO.

  10. Any unauthorised use, or use not compliant with these conditions of use of the SOFTWARE, will result in termination of the present user license as of right one month after the sending of formal notice that is not acted upon, and without prejudice to any legal proceedings seeking remedy for any subsequent loss or harm suffered by OBEO and the holder of the intellectual property rights.

  11. The USER acknowledges that the software may contain Open Source Software which may be subject to separate license terms. The relevant license terms are provided by OBEO to the USER either as part of the SOFTWARE or as part of the documentation.

III. Evaluation license

  1. OBEO may grant the USER an evaluation license solely for evaluation, testing and demonstration purposes, enabling the USER to evaluate, test and use the SOFTWARE for a set period with a maximum of 2 months, in order to confirm its suitability.

  2. The USER is then allowed to download or install an evaluation version of the SOFTWARE.

  3. The USER will consequently refrain from using the SOFTWARE for any purpose inconsistent with those for which the evaluation license is granted. For instance, the USER will not use or deploy the SOFTWARE in any production environment.

    The USER in particular may not decompile, copy or reproduce in any way whatsoever the SOFTWARE made available to the USER.

  4. At the end of the contractually-stipulated evaluation period, the USER undertakes either to acquire a full user license for the SOFTWARE from OBEO, or to destroy the SOFTWARE and stop using it.

  5. OBEO does not provide any support or maintenance service relative to evaluation licenses.

IV. Change in designated system

  1. The USER is responsible for the proper operation of the hardware used to run the SOFTWARE and for the compliance of its environment with OBEO's specifications.

  2. In the event of a permanent or temporary change in the system designated by the USER, the USER must have ensured beforehand that the future designated system is compatible with the SOFTWARE, and notify OBEO of the change. OBEO may refuse to ratify the change of system. If the USER fails to comply with such a refusal, OBEO is entitled to terminate this agreement.

  3. In all cases where the designated system is changed, the USER undertakes to immediately destroy all files comprising the copy of the SOFTWARE installed on the previous designated system.

V. Warranty and maintenance

  1. It is recommended that the USER take out a support & maintenance contract; its terms and renewal conditions are set forth in the Technical and Financial Proposal issued by OBEO.

  2. OBEO warrants that the SOFTWARE conforms to its documentation; however, the USER acknowledges and agrees that the SOFTWARE is not guaranteed to run either error-free or without interruption and that the USER has exclusive control of, and responsibility for, the usage of any inputted or generated outputted data (including its accuracy and adequacy). While the warranty or support & maintenance contract is active, OBEO is committed to remedying at its expense any blocker issue detected by the USER, under the condition that it can be reproduced on a non-modified software executed within the technical requirements set forth in the documentation. The USER acknowledges and commits to execute the process set forth in the Technical and Financial Proposal to create such requests.

  3. EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. The USER is solely responsible for determining the appropriateness of using the SOFTWARE and assumes all risks associated with its exercise of rights under this agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations. OBEO does not guarantee against the risks inherent in using the SOFTWARE including but not limited to service interruption, loss of connection, data loss, system crashes, poor performance or deterioration in performance. EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER OBEO AND/OR ITS THIRD PARTY SUPPLIERS SHALL HAVE ANY LIABILITY FOR ANY INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

  4. The USER is responsible for taking backups before any work is carried out on its hardware or software by OBEO.

  5. EXCEPT FOR BREACH OF CONFIDENTIALITY, INSURED CLAIMS, AND THE PARTIES' RESPECTIVE EXPRESS INDEMNITY OBLIGATIONS, THE TOTAL LIABILITY OF EITHER PARTY TO THE OTHER PARTY FOR ALL DAMAGES, LOSSES, AND CAUSES OF ACTION (WHETHER IN CONTRACT, TORT (INCLUDING NEGLIGENCE), OR OTHERWISE) SHALL NOT EXCEED 10% OF THE AGGREGATE FEES PAID HEREUNDER. THE LIMITATIONS PROVIDED IN THIS SECTION SHALL APPLY EVEN IF ANY OTHER REMEDIES FAIL OF THEIR ESSENTIAL PURPOSE.

VI. Indemnity

  1. OBEO will defend actions brought against the USER at its own expense, provided that the action is based upon a claim that the SOFTWARE infringes a United States copyright or patent, or violates any third-party proprietary right or trade secret. OBEO will pay all costs and damages finally awarded against the USER, provided that OBEO is given prompt written notice by the USER of such claim and is given all available information, reasonable assistance, and sole authority to defend and settle the claim.

  2. OBEO will not have any obligation under the "VI Indemnity" section and will have no liability whatsoever if the claim is (1) based upon the use of the SOFTWARE in combination with other software not provided by OBEO if such claim would not exist except for such combined use, (2) based upon a version of the SOFTWARE modified by the User or any other third party if the claim relates to the modified parts, (3) based upon the use of the SOFTWARE by the USER in a manner not authorized or not set forth in this agreement.

  3. OBEO, at its own choice and expense, will obtain the right for the USER to continue using the SOFTWARE, or will modify or replace the SOFTWARE so that it becomes non-infringing; or, if such remedies are not reasonably available, OBEO will accept the return of the SOFTWARE and this agreement will terminate.

  4. OBEO will have no liability for any expense incurred by the USER related to any action without the prior written consent of OBEO. OBEO will have no liability for infringement of the intellectual property rights of a third party except as expressly provided in this "VI Indemnity" section.

VII. Export

  1. The USER agrees that national or international foreign trade law and regulations may prevent OBEO from fulfilling its obligations under this agreement, including embargoes or any other sanctions.

  2. The USER and OBEO will strictly comply with applicable export and import laws and regulations, including those of the United States, and will reasonably cooperate with the other by providing all information to the other, as needed for compliance.

  3. Except when otherwise required by law or regulation, the USER shall not export, re-export or transfer, whether directly or indirectly, the SOFTWARE and material delivered pursuant to this agreement without first (1) at the USER's sole expense, complying with the applicable export laws and the import laws of the country in which the SOFTWARE is to be used, (2) obtaining the express written consent of OBEO and (3) obtaining a validated export license from the applicable authority where required.

  4. This SOFTWARE contains publicly available encryption source code classified ECCN 5D002 and uses encryption technologies, notably SSL/TLS, to protect customer data in transit. The country in which you are currently located may have restrictions on the import, possession, use, and/or re-export to another country, of encryption software. BEFORE using any encryption software, please check the country's laws, regulations and policies concerning the import, possession, or use, and re-export of encryption software, to see if this is permitted.

  5. The provisions of this "VII Export" section will survive the expiration or termination of this agreement for any reason.

VIII. US Government contracts

  1. This SOFTWARE is a commercial product that has been developed exclusively at private expense. If this SOFTWARE is acquired directly or indirectly on behalf of a unit or agency of the United States Government under the terms of (1) a United States Department of Defence contract, then pursuant to DOD FAR Supplement 227.7202-3(a), the United States Government shall only have the rights set forth in this license agreement; or (2) a civilian agency contract, then use, reproduction, or disclosure is subject to the restrictions set forth in FAR clause 27.405-3, entitled Commercial computer software, and any restrictions in the agency's FAR supplement and any successor regulations thereto, and the restrictions set forth in this license agreement.

IX. General

  1. This agreement shall come into force on the date of the order of the SOFTWARE license by the USER and will be in effect until the expiration of the license, unless terminated as set forth in this agreement. Upon termination of the agreement or expiration of the license, the USER shall immediately destroy or return all copies of the terminated or expired SOFTWARE.

  2. During the term of this agreement and for one year after its termination, the USER shall maintain accurate records on the use of the SOFTWARE. Unless strictly prohibited by Government policy, OBEO shall have the right, once per year, at its own expense and under reasonable conditions of time and place on the USER's premises, to audit and copy these records and to verify the USER's compliance with the terms of this agreement.

  3. The USER acknowledges having read this agreement, understands it and agrees to be bound by its terms and conditions. The USER further agrees that this agreement is the complete and exclusive statement of the agreement between the parties regarding the SOFTWARE, which supersedes all proposals or prior agreements, oral or written, and all other communications between the parties relating to the subject matter of this agreement.

  4. If any term or provision of this agreement is determined to be invalid or unenforceable for any reason, it shall be adjusted rather than voided, if possible, to achieve the intent of the parties to the extent possible. In any event, all other terms and provisions shall be deemed valid and enforceable to the maximum extent possible.

  5. Neither party shall be liable for any loss, damage, or penalty arising from delay due to causes beyond its reasonable control.

  6. Notice to be given or submitted by the USER to OBEO shall be in writing and directed to OBEO headquarters.

  7. This agreement may be modified only by a written instrument duly executed by an authorized representative of OBEO and the USER. OBEO and the USER agree that any terms and conditions of any purchase order or other instrument issued by the USER in connection with this agreement that are in addition to or inconsistent with the terms and conditions of this agreement shall be of no force or effect.

  8. This agreement may not be assigned or transferred by the USER, in whole or in part, either voluntarily or by operation of law, without the prior written consent of OBEO.

  9. The failure of a party to enforce any provision of this agreement shall not constitute a waiver of such provision or the right of such party to enforce such provision or any other provision.

  10. This agreement will be governed by and construed in accordance with the substantive laws of FRANCE, without giving effect to any choice-of-law rules that may require the application of the laws of another jurisdiction.