Monday, July 25, 2016

Diagram Centric Model Versioning

By Stefan Schefberger and Matthias Winkelhofer

Introduction

Model-driven engineering is becoming increasingly important in software development. As with any engineering project, you need to collaborate as a team, sharing source code as well as models through a version control system. EMF Compare and EGit provide a sufficient mechanism to support model versioning. However, the model comparison viewer always strictly separates changes applied to the model from changes applied to the diagram. Many users, however, do not consider these two worlds, the model and the diagram, as separate artifacts, but rather as one unified concept. They prefer to interact mainly with the diagrams, since the graphical representations are closer to their way of thinking. For such diagram-centric scenarios, the separation enforced by EMF Compare is counter-intuitive. Therefore, in this post we present a new diagram-centric plug-in that combines model and diagram changes in a common view.


Diagram Example:


Before

After

Requirements

Before we could work on an approach that satisfies diagram-centric users, we had to think about their requirements and demands. After some extensive discussions we gathered the following main points:
  • no strict separation between model and diagram differences
  • keep control over the management of model changes
  • full functionality of the versioning mechanism
  • an implementation compatible with the already existing groups


Approach

To tell EMF Compare how we want to change the way it displays the tree of changes, we first created a new Eclipse plug-in. Within its plugin.xml configuration file we had to register an additional org.eclipse.emf.compare.rcp.ui.group extension with the help of the so-called Extension Point Selection Wizard. After doing that, we were able to add our new EMF Compare group extension and specify the corresponding class and a proper label.


Of course, this was only the basic prerequisite; we still needed to implement the actual logic of our desired solution. For that we had to link the already described model and diagram representations. As a concrete use case for this project, we chose the modelling framework Papyrus and its implementation of UML. Looking at the corresponding XML files in EMF, you can see a link that points from the diagram side to the model side. This link exists because one logical element may have several graphical representations.


EMF Compare already provides a hierarchical data structure that reflects this property. The root level for one versioning process is called Comparison and contains several Matches, which are created for corresponding elements residing on the different versioning sides (Left, Origin, Right). Since everything in this context is built up as a tree, each Match can contain a list of sub-matches (containments), and, because we are talking about versioning, each level may carry multiple differences that shall later be displayed within the diagram-centric group.


To work with this given data structure, we decided to reuse some logical components of BasicDifferenceGroupImpl.java. We then achieved our goal by using a HashMap to store the diagram Matches that correspond to the desired model elements.
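To give an idea of how this lookup can work, here is a minimal sketch (our class and method names are illustrative; the EMF Compare and GMF notation APIs used are the standard ones): it walks the Match tree and, for every match whose element is a GMF notation View, records the semantic model element that the view references.

// Sketch: index the Matches of notation views by the semantic model element
// they reference, so diagram differences can be shown under the model change.
import java.util.HashMap;
import java.util.Map;

import org.eclipse.emf.compare.Comparison;
import org.eclipse.emf.compare.Match;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.gmf.runtime.notation.View;

public class DiagramMatchIndex {

    private final Map<EObject, Match> diagramMatchesByModelElement = new HashMap<>();

    public void index(Comparison comparison) {
        for (Match root : comparison.getMatches()) {
            indexRecursively(root);
        }
    }

    private void indexRecursively(Match match) {
        EObject side = match.getLeft() != null ? match.getLeft() : match.getRight();
        // A GMF notation View links the diagram side to the semantic model element.
        if (side instanceof View) {
            EObject semanticElement = ((View) side).getElement();
            if (semanticElement != null) {
                diagramMatchesByModelElement.put(semanticElement, match);
            }
        }
        for (Match sub : match.getSubmatches()) {
            indexRecursively(sub);
        }
    }

    public Match diagramMatchFor(EObject modelElement) {
        return diagramMatchesByModelElement.get(modelElement);
    }
}

With such an index in place, the group provider can attach the differences of a model element and of its notation views to the same node of the tree.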


Solution

For better understanding, we provide two different representations of the comparison tree. In the first image you can see the default representation without any additional grouping. As already mentioned, it strictly separates the tree into a Diagram and a Model section.
In contrast, the second image shows our final implementation, which unifies these two worlds and provides the user with a diagram-centric versioning tree.

Default representation without grouping

Diagram-Centric representation


Outlook

Now, after the hard work, we are able to show the changes in the desired way. But there are several areas with improvement potential. First of all, this group was initially designed to work with UML and can be understood as a prototype. To make this group applicable for end users, many more modelling languages have to be taken into account and tested against a large number of special cases. Another point of improvement is the compatibility of our implementation with the pre-implemented filtering functions of EMF Compare. Finally, our examinations so far have focused on the logical correctness of our solution.

As a closing statement, we would like to note once more that the Diagram-Centric group implementation is a first step towards a specialized, model-oriented perspective for representing the changes of graphical and logical models.

Wednesday, July 20, 2016

Interactive Model Animator for xMOF Models

By Matthias Hoellthaler and Tobias Ortmayr

Introduction

Model-driven Development (MDD) has gained significant popularity over the last couple of years. Because of the higher abstraction of domain-specific languages, it is possible to minimize redundant activities and improve the understandability of complex problems. This leads to a software development process that is less code-centric and more model-centric. Models are no longer only used to document design decisions, but have become the main development artifact and source for code generation. Therefore, adequate techniques for ensuring the quality of models and their correctness in terms of expected behavior are necessary.

Existing ecosystems like the Eclipse Modelling Framework (EMF) provide profound tooling support for well-established concepts but are lagging behind current trends and developments like executable Domain-Specific Modelling Languages (xDSML). The research field of xDSMLs is, in comparison, a relatively young one. Unfortunately, this results in a lack of well-established standards. With xMOF, the Moliz project provides a promising approach for specifying xDSMLs based on the OMG standards MOF and fUML.

The aim of this project was to build a prototype of a model animator for xMOF models to improve the tool support for xMOF. This animator extends the debugging functionality of the Moliz model execution engine by interpreting debugging events to retrieve information about the current execution state of the model and using this information to visualize the state in the graphical representation (in this case activity diagrams). The animator supports node-wise stepping of xMOF activities and animates the activity diagrams to give the language designer a visual feedback about the state of the ongoing execution. To facilitate the integration into the Moliz project the model animator is implemented as an Eclipse plug-in. We implemented the animation in Graphiti and Sirius to demonstrate the differences between the two approaches.
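The following is a hypothetical sketch of this event-driven animation mechanism; the listener, event and highlighter types are illustrative stand-ins rather than the actual Moliz API, and the highlighter simply represents whatever the Graphiti- or Sirius-based editor offers for changing a node's appearance.

// Hypothetical sketch -- the interfaces below are illustrative stand-ins, not the
// actual Moliz API, to show how debug events can drive the diagram animation.
interface ExecutionEvent { Object getNode(); }
interface ActivityNodeEntryEvent extends ExecutionEvent {}
interface ActivityNodeExitEvent extends ExecutionEvent {}
interface ExecutionEventListener { void notify(ExecutionEvent event); }
interface DiagramHighlighter {
    void markActive(Object activityNode);   // e.g. change the node's color in the editor
    void markVisited(Object activityNode);  // keep a "visited" highlight after execution
}

public class ActivityNodeAnimator implements ExecutionEventListener {

    private final DiagramHighlighter highlighter; // wraps the Graphiti or Sirius editor

    public ActivityNodeAnimator(DiagramHighlighter highlighter) {
        this.highlighter = highlighter;
    }

    @Override
    public void notify(ExecutionEvent event) {
        if (event instanceof ActivityNodeEntryEvent) {
            // A node started executing: mark its diagram element as active.
            highlighter.markActive(event.getNode());
        } else if (event instanceof ActivityNodeExitEvent) {
            // The node finished: keep it highlighted as visited for better traceability.
            highlighter.markVisited(event.getNode());
        }
    }
}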

Animation with Graphiti

In Figure 1 we see the Graphiti-based animator during the execution of a Petri net. As can be seen in the bottom right, the nodes of the activity diagram are animated. Even after the end of an activity they remain highlighted for better traceability. They are only reset when the activity diagram is executed again.

Figure 1: Animation with Graphiti

Animation with Sirius 

In Figure 2 we can see the same model. This time it is animated with Sirius. Both animators provide comparable functionality, however, the Sirius-based animator provides a more sophisticated animation of activity diagrams.

Figure 2: Animation with Sirius

Outlook 

The project should be extended in the future to further improve the tooling support. The following features are the most promising ones:

  • Animation of simultaneously executing activities should be supported. In particular, if two or more callers execute the same activity, the current state of the diagram is overwritten by the newest caller.
  • Interactive stack traces are a useful addition to give the possibility for navigating between activity diagrams.
  • The Sirius editor should be capable of representing all xMOF metaclasses. At the moment only the Activity metaclass and associated elements are represented.
  • A better mapping algorithm should be implemented to guarantee a correct mapping between model elements and diagram elements. At the moment the name property of an element needs to be unique. A violation of this constraint can cause unexpected behavior.

Implementation

The source code of the project can be found on Github.

Tuesday, July 19, 2016

Modernizing Software Languages through the Application of Model-Driven Engineering

From XML Schema to Xtext

By Agnes Fröschl, Bernd Landauer, and Bernhard Müller.


Introduction

Since the invention of the Extensible Markup Language (XML) [Harold], it has gained great popularity. The language is nowadays used as a configuration and exchange format for a vast number of applications. Some examples are the GPS Exchange Format (GPX), Scalable Vector Graphics (SVG), or production data configuration files for Computer Numeric Control (CNC) machines. To make sure a provided XML file is valid, the XML Schema Definition (XSD) [Gao] was introduced. However, XML and XSD are both optimized for machine processing rather than human readability [Badros].

To bring together language engineers (e.g., the person who designed the instruction reader of a CNC machine) and domain experts (e.g., the person who operates the CNC machine), the XMLText framework [Neubauer] has been introduced. It provides a transformation from XSD to an Xtext-based Domain-Specific Language (DSL) [Eysholdt, Tolvanen] with a more comprehensible and easily human-readable concrete syntax.

In this work, we describe various XSD features that are not yet supported, or only partially supported, by the XMLText transformation, as well as our efforts to extend it [3]. The goal is to escape the fixed concrete syntax and provide an easy-to-use and customizable syntax for non-language engineers. Another key requirement is backward compatibility, so that systems which rely on XML files as an input source do not need to be adapted to the new syntax.

 

Extension of the XMLText framework

Although some features are already implemented in XMLText, XSD provides an extensive number of advanced features for which support still has to be created. Our work mainly focused on extending the Ecore and Xtext grammar generation.

Data types were our first area of contribution. Previously, only stubs were created instead of proper Xtext terminals. We implemented valid terminals for various data types. With this extension in place, only minor effort was necessary to add support for various length restrictions on strings.

A more advanced feature was the implementation of mixed content, i.e., support for the mixed="true" XSD attribute. This construct allows element content and text content to be mixed in the created syntax, in other words, arbitrary text may appear between the tags of child elements.

Finally, we implemented ID and IDREF to ensure unique values for certain elements to which others can refer. The related features KEY and KEYREF have been examined, but their support has not been implemented because they rely on complex XPath rules that are beyond the scope of our project.

 

Concrete Syntax DSL

To make the concrete syntax DSL even more readable and customizable, we explored Xtext's possibilities for adapting the concrete syntax and styling its appearance in the editor.

The figure below shows an example of what a customized concrete syntax for a company hierarchy could look like. Other implemented extensions are visible as well: a date data type that yields an error if the date is not valid (e.g. a month greater than 12), auto-completion for IDREF values referencing available ID values, and an arbitrary text content element between the named tags.

customized concrete syntax DSL
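As an illustration of the date check mentioned above, the following sketch shows one way such a validation can be hooked into an Xtext-based editor via a value converter; this is a hypothetical example and not necessarily how XMLText implements it.

// Hypothetical sketch (not the actual XMLText implementation): a value converter
// that rejects date literals whose month part is out of range while parsing.
import org.eclipse.xtext.conversion.IValueConverter;
import org.eclipse.xtext.conversion.ValueConverterException;
import org.eclipse.xtext.nodemodel.INode;

public class DateValueConverter implements IValueConverter<String> {

    @Override
    public String toValue(String string, INode node) throws ValueConverterException {
        String[] parts = string.split("-"); // expects the XSD date format yyyy-MM-dd
        if (parts.length != 3) {
            throw new ValueConverterException("Invalid date: " + string, node, null);
        }
        int month = Integer.parseInt(parts[1]);
        if (month < 1 || month > 12) {
            throw new ValueConverterException("Invalid month: " + month, node, null);
        }
        return string;
    }

    @Override
    public String toString(String value) {
        return value;
    }
}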

 

Future Work

The XMLText framework targets a quite complex problem, not least because of the feature richness of XSD and, respectively, XML. There are several topics for further extension. Future work may include the following topics:
  • Implementation of further XSD features closing existing gaps,
  • an XPath to OCL [Warmer] converter to fully support for example KEY and KEYREF XSD features,
  • a fully automated generation of the customized concrete syntax DSL, which includes a configuration wizard for syntax adaptation,
  • and a CSS interpreter for concrete syntax DSL styling.

 

Resources

[Badros] Badros, G.J.: JavaML: A Markup Language for Java Source Code. Computer Networks 33(1), 159-177 (2000).
[Eysholdt] Eysholdt, M., Behrens, H.: Xtext: Implement your Language Faster than the Quick and Dirty Way. In: Companion Proc. of OOPSLA, pp. 307-309. ACM (2010).
[Gao] Gao, S., et al.: W3C XML Schema Definition Language (XSD) 1.1 Part 1: Structures. W3C Candidate Recommendation (2009).
[Harold] Harold, E.R., Means, W.S., Udemadu, K.: XML in a Nutshell, vol. 8. O'Reilly, Sebastopol, CA (2004).
[Neubauer] Neubauer, P., Bergmayr, A., Mayerhofer, T., Troya, J., Wimmer, M.: XMLText: From XML Schema to Xtext. In: Proceedings of the International Conference on Software Language Engineering, pp. 71-76. ACM, New York, NY, USA (2015).
[Tolvanen] Tolvanen, J., Kelly, S.: Defining domain-specific modeling languages to automate product derivation: Collected experiences. In: Proc. of SPLC, pp. 198-209 (2005).
[Warmer] Warmer, J.B., Kleppe, A.G.: The Object Constraint Language: Precise Modeling with UML. Addison-Wesley Object Technology Series (1998).

XMLText framework website: http://xmltext.big.tuwien.ac.at/
XMLText framework source code: https://github.com/patrickneubauer/XMLText
XMLText framework fork including extensions: https://github.com/syrenio/XMLText

Monday, July 18, 2016

Context Modeling and Analysis of Cyber-Physical Production Systems

By Christian Proinger

Introduction

Cyber-physical systems (CPS) are compositions of computational entities that are able to sense parameters of the physical world and its processes. They provide and use services available on the Internet. Cyber-physical production systems (CPPS) specialize this concept to the domain of production across all levels, from processes through machines up to production and logistics.

The problem we are addressing is the modeling and analysis of context-aware systems from requirement specification and early design to system deployment or commissioning.

In [1], a multidimensional context model is represented through a set of composable probabilistic state machines called First-Order Managers (FOMs). Dependencies among FOMs are represented through cause-effect relationships called remote firings. The composition of multiple dependent FOMs results in a Higher-Order Manager (HOM). Both FOMs and HOMs correspond to Continuous Time Markov Chains (CTMCs).
In order to support the approach presented in [1], we implemented an Eclipse-based graphical editor for modeling (i) FOMs, (ii) their composition, and (iii) HOMs. For analyzing the CTMC model, we implemented a code generation feature that generates the input for the probabilistic symbolic model checker PRISM. The following figure shows an overview of the process we facilitate with our implemented tools.

Case Study: Factory Operator 

As a running example we consider a factory with three rooms (named A, B and C). Further, consider a machine (machine A) located in room A that produces a certain kind of product. After turning out a certain number of items, it enters a phase of self-maintenance in which it checks whether its tools need replacement or re-calibration. Completing this process after every item would be too time-consuming, so the time span between self-maintenance phases is adjusted to be optimal with respect to the price of the raw goods and the probable amount of spoilage that would be produced if self-maintenance became necessary during a production phase.
The scenario additionally involves a human operator who has been instructed to cycle through three specific rooms of the factory throughout her work schedule. The operator is expected to be at work for a certain number of hours on work days. Her task is to maintain machines that encounter a problem during a self-maintenance phase that they cannot resolve by themselves.
One question about such a scenario that we would like to be able to answer is: "How likely is it that, while machine A is in its self-maintenance phase, the operator is in exactly the same room?" Another question might concern the probability of the human operator at least being at the factory and not at home.

Context Modeling

To be able to reason about the factory operator case study, we apply the framework introduced in [1], which allows modeling different types of actions, context-awareness, and their dependencies with a stochastic extension of UML state machines, called Managers. The model, which is annotated with performance characteristics, is used to analyze properties of the system. First-Order Managers (so called because they are concerned with just a single context attribute) can be combined into FOM-Composition models by introducing remote firing dependencies between them. The following diagram shows the Eclipse EMF metamodel for the FOM-Composition.
The FOM-Composition model, as well as the FOM models, can be created with the Eclipse plug-ins we implemented. The following image illustrates what the FOM-Composition model for the factory operator case study looks like in our tool.
From this model, a context menu action, implemented as an Eclipse plug-in, lets the user create a Higher-Order Manager (HOM) model. The states of the HOM are obtained as the Cartesian product of the states of the source FOMs. The transitions in the HOM result from the transitions of the states that make up a combination-state and from the remote firing relationships of these transitions. The HOM for the case study, as created by our implementation, is shown in the following picture.
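The state construction itself is a plain Cartesian product. The following minimal sketch illustrates the idea (class and method names are ours for illustration, not the actual plug-in code):

// Sketch: the HOM combination-states are the Cartesian product of the state
// sets of the composed FOMs.
import java.util.ArrayList;
import java.util.List;

public class HomBuilder {

    public static List<List<String>> combinationStates(List<List<String>> fomStateSets) {
        List<List<String>> result = new ArrayList<>();
        result.add(new ArrayList<>()); // start with a single empty combination
        for (List<String> states : fomStateSets) {
            List<List<String>> extended = new ArrayList<>();
            for (List<String> partial : result) {
                for (String state : states) {
                    List<String> combined = new ArrayList<>(partial);
                    combined.add(state);
                    extended.add(combined);
                }
            }
            result = extended;
        }
        return result;
    }
}

For example, combining an operator FOM with states such as {room A, room B, room C, at home} and a machine FOM with states {producing, self-maintenance} would yield eight combination-states.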

Context Analysis

The HOM enables us to reason about the combined context and its evolution. By applying the stochastic process of Continuous Time Markov Chains (CTMC), we obtain transient- and steady-state probabilities for the combined states. To calculate these probabilities, we chose to use PRISM [2]. PRISM takes as input a text file that conforms to its DSL for describing CTMCs. We implemented an Eclipse plug-in that generates this DSL file from a HOM model. The following image shows the PRISM GUI and contains the DSL translation of our HOM.
For our case study, PRISM provides us with the values that enable us to answer the question stated initially: "How likely is it that, while machine A is in its self-maintenance phase, the operator is in exactly the same room?"
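To illustrate the export step, here is a minimal sketch of how a HOM can be written out as a PRISM CTMC module; the class names are ours for illustration, and a real export would typically also emit labels for the states needed by the queries.

// Sketch: the HOM is written as a PRISM CTMC module whose commands carry the
// transition rates.
import java.util.List;

public class PrismExporter {

    /** One HOM transition: source state index, target state index, and rate. */
    public static class HomTransition {
        final int source;
        final int target;
        final double rate;

        public HomTransition(int source, int target, double rate) {
            this.source = source;
            this.target = target;
            this.rate = rate;
        }
    }

    public String export(int stateCount, List<HomTransition> transitions) {
        StringBuilder prism = new StringBuilder("ctmc\n\nmodule HOM\n");
        // One integer variable encodes the current combination-state of the HOM.
        prism.append("  s : [0..").append(stateCount - 1).append("] init 0;\n");
        for (HomTransition t : transitions) {
            // A PRISM CTMC command has the form: [] guard -> rate : update;
            prism.append("  [] s=").append(t.source)
                 .append(" -> ").append(t.rate)
                 .append(" : (s'=").append(t.target).append(");\n");
        }
        prism.append("endmodule\n");
        return prism.toString();
    }
}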




References

[1] Berardinelli, L., Cortellessa, V., Di Marco, A.: Performance Modeling and Analysis of Context-Aware Mobile Software Systems. In: Fundamental Approaches to Software Engineering (FASE 2010), held as part of ETAPS 2010, Paphos, Cyprus, March 2010, pp. 353-367. Springer, Berlin, Heidelberg (2010).
[2] PRISM - Probabilistic Symbolic Model Checker. http://www.prismmodelchecker.org/.

Friday, July 15, 2016

Automated FMU Generation from UML Models

By Manuel Geier and Bernhard Sadransky


Introduction

The simulation of cyber-physical systems plays an increasingly important role in the development process of such systems. It enables engineers to get a better understanding of the system in the early phases of development. These complex systems are composed of different subsystems, and each subsystem model is designed with its own specialized tool set. Because of this heterogeneity, the coordination and integration of these subsystem models becomes a challenge.

The Functional Mockup Interface (FMI) specification was developed by an industry consortium as a tool independent interface standard for integration and simulation of dynamic systems models. The models that conform to this standard are called Functional Mockup Units (FMU).

In this work we provide a method for automated FMU generation from UML models, making it possible to use model driven engineering techniques in the design and simulation of complex cyber-physical systems.

Functional Mockup Interface

The Functional Mockup Interface (FMI) specification is a standardized interface to be used in computer simulations for the creation of complex cyber-physical systems. The idea behind it is that if a real product is composed of many interconnected parts, it should be possible to create a virtual product which is itself assembled by combining a set of models. For example, a car can be seen as a combination of many different subsystems, like the engine, the gearbox or the thermal system. These subsystems can be modeled as Functional Mockup Units (FMU) which conform to the FMI standard.

The Functional Mockup Unit (FMU) represents a (runnable) model of a (sub)system and can be seen as the implementation of a Functional Mockup Interface (FMI). It is distributed as a single ZIP archive with a ".fmu" file extension, containing:
  • FMI model description file in XML format. It contains static information about the FMU instance. Most importantly the FMU variables and their attributes such as name, unit, default initial value etc. are stored in this file. A simulation tool importing the FMU will parse the model description file and initialize its environment accordingly.
  • FMI application programming interface provided as a set of standardized C functions. C is used because of its portability and because it can be utilized in all embedded control systems. The C API can be provided either in source code and/or in binary form for one or several target machines, for example Windows dynamic link libraries (".dll") or Linux shared libraries (".so").
  • Additional FMU data (tables, maps) in FMU-specific file formats.

The inclusion of the FMI model description file and the FMI API is mandatory according to the FMI standard.

Tools

Enterprise Architect is a visual modeling and design tool supporting various industry standards including UML. It is extensible via plugins written in C# or Visual Basic. The UML models from which we generate our FMU are defined with Enterprise Architect.

Embedded Engineer is a plugin for Enterprise Architect that features automated C/C++ code generation from UML models.

We further used the FMU SDK from QTronic for creating the FMI API. It also comes with a simple solver which we used to test our solution.

Running Example

Our basic example for testing our solution is called Inc. It is a simple FMU with an internal counter that is initialized at the beginning and incremented by a specified step value each time the FMU is triggered, until a final to value is reached or exceeded.

State Machine

The state machine contains just an initial pseudo state, which initializes the state machine, and a state called Step. The Step state has two transitions. One is a transition to itself, taken as long as the counter is still lower than the to value; in this case the inc() operation is called and we are again in the Step state. If the counter is equal to or greater than the to value, the state machine reaches the final state and no further processing is done.


Class diagram

The class diagram consists of two parts. The left part with the Inc class is project specific. It holds three attributes: counter, step and to. All attributes are of type int. The initial value of counter is 0, of step it is 5, and of to it is 50. The FSM classes on the right are the classes required by Embedded Engineer to generate the state machine code.
Some specific implementation code also exists in various places. In the state machine you can see that the transitions carry guards. These guards are actual code that will be inserted into the generated state machine code:

me->counter < me->to

and

me->counter >= me->to

The property me represents a pointer to an instance of the Inc class.

And finally the implementation of the inc() operation is also provided:

me->counter = me->counter + me->step;





Manual Code Generation

First we manually created our reference Inc FMU; the following steps were taken:
  1. UML models were defined in Enterprise Architect (class diagram and state machine diagram)
  2. C code was generated from the previously created models (with the Embedded Engineer plugin)
  3. The FMI model description XML file and the FMI API were created by hand
  4. The (compiled) FMI API was packaged together with the model description file into an FMU file. This was done with a batch script.


Automatic Code Generation

Now we want to automate the creation of the FMI model description file and the FMI API. For this purpose we wrote our own Enterprise Architect plugin. To be able to generate semantically correct FMI model description and FMI API artifacts, we attached additional information to the UML models. This was achieved through the definition of various UML stereotypes for UML class attributes and methods. Since the FMI defines its own data types, we also had to map the data types used in the UML models to the corresponding FMI data types. With these challenges addressed, we were able to implement our FMU generation plugin.
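For illustration, a minimal sketch of such a type mapping is shown below (in Java for brevity, although the actual plugin is written in C#; the listed UML type names are examples):

// Sketch: mapping UML/C primitive type names used in the models to the FMI 2.0
// variable types (Real, Integer, Boolean, String); the covered names are examples.
import java.util.Map;

public final class FmiTypeMapping {

    private static final Map<String, String> UML_TO_FMI = Map.of(
            "int", "Integer",
            "double", "Real",
            "float", "Real",
            "bool", "Boolean",
            "boolean", "Boolean",
            "String", "String");

    public static String toFmiType(String umlType) {
        String fmiType = UML_TO_FMI.get(umlType);
        if (fmiType == null) {
            throw new IllegalArgumentException("Unsupported UML type: " + umlType);
        }
        return fmiType;
    }
}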



Future Work

Our work comprises a fundamental prototype that is only a start and could be improved in various ways. The following list describes some issues that could be tackled.
  • One limitation of the current implementation is that we are not able to provide initial values for the FMU. Consequently, to test different configurations of our FMU, we always have to set the default values in the UML models and regenerate the FMU for the simulator again. Hence, future work includes creating new stereotypes for the initialization of FMU settings/variables and testing these bindings.
  • We used the FMU SDK simulator for this project. Other (more powerful) simulators should be tested too. Furthermore, co-simulation with other FMUs needs to be tested.
  • In our project we restricted ourselves to discrete functions by using the event system of the FMU. To continue the journey, we also have to consider continuous functions.
  • More complex examples should be considered to test the capabilities of the automatically generated code. By creating more complex examples, the practical limitations of modeling an FMU with a class diagram and a finite state machine need to be discovered. Questions like "What can be implemented?" and "What can not be implemented?" need to be answered.
  • The automated code generation process could be reduced to a one-click functionality to directly generate the ".fmu" file without any additional compilation and packaging step necessary.

Acknowledgement

This work has been supported by LieberLieber in the context of the CDL-Flex project: http://www.sysml4industry.org/.

Screencast

Automated Scoping, Validation and Quickfixes for Xtext-based Languages

By Christian Beikov, Raphael Löffler, Oliver Reiter and Christopher Schwarz.

Xtext is a language development framework mostly used to create domain-specific languages and to develop feature-rich editors for them. Model validation with error highlighting, scoping and quickfixes are also supported, but have to be implemented manually. Our project generates these features automatically based on OCL constraints, which are defined in the underlying model of the language. The goal is to reduce the amount of work needed to create textual editors for DSLs by following a model-based approach. Because the whole work is based on Xtext, a big part of it was exploring Xtext to find possible ways of integrating the described automated approach. Xtext originated as part of openArchitectureWare (oAW), a framework supporting model-driven development. The possibility not only to create a new DSL, but also to automatically obtain editors for this DSL, makes Xtext very interesting. Another big part our work is based on is the Object Constraint Language (OCL). OCL is a textual formal specification language and part of the Unified Modeling Language. It is a declarative language and was introduced by the Object Management Group (OMG) to make it possible to add constraints to object-oriented models that cannot be expressed with the standard diagrammatic notations.

Validation

For each class that has OCL constraints defined, a Java validator is generated, which also tries to identify the erroneous feature of the object. These validators are then used in a custom implementation of AbstractDeclarativeValidator so that they can run inside an Xtext editor. The improvement over the default validation is the localization of the error, which in turn makes it possible to provide quickfixes. The pictures below show our implementation (Fig. 2) in comparison to the default validator (Fig. 1), which highlights the complete, invalid object.

Fig. 1. Error highlighting with default validators
Fig. 2. Error highlighting with OCL validators
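The following minimal sketch shows what such a validator can look like, assuming a simple constraint in the spirit of "the name attribute must not be empty" (the constraint, the issue code and the reflective feature lookup are illustrative, not the generated code itself); the key point is that the error is reported on the offending feature rather than on the whole object.

// Sketch of a validator reporting errors on the erroneous feature only.
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EStructuralFeature;
import org.eclipse.xtext.validation.AbstractDeclarativeValidator;
import org.eclipse.xtext.validation.Check;

public class GeneratedOclValidator extends AbstractDeclarativeValidator {

    public static final String EMPTY_NAME = "emptyName";

    @Check
    public void checkNameNotEmpty(EObject element) {
        // Look up the "name" feature reflectively (generated code would use the
        // concrete feature literal of the language's EPackage instead).
        EStructuralFeature nameFeature = element.eClass().getEStructuralFeature("name");
        if (nameFeature == null) {
            return;
        }
        Object value = element.eGet(nameFeature);
        if (value == null || value.toString().isEmpty()) {
            // Reporting the error on the feature makes the editor underline just the
            // name instead of the whole object, which also enables quickfixes.
            error("Name must not be empty", nameFeature, EMPTY_NAME);
        }
    }
}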

Scoping

A scope provider basically returns the set of objects that can be used in a specific context. Normally, scoping rules have to be implemented manually, as the default provider only generates a scope with all objects of a valid type. Our project filters these candidates by the OCL constraints of the context. As an example, Fig. 4 shows the filtered scope in comparison to the unfiltered one (Fig. 3). One constraint here is that a page element, e.g. a text field, can only refer to features contained in the entity handled by the form; in this example, this is Event. Another constraint in this example is that a date-selection field can only refer to attributes of type Date, so the other elements are filtered out.

Fig. 3. Scope of the default validator
Fig. 4. Scope of the OCL based validator
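A minimal sketch of this filtering idea is shown below; the constraint check is only a placeholder (a simple user-data lookup), whereas the real provider would evaluate the OCL-derived checks at this point.

// Sketch: narrow the default scope down to candidates satisfying the context's constraints.
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EReference;
import org.eclipse.xtext.resource.IEObjectDescription;
import org.eclipse.xtext.scoping.IScope;
import org.eclipse.xtext.scoping.impl.AbstractDeclarativeScopeProvider;
import org.eclipse.xtext.scoping.impl.FilteringScope;

public class OclFilteringScopeProvider extends AbstractDeclarativeScopeProvider {

    @Override
    public IScope getScope(EObject context, EReference reference) {
        // Start from the default scope, which contains all objects of a valid type.
        IScope defaultScope = super.getScope(context, reference);
        // Keep only the candidates that fulfill the constraints of the context.
        return new FilteringScope(defaultScope,
                (IEObjectDescription candidate) -> satisfiesConstraints(context, candidate));
    }

    private boolean satisfiesConstraints(EObject context, IEObjectDescription candidate) {
        // Illustrative placeholder: e.g. a date-selection field only accepts
        // attributes whose type is Date.
        return "Date".equals(candidate.getUserData("type"));
    }
}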

Quickfixes

Quickfixes are offered by the editor if an error is detected. For cross-references, a quickfix is basically just the combination of validation and scoping, but for primitive data types, quickfixes have to be provided separately. Our implementation detects the type of the erroneous feature and generates a fix at runtime in the validator. There is currently one limitation: quickfixes are provided only for boolean values and cross-references, but this functionality can easily be extended in the future.

Fig. 5. Quickfix of the OCL based provider
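For the boolean case, such a runtime-generated fix can be as simple as the following sketch (the issue code and the convention that the validator puts the feature name into the issue data are illustrative assumptions, not the project's actual code):

// Sketch: a quickfix that flips the value of an erroneous boolean feature.
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EStructuralFeature;
import org.eclipse.xtext.ui.editor.model.edit.IModificationContext;
import org.eclipse.xtext.ui.editor.quickfix.DefaultQuickfixProvider;
import org.eclipse.xtext.ui.editor.quickfix.Fix;
import org.eclipse.xtext.ui.editor.quickfix.IssueResolutionAcceptor;
import org.eclipse.xtext.validation.Issue;

public class OclQuickfixProvider extends DefaultQuickfixProvider {

    @Fix("invalidBooleanValue") // illustrative issue code reported by the validator
    public void fixBooleanValue(Issue issue, IssueResolutionAcceptor acceptor) {
        acceptor.accept(issue, "Toggle value", "Flips the boolean value of the feature.", null,
                (EObject element, IModificationContext context) -> {
                    // Assumes the validator stored the feature name in the issue data.
                    EStructuralFeature feature =
                            element.eClass().getEStructuralFeature(issue.getData()[0]);
                    element.eSet(feature, !((Boolean) element.eGet(feature)));
                });
    }
}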

Future work

Of course, our implementation has some restrictions and limits, which can be improved in the future. In particular, the generated Java code can be optimized. As mentioned, our generated quickfixes currently support only boolean values; the code generation can be extended to also cover other types. Another way to extend our solution is to support all OCL operations; currently we only implemented the most important ones to provide the basic functionality. Both extensions can easily be introduced by extending the implementation accordingly.


The source code of the project can be found on Github (https://github.com/Advanced-Model-Engineering-SS16/xtext-ocl-extensions).

Friday, July 01, 2016

Implementing a GUI for defining ontology/model mappings

By Stefan Beyer, Raimund Hirz, and Christopher Tunkl

Motivation

In the domains of Software Engineering and Model Engineering, models can be used to describe and document complex systems from many different viewpoints with varying levels of detail. If multiple models of the same system are created, the models might contain inconsistencies and contradictions. In order to detect and eliminate such inconsistencies, a way to define constraints on how two or more specific models should relate to each other - on how one model should be mapped to another - needs to be found. While there exist countless approaches to generate such mappings in an automated or semi-automated way using various heuristics, our task was to develop a graphical user interface that lets domain experts manually specify and modify mappings. In our case, those models are represented as ontologies, the cornerstone of the Semantic Web. Most standard technologies used in this area make an open-world assumption, which does not allow such constraints to be expressed. Because of that, we use SPARQL, the standard query language for Semantic Web data, to build queries that act as our mapping constraints. We define those constraints on separate models which in turn describe the ontologies.

Goal

Our task was to create an extensible, Java-based GUI to define mappings between instances of a given meta-model. In particular, we had to fulfill the following requirements:
  • The implementation must be developed in Java. Whenever possible, the JFace UI toolkit should be used.
  • A graphical editor should enable the user to create mappings by using drag & drop or similar “direct” interaction methods. The editor should allow mapping between the elements of two distinct models.
  • It should be possible to open a dialog to define further constraints for a specific mapping.
  • The user should be able to save the entire mapping as well as all constraints in a machine-readable file format.

In addition to these mandatory requirements, four optional requirements were defined:
  • Instead of a separate application, the mapping editor should be implemented as an Eclipse plug-in so it can be used within other Eclipse-based applications.
  • It should be possible to load two or more models into the editor.
  • The nature of a mapping should be visible directly in the editor without having to open the mapping dialog.
  • In addition to saving the mapping, it should also be possible to load it again.

Terms and Concepts

Amongst others, the meta-model provided to us contains the following key elements:
  • PackageDeclaration: a collection of TypeDeclarations (similar to packages in Java). Mappings are defined between the elements of two or more PackageDeclarations.
  • TypeDeclaration: the equivalent of a class. It contains AttrDeclarations and AssocDeclarations.
  • AttrDeclaration: an attribute, consisting of a name and a type.
  • AssocDeclaration: a reference to another TypeDeclaration.

AttrDeclarations and AssocDeclarations are collectively known as StructuralFeatures.



We distinguish between two kinds of mappings in our editor:

SFMapping

A mapping between two or more StructuralFeatures, although our implementation is restricted to AttrDeclarations. It consists of a set of arbitrary value transformers and a mapping function, which defines the specific constraint on the values. Examples of value transformers are string operations such as concatenation or mathematical operations such as addition. Mapping functions define the required relations between two or more values, such as equality or order relations.
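The two building blocks can be pictured roughly as follows; the interfaces and the example implementations below are illustrative, not the actual API of our editor.

// Illustrative sketch: value transformers compute derived values, mapping
// functions state the required relation between the values of both sides.
import java.util.List;

interface ValueTransformer {
    String apply(List<String> arguments);
}

interface MappingFunction {
    boolean holds(String leftValue, String rightValue);
}

// String concatenation as a value transformer ...
class Concatenation implements ValueTransformer {
    @Override
    public String apply(List<String> arguments) {
        return String.join("", arguments);
    }
}

// ... and equality as a mapping function.
class Equality implements MappingFunction {
    @Override
    public boolean holds(String leftValue, String rightValue) {
        return leftValue.equals(rightValue);
    }
}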

TDMapping

A mapping between two TypeDeclarations, consisting of a mapping function. For the sake of usability, TDMappings define constraints on the set of attributes as a whole. They can be seen as shortcuts or convenience mappings. For instance, TDAllAttrEqual requires all attributes sharing the same name to have equal values. That way, the user does not have to define the equality relation between each pair of same-named attributes separately. It is easily possible to add further value transformers and mapping functions.

The Editor

Our editor consists of a main canvas, which displays the TypeDeclarations from the loaded models similar to a UML class diagram. On the right edge of the canvas, the toolbox contains tools to define and extend mappings.



To change the type of a TDMapping, it is possible to choose one of the predefined types using the context menu of the mapping.



In order to further refine SFMappings, the user can double-click the mapping rectangle to open the EditSFMappingDialog. The EditSFMappingDialog consists of the following views:
  • the toolbox on the left side of the window, containing the lists of available value transformers and mapping functions,
  • the attribute list at the top, listing all the attributes and associations belonging to the TypeDeclarations of the current mapping,
  • the mapping specification in the center, showing the mapping function and all used value transformers,
  • and the SPARQL view at the bottom, displaying the specified mapping as a SPARQL query.



In order to add value transformers to the mapping or to change the mapping function, the user can double-click the corresponding entry in the toolbox. When adding a value transformer, it is possible to set its arguments by selecting attributes or other value transformers using the respective combo box. For constant arguments, a text field is displayed instead. Whenever any changes are performed, the SPARQL query at the bottom is updated in real time.
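To give a feeling for the generated queries, the following sketch shows how a mapping of the form "the concatenation of two attributes on the left side must equal one attribute on the right side" could be rendered as a SPARQL query returning the violating instance pairs; the property URIs and the violation-query reading are illustrative assumptions, not the exact output of our editor.

// Sketch: rendering "concat(left.a, left.b) must equal right.c" as a SPARQL
// query whose result rows are the instance pairs violating the mapping.
public class SparqlRenderer {

    public String render(String leftPropA, String leftPropB, String rightPropC) {
        return "SELECT ?left ?right WHERE {\n"
             + "  ?left  <" + leftPropA + "> ?v1 .\n"
             + "  ?left  <" + leftPropB + "> ?v2 .\n"
             + "  ?right <" + rightPropC + "> ?v3 .\n"
             + "  FILTER (CONCAT(?v1, ?v2) != ?v3)\n"
             + "}";
    }
}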

Future Work

Our editor is designed in a way that it can be modified and extended quite easily. There are currently plans to enhance it and integrate it into an enterprise modeling environment that uses the described meta-model. We would also like to encourage others to create more sophisticated user interfaces for mapping tools of all kinds, in order to help end users better understand and leverage the capabilities of those tools.