Tuesday, October 01, 2013

How to apply EMF profiles to models

In a previous post, we demonstrated how to create EMF profiles based on the example of the famous library metamodel. Once we have created a profile, we want to apply it to corresponding models. In this post, we demonstrate step by step how this is done using EMF Profiles.

What we have so far

As shown in our previous post, we created a profile, named MyExtendedLibrary, for the library metamodel. In particular, we wanted to be able to store whether a book is an ebook and, if it is an ebook, in which format it is available (such as ePub or PDF). Additionally, we wanted to be able to annotate books with tags. Therefore, we created a profile containing two stereotypes, EBook and TaggedBook, both applicable to instances of Book. The resulting profile is depicted below. Once we have created this profile correctly (i.e., it is valid and contains no errors), it is automatically registered in the local profile registry, ready to be applied to library models.

Applying the profile

Assume we have a small library model, that is, an instance of the library metamodel, consisting of two books, named "My Book" and "Another Book". To apply our profile to this model, we first open the model in the Sample Reflective Ecore Editor, or in any other EMF-based modeling editor if available (e.g., a GMF-based editor).

Next, we open the Profile Application View using Window > Show View > Other... > EMF Profiles > EMF Profile Applications. Once the profile application view is visible, again select the library model to be annotated in the modeling editor. The profile application view will now automatically detect that we selected a modeling editor supported by EMF Profiles and activate the buttons for creating new and loading existing profile applications (the button with the "window plus" icon and the button with the "green plus" icon) in the profile application view.

To create a new profile application, click the button with the "window plus" icon (it should say Apply Profile in the tool tip). This will open up a wizard in which we first have to select the location where we want to save the profile application information. Note that the additional information that is saved in profile applications is stored in a separate file. The extension of profile application files is pa.xmi. Select a location and a file name for the profile application file and click Next >.

In the next step, we choose the profile that we want to apply to the model. Therefore, the wizard lists all profiles that are available in the local profile registry. In this list, we may now select the MyExtendedLibrary profile and hit Finish.

The wizard now creates the empty profile application file for us and shows the profile application in the profile application view.

Now we can start applying stereotypes to our small library model. To do so, we right-click the model element to which we want to apply a stereotype and select Apply Stereotype in the popup menu. Note that this command is only activated in the popup menu if the loaded profile contains a stereotype that can be applied to the selected model element. In our example, this should be the case for all books in our library model, since the MyExtendedLibrary profile contains two stereotypes that extend the class Book. So let us right-click the book My Book and click Apply Stereotype. This brings up a window in which we can select the stereotype to be applied; let us choose the stereotype <<EBook>> and click Ok.

The stereotype <<EBook>> is now applied to My Book. Thus, a new item indicating this stereotype application is added to the profile application view. Note that the view only shows the stereotype applications that are applied to the model element currently selected in the modeling editor (e.g., My Book). If you prefer to see all stereotypes currently applied in the loaded profile application file, irrespective of which model element is selected, unselect the Pin View with Selected Element button in the profile application view.

Since we not only want to apply stereotypes, but also add additional information in terms of tagged values (attributes and references introduced by the stereotypes), we can now select the stereotype application in the profile application view and inspect the standard Eclipse Properties view. This properties view now shows all attributes and references of the applied stereotype; in our case this is for instance the format attribute, for which we can select the value PDF or EPUB, as specified in the MyExtendedLibrary profile. Note that we can apply both stereotypes in this profile only once for each book according to the specified multiplicity of the extension relationship (cf. MyExtendedLibrary profile).

Once we have finished applying stereotypes, we can save the profile application by clicking the Save button in the profile application view and closing the modeling editor showing our library model. Of course, we can still load this profile application (or several at the same time) later. This is done by opening the library model and clicking the Load Profile Application button (the one with the "green plus" icon). This brings up a wizard in which we select the profile application file to be loaded. Modifying the loaded profile application, in terms of adding or deleting stereotype applications or modifying their attributes and references, works the same way as with new profile applications.

The profile application file

One major advantage of EMF Profiles is that profile application files are common EMF models and, as a result, they can be processed with any EMF-based technology, such as model transformation tools, model-to-code generation frameworks, etc. So let's take a brief look at what these models look like.

Basically, profile application models consist of one container object of type Profile Application and a list of instances of the stereotypes that we applied to the respective model. The model element to which the stereotype has been applied is referred to through the reference appliedTo and the additional values (e.g., the format attribute of the EBook stereotype) are common EMF attributes or references. Loading profile applications as normal EMF models enables us to apply any processing techniques imaginable for common EMF models.
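To make this concrete, a profile application file for our example might look roughly like the following sketch. All element names, attribute names, and URIs here are illustrative assumptions derived from the description above, not the exact serialization produced by EMF Profiles:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only: names and URIs are assumptions. -->
<profileapplication:ProfileApplication xmi:version="2.0"
    xmlns:xmi="http://www.omg.org/XMI"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:profileapplication="http://example.org/emfprofile/application"
    xmlns:myextendedlibrary="http://example.org/myextendedlibrary">
  <!-- One stereotype instance per application; appliedTo points into the base model. -->
  <stereotypeApplications xsi:type="myextendedlibrary:EBook" format="PDF">
    <appliedTo href="MyLibrary.xmi#//@books.0"/>
  </stereotypeApplications>
</profileapplication:ProfileApplication>
```

Since this is an ordinary XMI resource, any EMF-based tool can load it and navigate from the stereotype instances to the annotated model elements via appliedTo.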

We hope you enjoyed reading this blog post and invite you to try EMF profiles for your needs. If you have any questions or comments, don't hesitate to contact us or post a comment below.

Friday, July 05, 2013

Developing exercises for “Model Engineering”

— by +Robert Bachmann, Martin Kerschberger, +Manuel Mertl, and +Thomas Wruss.

This blog post describes a project implemented in the context of the course “Advanced Model Engineering” in summer semester 2013. The results of the project consist of two parts. On the one hand we created a new exercise for students in the domain of model-driven engineering, and on the other hand we performed a practical evaluation of ATL against an imperative Xtend-based approach.

Model engineering exercise

The general structure of our exercise is based on the structure of the existing exercises that were used in previous instances of the course “Model Engineering”. Our exercise consists of three parts:
  1. Creation of two meta-models
    1. One meta-model for a PIM
    2. One meta-model for a PSM
  2. Model-to-model transformation from PIM to PSM using ATL
  3. Model-to-code transformation from PSM to executable code using Xtend

According to our intended workflow, students solve each part in groups and receive a reference implementation of each part after the respective submission deadline. Therefore, we will avoid publishing figures and code.

Our primary requirement for part 1 was that the PIM and PSM should not be too abstract, but already familiar to most students. We therefore chose a meta-model for questionnaires and surveys for part 1.1 (PIM). Since we can assume that all students are familiar with web forms, we chose a meta-model for simple web forms for part 1.2 (PSM).

Part 2 uses ATL to transform a questionnaire model into a simple web form model.

Part 3 uses Xtend to transform a simple web form model into an interactive HTML page. Since we do not want to assume that all students are proficient with CSS and JavaScript, we provide a re-usable CSS file and a small JavaScript library. This allows students to avoid boilerplate code and to focus on code generation.
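To give a flavour of part 3, the core of such a generator can be sketched in plain Java. The Field class and the generated markup are purely illustrative assumptions; the actual exercise uses Xtend template expressions over the web form meta-model:

```java
// Illustrative sketch of the model-to-code step: turning one (hypothetical)
// web-form model element into an HTML input field.
public class FormCodegenDemo {
    // A hypothetical, minimal stand-in for a PSM form field element.
    record Field(String label, String name) {}

    static String generate(Field f) {
        // Each form field becomes a label plus an input element.
        return "<label>" + f.label + "</label><input name=\"" + f.name + "\"/>";
    }

    public static void main(String[] args) {
        System.out.println(generate(new Field("Age", "age")));
    }
}
```

The provided CSS file and JavaScript library would then style and drive markup like this, so students only generate the structure.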

The following screenshot shows an example of a simple generated questionnaire web page:


ATL versus an imperative Xtend-based approach

As mentioned above, we implemented a model-to-model transformation using ATL. The purpose of this transformation is to transform a questionnaire model into a simple web form model.

Additionally, we evaluated an imperative Xtend-based approach for model-to-model transformations. Our aim was to find out whether learning a DSL, in this case ATL, is really necessary or whether a general-purpose programming language such as Xtend is sufficient. We chose Xtend since it is a modern general-purpose programming language, and since Xtend is Java-based, we could leverage EMF's features.

Xtend transformation infrastructure

Our Xtend-based transformation process consists of three steps:

  1. Model loading: The source model is loaded from disk.
  2. Model transformation: The target model is derived by applying a transformation method to the source model.
  3. Model persistence: The target model is saved to disk.

In order to isolate the infrastructure code (steps 1 and 3) from the actual model transformation, we use the following approach: A transformation is modeled as a class which extends the base class AbstractModelToModelTransformation (see source code). The base class is responsible for loading the source model and persisting the target model. The transformation method has to be provided by a concrete subclass.

The following listing shows the complete implementation of an identity transformation:
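The original listing is embedded as an image; the same pattern can be sketched in plain Java (illustrative names; in this sketch, model loading and persistence are reduced to a pass-through so it stays self-contained, whereas the real base class works with EMF resources):

```java
// Simplified plain-Java sketch of the transformation base class described
// above (illustrative names; the real implementation is written in Xtend
// and uses EMF resources for loading and persisting models).
abstract class AbstractModelToModelTransformation<S, T> {
    // Template method: loading (step 1) and persisting (step 3) belong here;
    // in this sketch they are reduced to a pass-through for self-containment.
    public T run(S sourceModel) {
        return transform(sourceModel); // step 2, provided by the subclass
    }

    protected abstract T transform(S source);
}

// An identity transformation: the target model is the source model itself.
class IdentityTransformation<M> extends AbstractModelToModelTransformation<M, M> {
    @Override
    protected M transform(M source) {
        return source;
    }
}

public class TransformationDemo {
    public static void main(String[] args) {
        System.out.println(new IdentityTransformation<String>().run("questionnaire-model"));
    }
}
```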

Code examples

The following listing shows two ATL rules interspersed with their Xtend counterparts:

The first ATL rule, on lines 2–8, is quite similar to its Xtend counterpart. The derivation of the values value and text on lines 5–6 closely matches its Xtend counterpart on lines 12–13. The second rule, on lines 17–22, is also similar to its Xtend counterpart. One noticeable exception is the derivation of the collection resultMessages on line 20 and the corresponding Xtend counterpart on line 26. Line 26 is more verbose since more explicit steps need to be performed to fit into Java's type model.

Observations

  • ATL transformations tend to be shorter than Xtend transformations (in our case 160 versus 250 lines of code). Xtend (and Java) transformations in turn have the benefit that they use an imperative approach, which is probably easier to comprehend for developers who are already familiar with Java.
  • Whereas the result of an ATL transformation is deterministic, the execution order of ATL rules is not. By contrast, Xtend transformations follow the standard execution order semantics of a Java program. ATL's approach would allow the ATL engine to execute multiple rules in parallel, whereas our Xtend transformation approach is strictly intended for non-parallel (single-threaded) execution. Although a multi-threaded Xtend transformation approach would be technically possible, we doubt that such a transformation could be implemented without losing readability in comparison to a similar ATL transformation.
  • Both ATL and Xtend provide functional features, however, in very different forms. Most parts of an ATL transformation are functional in the sense that they are free of side effects. Imperative statements — statements with side effects — must be performed in designated areas of the code, such as do-blocks. Therefore, statements with side effects are easily recognizable by their location in the ATL source code. By contrast, Xtend is an imperative object-oriented language that additionally provides some functional features. Therefore, there is no way to recognize statements with side effects by looking for a certain keyword alone.

SMTL2Java: A simple model transformation language compiled to Java

Metamodels and models can be found everywhere in information technology. They are not always recognised as such, but they are there; just think of a JPEG file. There is a description of what such a file has to look like, what information it has to contain, and how that information is stored: that is the metamodel. And then there are JPEG files, e.g. from a digital camera or a scanner: those are the models. Model-based approaches like Model-Driven Engineering (MDE) or Model-Driven Software Engineering (MDSE) heavily depend on the ability to transform one model into another model. In the previous example, that would be a JPEG file transformed into another format such as PNG.

The starting point of our work is the Atlas Transformation Language (ATL), which is easy to read and understand because it is written in plain text. But as soon as it is compiled into its intermediate bytecode, which is interpreted by the model transformation virtual machine, the code is no longer as readable or understandable, and thus hard to debug. Our approach takes whole transformations and compiles them to Java; the advantage is obvious: the result is more readable, more understandable and, most importantly, easily debuggable.

We focused our work on a functionally reduced subset of ATL and named it after what it is: a Simple Model Transformation Language (SMTL). In a first step we created an abstract syntax, our metamodel; see Figure 1.

Figure 1: The SMTL metamodel

In the second step we built a concrete textual syntax with Xtext, following our metamodel.

Concrete textual syntax


SMTL supports the transformation of several source elements to several target elements; the rules work like ATL's standard matched rules. However, OCL constraints are not supported. We have also added some different keywords to SMTL. Listing 1.1 gives an example of an SMTL transformation; Listing 1.2 shows an example of an ATL transformation.

Listing 1.1: SMTL transformation

module A2B;
input A path/metamodel
output B path/metamodel

rule A2B {
  from
    a1 : A ! Model
  to
    b1 : B ! Model (
      pb name <- "name",
      nb name2 <- a1.name
    ),
    b2 : B ! Model (
      oeb b <- b1,
      rb listB <- a1.listA
    )
}


Listing 1.2: ATL transformation

--@path A=/at.ac.tuwien.big.ame13.atl2java.atl/model/A.ecore
--@path B=/at.ac.tuwien.big.ame13.atl2java.atl/model/B.ecore
module A2B;
create OUT : B from IN : A;

rule Model2Model {
  from
    ma : A ! Model
  to
    mb : B ! Model (
      b <- ma.a
    )
}


Comparing the SMTL grammar in Listing 1.1 with the ATL in Listing 1.2, we notice differences in the header part: SMTL does not have a path definition; instead, the metamodels are provided via the input and output keywords.
The main difference, though, is the explicit bindings specified with the keywords pb, nb, oeb, and rb in SMTL, which do not explicitly exist in ATL. These bindings are as follows:

  • Primitive binding "pb": Represents the assignment of a primitive value, such as an integer or string, to a feature of a target element.
  • Navigation binding "nb": Binds a feature of a target element with a defined source element’s feature value.
  • Output pattern element binding "oeb": Binds a feature of a target element with another target element of the same rule, but that target element has to be declared beforehand.
  • Resolve binding "rb": Binds a feature of a target element to a list of target elements or a single target element. The target elements first have to be resolved using the information in the transient links, since only the feature value of a source element is given for this kind of binding. From the grammar's point of view, resolve bindings are nearly identical to navigation bindings and therefore extend them.

In the next step, a model-to-code transformation is realised with the help of Xtend, with which SMTL models can be transformed into executable Java code.

Model to code transformation


The main purpose of this part is to perform a model-to-model transformation that is executed in Java. To this end, we first implement a model-to-code transformation with the help of Xtend, which translates an SMTL model into executable Java code; this code then performs the actual model-to-model transformation. The Xtend implementation can thus be seen as a compiler from SMTL models to the corresponding Java classes.

For the implementation of the compiler, two points were important to consider. First, in order to perform a concrete model-to-model transformation in Java, the transformation has to be divided into a creation phase and an initialization phase. In the creation phase, only the target elements are created from the matched source elements; the bindings of the target elements' feature values are performed entirely in the initialization phase. Second, in order to have enough information to fulfill the bindings, the trace information from source elements to target elements has to be saved in transient links during the creation phase. Since some kinds of bindings (e.g. resolve bindings) need to know which target elements were created from which source elements, the information stored in the transient links provides exactly that.
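The two phases can be sketched in plain Java as follows. The Source and Target classes and the trace map are illustrative assumptions; the generated Java code works on EMF objects and dedicated transient-link objects:

```java
import java.util.*;

// Minimal sketch of the two-phase transformation described above
// (illustrative Source/Target classes; names are assumptions).
public class TwoPhaseDemo {
    static class Source {
        final String name;
        final List<Source> refs = new ArrayList<>();
        Source(String name) { this.name = name; }
    }
    static class Target {
        String name;
        final List<Target> refs = new ArrayList<>();
    }

    static Map<Source, Target> transform(List<Source> model) {
        // Creation phase: create one target per source and record the
        // transient link (trace) from source to target.
        Map<Source, Target> trace = new LinkedHashMap<>();
        for (Source s : model) {
            trace.put(s, new Target());
        }
        // Initialization phase: fill in the bindings; resolve bindings
        // look up already-created targets through the trace.
        for (Map.Entry<Source, Target> e : trace.entrySet()) {
            e.getValue().name = e.getKey().name;         // navigation binding
            for (Source r : e.getKey().refs) {
                e.getValue().refs.add(trace.get(r));     // resolve binding
            }
        }
        return trace;
    }

    public static void main(String[] args) {
        Source a = new Source("a"), b = new Source("b");
        a.refs.add(b);
        Map<Source, Target> trace = transform(List.of(a, b));
        System.out.println(trace.get(a).refs.get(0).name); // prints "b"
    }
}
```

Splitting creation from initialization is what makes forward references between target elements unproblematic: by the time any binding is evaluated, every target element already exists in the trace.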

The following example shows an exemplary model-to-model transformation.

Example


In Figure 2 we transform book elements into ebook elements, which is done by the rule Book2Ebook. A book element contains two pieces of information, title and author, and these are bound to the features name and info of the ebook element. The binding types in this example are navigation bindings, which bind the features name <- title and info <- author. The other rule, Model2Model, defines the transformation of one model element into another and contains a resolve binding. The resolve binding here ensures that the elements transformed by the rule Book2Ebook exist in the output model.

In this example, the input model contains three book elements; after the rules are executed, the output model contains the three corresponding ebook elements.

Figure 2: Exemplary model to model transformation "Book2Ebook"


Tuesday, July 02, 2013

FTL - A Plain Transformation Language for Ecore Models

Model transformations are a very important part of model-driven software development approaches. However, popular transformation languages like QVT or ATL are not intuitive in their application and provide neither a strong formal metamodel nor a flexible transformation generator.

This is where Functional Transformation Language, or FTL for short, comes into play. It’s a functional model-to-model transformation language for Ecore models and is described itself in terms of an Ecore model and Eclipse Modeling Framework (EMF) tools like Xtext or Xtend.

Overview


Even though FTL is capable of transforming Ecore models, it depends neither on the Ecore API nor on Java or any other programming language. Instead, an FTL compiler translates FTL to intermediate code and may perform some target-language-specific improvements, e.g. tail call optimization. Currently, there is a reference implementation that compiles FTL to Java.
Let's have a look at the basic concepts of FTL, such as the functional notion and the generation of Java code that performs the actual model-to-model transformation.

Functional Idiom

The best way to express a model transformation precisely is to describe it as a function that transforms model elements of the source metamodel into model elements of the target metamodel. This approach allows the specification of transformations with little syntactic overhead. Thus, FTL was strongly influenced by modern programming languages such as Scala, Xtend and Groovy. However, it provides some advantages over them in terms of model transformations. In particular, you only have to deal with the "what" and not with the "how".

Type System

FTL is a strongly typed language and natively supports only the most common data types. The compiler interprets the structural semantics of an expression and attempts to infer its type. Since FTL uses a compiler to create the concrete code, every type supported by the target language can be used; e.g. you could use an instance of java.util.List within the model transformation even though FTL is not aware of Java. In this case, the type has to be specified explicitly. For further information about mixing FTL with the target language, please refer to the section Interoperability.

Syntax


Every FTL transformation has to specify the root model element named "context" the transformation is performed on:

context farm.Farm;

Afterwards, there are a number of functions that take model elements as input parameters and perform transformations on them. The following listing takes a Farm as an input parameter and milks every Animal that is a Cow, collecting the resulting Buckets. In the end, Nothing is returned, which is a special value of type Nothing. It can be used whenever returning a value is not applicable for some reason.

def milk(f: Farm): Nothing {
  for (a <- f.animals) {
    if (isCow(a)) {
      f.buckets += milk(a);
    }
  }
}

Note that isCow() is neither an FTL function nor an operation of a model element. Instead, it is an invocation of a function available in the code of the language FTL is compiled to. In other words, it could be a Java method as follows:

public boolean isCow(Animal a) {
  return a instanceof Cow;
}

Functions can optionally be guarded by a condition enclosed in square brackets ("[]"). If the condition does not hold, i.e. it evaluates to false, the function is not invoked.

def milk(cow: Cow) [cow.isFemale() && !cow.isMother()]: Bucket {
  if (cow.bio) {
    return milkByHand(cow);
  } else {
    return milkingMachine(cow);
  }
}

The following constructor function takes a Cow as an input parameter and creates a new Bucket instance (note the arrow => next to the return type) filled with the content of the Cow.

def milkByHand(cow: Cow): => Bucket {
  content = cow.content;
  cow.content = null;
}
// milkingMachine() has been omitted because we love our cows

Code Generation


As you can see in the previous listings, you do not have to deal with Ecore directly. The FTL compiler translates FTL to ordinary code of the target language, including the native function calls like isCow() from the example above. Besides that, the reference implementation for Java adds code for executing the transformation independently or as part of another application. You can see the corresponding Java source code below. Note that the code for running the transformation is omitted for the sake of simplicity.

package farm;

public class FarmTransformation {
  public static void main(String... args) {
    /* omitted the code for starting the transformation */
  }

  public void milk(Farm f) {
    for (Animal a : f.getAnimals()) {
      if (isCow(a)) {
        f.getBuckets().add(milk(a));
      }
    }
  }

  public Bucket milk(Cow cow) {
    if (!(cow.isFemale() && !cow.isMother())) {
      return null;
    }
    if (cow.isBio()) {
      return milkByHand(cow);
    } else {
      return milkingMachine(cow);
    }
  }

  public Bucket milkByHand(Cow cow) {
    Bucket bucket = FarmFactory.eINSTANCE.createBucket();
    bucket.setContent(cow.getContent());
    cow.setContent(null);
    return bucket;
  }
}

As you can see, the compiler performed a number of modifications on the FTL code. For example, all variables now have explicit type information, the guard condition of the milk() function became an if statement in the corresponding method and the constructor function that returned something of type => Bucket became a method that uses FarmFactory, which is provided by the metamodel, to construct Bucket instances.

Interoperability


As already mentioned, besides being a functional language supporting imperative constructs, FTL also supports data types of the target language as well as native function calls. This gives the opportunity to embed existing code into FTL and execute it as part of the model transformation.
Due to the fact that the compiler might not be aware of those types and functions at compile time, e.g. because they are only loaded at runtime, you have to give the compiler hints about how they fit into FTL. The following listing contains a single function using java.util.List and java.lang.String:

def usageOfJavaTypes(list: java.util.List): String {
  val f: Farmer = list.get(0);
  val name: String = f.toString();
  return name;
}

Note that using types of the target language might reduce portability. If the FTL transformation were compiled to C++, compilation would fail unless the compiler replaced String with string and java.util.List with list. Furthermore, native method calls such as get() and toString() are not available in C++ either.
Nevertheless, this kind of extensibility bridges the gap between FTL and the target language and allows tailored model-to-model transformations combined with other technologies. This opportunity is therefore a clear benefit compared to other model transformation languages like ATL.

Getting FTL


FTL (including the Compiler and the IDE based on Eclipse) is published under the Eclipse Public License 1.0 and can be downloaded from Google Code:
https://code.google.com/p/functional-transformation-language/

Friday, June 28, 2013

Change Impact Analysis & Constraint Violation Detection

by: Markus Zoffi, Christina Zrelski, Roland Jöbstl, Christian Johannes Tomaschitz 

Whenever teamwork is part of the development process, merging concurrently edited files cannot be avoided. If these files only contain code, the merge process can usually be resolved easily; for more complex artifacts, such as models, the merging process becomes considerably more complex. During the course Advanced Model Engineering, we worked on an efficient way to detect constraint violations after merging models, which is presented in the following.

Goal of the Project

The overall idea of this project was to perform a change impact analysis of model versions that evolved in parallel. Each model version is valid in itself, but the combination of different and/or parallel modifications causes one or more constraint violations. The resulting constraint violations should not only be detected; we also wanted to identify the cause of each violation.

Defining Metamodel and OCL Constraints

The first step and general setup for the project was to create a simple metamodel and add some OCL constraints. Our metamodel is basically a simplification of a UML class diagram; it contains four classes: SimpleClassDiagram, SimpleClass, SimpleAttribute, and SimpleRelation.

A SimpleClassDiagram consists of a number of SimpleClasses, which can in turn specify inheritance (superclass relation), attributes (relation to SimpleAttribute) and relations (bidirectional relation to SimpleRelation). A SimpleAttribute specifies certain properties of a SimpleClass, each one having a name and type, whereas a SimpleRelation defines the relation between classes as well as the properties of the relation itself (minCardinality, maxCardinality).


Afterwards we defined some basic OCL constraints in the metamodel (using OCLinEcore) which could easily be violated during a merge process, for our testing purposes.
The following four constraints were used:

  • Unique Attribute Name:
    context SimpleClass
    inv: self.attributes->isUnique(name);
  • Limit Number Of Attributes:
    context SimpleClass
    inv: self.attributes->size() < 6;
  • Attribute Name Not Empty:
    context SimpleAttribute
    inv: self.name->notEmpty();
  • Cardinality of Relation Within Range:
    context SimpleRelation
    inv: self.minCardinality > -1 and self.maxCardinality < 50 and self.minCardinality <= self.maxCardinality;

Merging the Models 

In the context of model merging and diff-model analysis we used the EMF Compare framework, which provides comparison and merge facilities for any kind of EMF model. For the merging task we use a three-way merge, which means that two revised versions and their common ancestor model have to be specified. The ancestor model is treated as the reference for identifying and connecting the differences.
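As a toy illustration of the three-way merge idea (EMF Compare merges whole models; this sketch only merges sets of attribute names, assuming there are no conflicting edits):

```java
import java.util.*;

// Toy illustration of a three-way merge of attribute names against a
// common ancestor (illustrative only; not how EMF Compare is implemented).
public class ThreeWayMergeDemo {
    static Set<String> merge(Set<String> ancestor, Set<String> left, Set<String> right) {
        Set<String> merged = new LinkedHashSet<>(ancestor);
        // Apply additions from both revised versions.
        for (String s : left) if (!ancestor.contains(s)) merged.add(s);
        for (String s : right) if (!ancestor.contains(s)) merged.add(s);
        // Apply deletions: anything either side removed is removed.
        for (String s : ancestor) {
            if (!left.contains(s) || !right.contains(s)) merged.remove(s);
        }
        return merged;
    }

    public static void main(String[] args) {
        Set<String> ancestor = Set.of("name", "type");
        Set<String> left = Set.of("name", "type", "id");   // added "id"
        Set<String> right = Set.of("name", "size");        // removed "type", added "size"
        System.out.println(merge(ancestor, left, right));
    }
}
```

Note how both sides' changes are applied relative to the ancestor; this is exactly why each merged version can be valid in isolation while their combination violates a constraint.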

Detecting the Violations 

The first step was to iterate over the given models and identify which model elements matched each other. Afterwards, for each pair of matching elements, the differences between them were detected.
Each difference corresponds to a change (i.e., an ADD, CHANGE, DELETE, or MOVE operation). In detail, all model elements that changed were processed and compared to the invalid trace object. The goal was to identify the model element responsible for the validation error in the OCL constraint. This information was then processed in a further step to generate the output for the user.
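To sketch the idea, two of the OCL constraints above can be mirrored as plain Java checks over a merged list of attribute names (illustrative only; the project itself evaluates the OCLinEcore constraints on the merged EMF model):

```java
import java.util.*;

// Sketch of checking two of the OCL constraints in plain Java and
// reporting which constraint was violated (names are illustrative).
public class ViolationDemo {
    static List<String> check(List<String> attributeNames) {
        List<String> violations = new ArrayList<>();
        // Limit Number Of Attributes: self.attributes->size() < 6
        if (attributeNames.size() >= 6) {
            violations.add("LimitNumberOfAttributes");
        }
        // Unique Attribute Name: self.attributes->isUnique(name)
        if (new HashSet<>(attributeNames).size() != attributeNames.size()) {
            violations.add("UniqueAttributeName");
        }
        return violations;
    }

    public static void main(String[] args) {
        // Each side added attributes independently; the merged class has six.
        List<String> merged = List.of("a", "b", "c", "d", "e", "f");
        System.out.println(check(merged)); // [LimitNumberOfAttributes]
    }
}
```

This mirrors the merge scenario below: each version respected the limit, but the merged class exceeds it.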

Demonstrating our approach
For our demonstration let's take the following two models:


The merging process leads to a violation of the Limit Number Of Attributes constraint, with the following output.



Monday, April 29, 2013

How to create an EMF Profile

Whenever you want to store additional information in EMF-based models and you cannot or don't want to change the respective metamodel, EMF Profiles provide a convenient solution for extending metamodels in a non-intrusive and lightweight manner, comparable to how UML profiles allow extending UML. To extend a metamodel, all you have to do is create a profile, which can then be applied to models in order to store the additional information you want to add to the original model. In the following, we show step by step how profiles are created.

Extending the library metamodel

For illustrating the process of creating a profile, we extend the well-known library example metamodel. This metamodel basically contains three classes: Library, Writer, and Book.

Now assume, we need to add additional information to library models without having to change the metamodel and, thus, without the need to re-generate the model-, edit-, and editor code. In particular, we want to be able to store whether a book is an ebook and, if it is an ebook, in which format it is available (such as ePub or PDF). Additionally, we want to be able to annotate books with tags.

Creating a profile project

To create a new profile project, we start the dedicated wizard for profile projects by selecting File > New > Other... and choosing EMF Profile > EMF Profile Project.


After clicking Next, we enter a name for the profile project, as well as additional information about the profile, such as its name and its namespace URI. For the name and the namespace URI, it is recommendable to apply the same naming conventions as for metamodels.


After hitting the Finish button, the wizard creates the EMF profile project and opens the created empty EMF profile diagram, which has been initialized with the information provided in the wizard already, as can be seen in the Properties view.

Modeling the profile

A profile consists of stereotypes, which extend EClasses of metamodels. In our scenario, we want to store additional information to instances of the class Book of the library metamodel. Thus, we will create stereotypes that extend the class Book. Therefore, we first have to import the class Book. To do that, we use the popup menu (right-click the canvas of the profile diagram editor). In the popup menu, we now select Import Metamodel Element... and choose the extlibrary metamodel and, in the next step, the EClass Book. As a result, we obtain a shortcut to Book in our profile diagram canvas.



Now, we can start creating stereotypes using the palette in the profile diagram editor. To store whether a book (i.e., an instance of the class Book) is an ebook, we create the stereotype EBook. As we want to apply this stereotype to instances of Book, we create an Extension relationship from the created stereotype to the imported class Book. Extension relationships have a multiplicity (e.g., 0..1, 0..*, 2..4, etc.). This multiplicity specifies how often a stereotype can be (and must be) applied within one profile application to an instance of the extended class (e.g., Book). In our scenario, the stereotype EBook should not be mandatory, since we might have books that are not ebooks and, thus, should not be annotated with the stereotype EBook. Thus, we use a lower-bound multiplicity of 0. Further, it wouldn't make any sense to apply the stereotype EBook multiple times to the same book. To prohibit multiple applications, we set the upper-bound multiplicity to 1.


Stereotypes may carry additional data. One way to allow for adding additional data is to add tagged values to stereotypes. Since we want to store the format of an ebook, we add a tagged value named format to the stereotype EBook. Tagged values are basically attributes and can have primitive Ecore types, such as EString, EBoolean, etc., as well as custom EEnum types. We want to store whether an EBook is available in the format EPUB or PDF; thus, we create a new EEnum named EBookFormat with two literals, EPUB and PDF, and set the type of the attribute format to EBookFormat.

Side note: Although this isn't needed in our example, stereotypes may, in general, also contain cross references to existing model elements (in the base model or in the profile application), as well as containment references (i.e., the stereotype application owns additional model elements). For creating references, just use the palette in the diagram editor and set the properties of the added reference accordingly. You can also use inheritance among stereotypes, if needed.

As mentioned above, we also want to be able to annotate books with tags. Therefore, we add another stereotype named TaggedBook, which also extends the class Book, and add a tagged value named tags of type EString to the created stereotype. To enable multiple tags for one book, we may set tags to be multi-valued (i.e., an upper-bound of -1 in the properties of the tagged value) and/or set the multiplicity of the stereotype to be 0..*, which specifies that the stereotype TaggedBook may be applied arbitrarily often to one instance of a book.

Registered profiles

Having finished the profile, we may validate it and save it. If the profile is valid, it will be registered automatically to the local profile registry. You may inspect all currently registered profiles using the Registered EMF Profiles view, which can be added using Window > Show View > Other... > EMF Profiles > Registered EMF Profiles.


As can be seen in the screenshot above, the created profile has been registered correctly. Note that you can still modify your profile; it will be synchronized automatically with the local profile registry, as long as your profile is valid. If it is invalid, it will disappear from the registry until you fix the validation error.

Once your profile is available in the local registry you can apply it to models that conform to the extlibrary metamodel. This, however, will be covered in future posts... so make sure to stay tuned.