

The development of Empusa

During the development of the GBOL ontology it became increasingly difficult to handle the changes made in the ontology. We had previously developed RDF2Graph to reveal the structure of a semantic database, but it was only useful once a database had already been created. To enforce that parsers adhere to the ontology, we required a more advanced solution in the form of an API generator based on an ontology.

To manage the large variety of properties and classes in an easy-to-use format, we have developed Empusa as part of the GBOL Stack. Empusa is a Java application that converts OWL/ShEx-like ontologies into an API for Java and R, plus an ontology website.

As an example, for the GBOL ontology alone, Empusa generates from a 4,000-line ontology a Java API of 50,000 lines, an R API of 12,000 lines, an OWL file of 12,044 lines, a ShExC file of 3,202 lines, and the website you are currently viewing.

The input file for Empusa is a combination of OWL and a simplified version of ShEx, which can be edited within, for example, Protégé.

The classes are defined in OWL, whereas the properties of each class are defined under the annotation property ‘propertyDefinitions’, encoded in a simplified version of the ShEx standard.
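As an illustrative sketch (the class, prefixes and exact annotation IRI below are assumptions for illustration, not taken from the example ontology), a class with its propertyDefinitions annotation could look like this:

```ttl
# Hypothetical class whose properties are encoded in the simplified ShEx
# format inside the 'propertyDefinitions' annotation property.
:Person rdf:type owl:Class ;
    rdfs:subClassOf owl:Thing ;
    :propertyDefinitions """
        #* The full name of the person
        foaf:name xsd:String;
        #* Zero or more documents authored by this person
        bibo:authorOf @bibo:Document*;
    """ .
```

Empusa reads the annotation value and generates the corresponding getters and setters for each listed property.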

Additionally, predefined value sets (for example, all article types) can be defined by adding a subclass to the EnumeratedValueClass. Each subclass of the value set is represented as one element within the value set.
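For example, a value set for article types could be sketched as follows (all names here are illustrative):

```ttl
# Hypothetical value set: each subclass of ArticleType becomes one element
:ArticleType rdf:type owl:Class ;
    rdfs:subClassOf :EnumeratedValueClass .

:ResearchArticle rdf:type owl:Class ;
    rdfs:subClassOf :ArticleType .

:ReviewArticle rdf:type owl:Class ;
    rdfs:subClassOf :ArticleType .
```

A property can then point to this value set with `type::ArticleType`, restricting its values to the elements defined above.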

Altogether, Empusa shortens the development cycle and eases the development, consistency and maintenance of GBOL and its associated framework, as it generates all the elements from one single source.

Using Empusa

To use the Empusa application, the ontology (input file) needs to be written in a combination of OWL/ShEx, for which structural examples using Protégé are given below.

Obtaining Empusa


The Empusa code generator can be obtained from here. The EmpusaCodeGen.jar can be used directly to convert a defined OWL/ShEx ontology into the corresponding API / ShEx / OWL / documentation files.

Code base

If you are interested in the further development of Empusa, you can access the code base and the various modules at:

The application can be installed through the following command, which is located in the obtained folder:

./install

Getting started - Building your own ontology

Example ontology

An example of an ontology project can be found at:

To obtain it through git, you can run the following command:

git clone

There are two turtle files located in this cloned project:

  • ExampleAdditional.ttl
  • example-ontology.ttl

First, ExampleAdditional.ttl is a file containing extra information about your ontology and should be changed accordingly:

@prefix up: <> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xml: <http://www.w3.org/XML/1998/namespace> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix gen: <> .
@base <> .

<> rdf:type owl:Ontology ;
    gen:codeProjectGroup "";
    gen:codeProjectName "Example";
    gen:codeProjectVersion "0.2";
    gen:codeProjectFullName "Example Api";
    gen:codeProjectDescription "The Example API code generated from the Example owl file" .

The values of the codeProject properties can be modified to your liking, corresponding to the ontology you are making.

The second file, example-ontology.ttl, contains the ontology itself. We used Protégé to create this ontology file, so it can easily be opened with Protégé.

The following image gives an overview of the example ontology.

Example ontology

The root of the ontology is owl:Thing, under which all the other subclasses are defined.

An important class for the API is the EnumeratedValueClass. Within this class, a limited set of values for a specific property can be defined. For example, when a class has the property country, it should be defined as:

#* The country 
country type::Country;

This makes sure that the predicate country can only choose from the list of countries available in the subclass Country of the EnumeratedValueClass. This ensures strict adherence to an accepted naming scheme.

Linking to other classes can be done via:

bibo:presents @bibo:Document*;
bibo:organizer @foaf:Agent*;
bibo:place xsd:String*;

Here @bibo:Document points to the Document class located under Literature in the same ontology file, and the organizer points to an Agent class. The @bibo: notation makes use of the predefined prefixes, in which bibo corresponds to http://purl.org/ontology/bibo/.

To define other types, such as string, integer, date, etc., the following notation is used:

bibo:shortTitle xsd:String?;
dc:created xsd:date?;
bibo:numPages xsd:Integer?;

To restrict the number of values a specific predicate can have, the symbols *, ?, +, = and ~ are used, where * denotes 0..N, ? denotes 0..1 and + denotes 1..N. The = and ~ signs can be used to specify that the references are stored as an ordered list, to ensure that the elements are numbered.
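Putting the cardinality symbols together, a property block in the simplified ShEx format could look like this (the property names are illustrative):

```shex
#* At most one short title (0..1)
bibo:shortTitle xsd:String?;
#* One or more authors (1..N)
bibo:authorList @foaf:Agent+;
#* Any number of keywords (0..N)
bibo:keyword xsd:String*;
#* Contributors stored as an ordered, numbered list
bibo:contributorList @foaf:Agent=;
```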

For a more complex ontology written in this format have a look at the root-ontology.ttl in the GBOL git directory.

Generating the API

Once you have defined your ontology (or a part of it), the API can be created. This is achieved through the EmpusaCodeGen.jar:

java -jar EmpusaCodeGen.jar

The following options are required: [-o | -output], [-i | -input]
Usage: <main class> [options]

    -rg, -RDF2Graph
      File to write RDF2Graph file
    -r, -Routput
      The directory into which the R project should be generated
    -sC, -ShExC
      Generate ShExC file
    -sR, -ShExR
      Generate ShExR file
      Generate a documentation page
    -eb, -excludeBaseFiles
      Do not overwrite the base project and pom files
      Default: false
  * -i, -input
      The additional file followed by the ontology to use
      Generate json framing file
  * -o, -output
      The directory into which the project should be generated
    -owl
      Generate official OWL file
    -sNP, -skipNarrowingProperties
      RDF2Graph export skip property already defined in parent class
      Default: false

  * required parameter

For example to build the Example ontology:

java -jar EmpusaCodeGen.jar -i ExampleAdditional.ttl -i example-ontology.ttl -o ./MyJavaApi -owl ./file.owl -ShExC ./file.shex

This creates a MyJavaApi folder in which all the source code files and Gradle build files are located. Using the provided install script, you can immediately compile the Java code into a jar package, so that you can more easily integrate it as a dependency in an existing code base.


Using the API

After compiling the API from the ontology, we now focus on how to use the API to create your RDF files.

Using the API through the source code files

This example is written for IntelliJ, but it should also work in Eclipse.

We first load the code base as a new project into IntelliJ.

Project overview

  • I have created a new empty project in which the API folder was imported as a module with Gradle support.

  • MyProgram.class is created inside the following path MyJavaApi/src/main/java/nl/wur/ssb/, using the code outlined below:

package nl.wur.ssb;

import com.xmlns.foaf.domain.Agent;
import java.io.File;
import java.time.LocalDate;
import nl.wur.ssb.RDFSimpleCon.RDFFormat;
import nl.wur.ssb.RDFSimpleCon.api.Domain;
import org.purl.ontology.bibo.domain.Book;

public class MyProgram {

  public static void main(String[] args) throws Exception {
    // An empty string creates a temporary in-memory store
    Domain domain = new Domain("");

    // Creating the book object using the class type and a URL
    Book book = domain.make(org.purl.ontology.bibo.domain.Book.class, "");

    book.setAbstract("The abstract of the book");

    // The book was accepted today! (setter name assumed from the bibo:dateAccepted property)
    book.setDateAccepted(LocalDate.now());

    Agent agent = domain.make(com.xmlns.foaf.domain.Agent.class, "http://example/com/myAgent007");
    agent.setName("James Bond");

    // Saving the database to a file in the TURTLE format
    // (the exact name of the save method is an assumption)
    String output = new File("example.ttl").getAbsolutePath();
    domain.save(output, RDFFormat.TURTLE);

    System.out.println("Saved to: " + output);
  }
}

Breaking it down into smaller steps the following is happening:

Standard java class creation:

package nl.wur.ssb;

import com.xmlns.foaf.domain.Agent;
import java.time.LocalDate;
import nl.wur.ssb.RDFSimpleCon.RDFFormat;
import nl.wur.ssb.RDFSimpleCon.api.Domain;
import org.purl.ontology.bibo.domain.Book;

public class MyProgram {

  public static void main(String[] args) throws Exception {

RDF / Jena store creation: using "" creates a temporary in-memory store that is available as long as the code runs. Persistent disk stores can be used by supplying a "file://" path.

Domain domain = new Domain("");

Once the domain object is created, classes can be spawned and filled with content.

For example, to create a book entry:

Book book = domain.make(org.purl.ontology.bibo.domain.Book.class, "");

The first argument is the class: the package path as present in the "API" folder of this project, plus the class name (here Book) and .class. The second argument is the URL of this entry.

A nice feature of the Java API is that you can discover which properties are available for the Book class and all its parent classes: just type the variable name followed by a dot, and the possibilities should pop up.

For example, to set the abstract, a string object is required and can be directly written as:

book.setAbstract("The abstract of the book");

Or, instead of a string, setting a date, for example the date a publication was accepted, can be done like this (the setter name is assumed from the bibo:dateAccepted property):

book.setDateAccepted(LocalDate.now());
Connecting properties can be done via first creating the other object for example the Author of a book which is an Agent:

Agent agent = domain.make(com.xmlns.foaf.domain.Agent.class,"http://example/com/myAgent007");

And what is an author without a name?

agent.setName("James Bond");

Now connecting the book to the author:

book.addAuthorList(agent);
As a book can have multiple authors, book.addAuthorList(agent) can be used multiple times to add other authors, e.g. book.addAuthorList(agent2).

Finally, you would like to save the RDF database to a formatted RDF file (the exact name of the save method is an assumption):

domain.save(output, RDFFormat.TURTLE);

Using the JAR file

You can also incorporate the JAR file into an already existing JAVA project.

To create a basic Java project in which you would like to include the jar, execute the following in a new folder (e.g. test):

mkdir test
cd test
mkdir libs #Needed later
gradle init --type java-application

Open the build.gradle file in IntelliJ as a new project.

Use the Gradle 'wrapper' task configuration.

Once you are all set, you should see the newly generated Java main and test classes.

To incorporate your API, you need to copy the generated jar file into the libs folder.

Now you should modify the build.gradle file so that the jar files inside the libs folder are recognised (the fileTree line):

dependencies {
    // This dependency is found on compile classpath of this component and consumers.
    compile ''

    // Use JUnit test framework
    testCompile 'junit:junit:4.12'

    // This includes all jar files from the libs folder
    compile fileTree(dir: 'libs', include: '*.jar')
}

The editor should detect that a change occurred and allow you to reload the Gradle build file by importing the changes.

For example:

public class App {
    public String getGreeting() {
        return "Hello world.";
    }

    public static void main(String[] args) throws Exception {
        Domain domain = new Domain("");
        Book book = domain.make(org.purl.ontology.bibo.domain.Book.class, "");
        book.setAbstract("Abstract of the book");
        // Save the store to book.ttl (the exact name of the save method is an assumption)
        domain.save("book.ttl", RDFFormat.TURTLE);
        System.out.println(new App().getGreeting());
    }
}

When executed, this should create the book.ttl file, which contains:

@prefix owl:   <http://www.w3.org/2002/07/owl#> .
@prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .

<>      a       <http://purl.org/ontology/bibo/Book> ;
        <http://purl.org/ontology/bibo/abstract>
                "Abstract of the book" .

There are more possibilities with the API, such as direct querying, merging databases and more!

Feel free to contact us for any Empusa / API / OWL / ShEx related issues.