
Migration Guide

This page contains guides for migrating Stardog from one major version to another (e.g., 6.x to 7.y).

Page Contents
  1. Migrating to Stardog 9.0.0
    1. Java Upgrade
    2. Docker Considerations
    3. System Upgrade
    4. Inference and Reasoning
    5. BI/SQL and GraphQL Schema Generation
    6. Data Quality Validation
    7. Removal of Embedded Server
    8. No Default Anonymous User
  2. Migrating to Stardog 8.1.0
  3. Migrating to Stardog 8
    1. OWL Constraints
    2. Archetypes
  4. Migrating to Stardog 7
    1. Migrating Single-Server Stardog
    2. Migrating Docker-hosted Stardog
    3. Migrating Stardog Cluster
    4. Disk Usage and Layout
    5. Web Console Removed
    6. Memory Databases
    7. Memory Configuration
    8. Database Optimization & Compaction
    9. Database Configuration
    10. Snapshot Isolation
      1. Last Commit Wins
      2. Abort on Conflict
    11. Configuration for new Stardog 7 features
  5. Migrating to Stardog 6
    1. Stark API
      1. Updating your code
    2. Predictive Analytics Vocabulary

Migrating to Stardog 9.0.0

There are two major changes in Stardog 9 compared to Stardog 8:

  1. Stardog 9 requires Java 11 and will not work on Java 8.
  2. Stardog 9 uses a different storage layout for its system database.

We will describe the necessary steps for upgrading from Stardog 7 or 8 to Stardog 9 in the following sections and explain any changes in other functionality.

Java Upgrade

You will need to ensure that Java 11 is installed in the environment where Stardog 9 will run. If it is not, follow the instructions for your operating system to install Java, and verify the installed version with the command java -version.
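As a quick sanity check before starting the server, the major version can be parsed out of the string that java -version reports. A minimal shell sketch (the helper name java_major is made up for this example; it is not part of Stardog):

```shell
# Extract the Java major version from a version string, mapping the old
# "1.x" scheme (e.g. "1.8.0_292") to its real major number (8).
java_major() {
  v="$1"
  case "$v" in
    1.*) v="${v#1.}" ;;   # old scheme: "1.8.0_292" -> "8.0_292"
  esac
  printf '%s\n' "${v%%[._-]*}"
}

java_major "11.0.19"    # prints 11: OK for Stardog 9
java_major "1.8.0_292"  # prints 8: too old, install Java 11
```

A startup script could compare the result against 11 and refuse to launch Stardog 9 on anything older.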

Docker Considerations

The Docker image of Stardog 9 includes Java 11, so no further action is needed. However, /bin/sh and some other commands will no longer work in the Stardog 9 Docker container because the underlying image switched from CentOS to Ubuntu. Users should switch to /bin/bash and other Ubuntu equivalents.

System Upgrade

There is a change to the storage format of the system database that prevents reverting to version 8.x once the upgrade to 9 has been made. The upgrade, once allowed, is completely automated, and the system database will be updated to the new version automatically. Follow these instructions for the upgrade:

  1. Back up your server data following these instructions prior to upgrading to this release.
  2. Allow Stardog to upgrade to the new version by either: a. running the server start command with the --upgrade argument, or b. adding the entry upgrade.automatic = true to stardog.properties in the STARDOG_HOME directory.

There is no separate migration or upgrade command. The upgrade is expected to complete very quickly regardless of the size of the data in your databases, because only the contents of the system database are affected.

If you updated stardog.properties to enable the automatic upgrade, you should remove the entry afterwards to avoid unintentional automated upgrades in the future.
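For reference, the one-time entry in stardog.properties would look like this (remove it once the upgrade completes):

```properties
# STARDOG_HOME/stardog.properties
# Allow the one-time automatic upgrade of the system database
upgrade.automatic = true
```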

Stardog 9 will not automatically migrate home directories created before version 7. If you would like to upgrade from an earlier version, please follow the instructions for migrating to Stardog 7 first and then upgrade to Stardog 9.

Inference and Reasoning

Stardog 9 introduces a new inference engine named Stride. See the documentation for details about this engine. The new inference engine will not take effect unless the reasoning.stride database option is set to true.

There are no changes to the existing reasoning behavior, except that the DL reasoning type, which was deprecated in previous versions, has been completely removed in Stardog 9. If you have set the configuration option reasoning.type=DL, those databases will be set to offline mode automatically after upgrading to Stardog 9. The configuration option must be set to a different type before the database can be brought online again.

BI/SQL and GraphQL Schema Generation

Stardog provides the capability to auto-generate GraphQL schemas and BI/SQL schema mappings. Schema generation can use either OWL declarations or SHACL shape definitions; the database configuration options graphql.auto.schema.source and sql.schema.auto.source determine which source is used. If OWL is used as the schema generation source, Stardog 9 will not use the reasoner by default. This change has several implications:

  1. Schema generation is more performant and uses less memory.
  2. The annotation properties so:domainIncludes and so:rangeIncludes from the schema.org namespace will be treated similarly to rdfs:domain and rdfs:range for creating attributes and columns.
  3. The generated schema will not reflect any inference results. For example, inverse property declarations in your schema will have no effect on the generated schema.

If you would like to generate the schemas with inference then you can use the following command to generate the schema manually and register it with Stardog:

$ stardog data model --reasoning --input owl --output sql DB

Please refer to the BI/SQL and GraphQL sections for the details of registering schemas manually.

Data Quality Validation

Stardog 9 introduces new SPARQL extensions for performing data quality validation with SHACL. The details of this new capability are explained in the [Data Quality Constraints](../data-quality-constraints) section. Before Stardog 9, the only way to perform data validation was to use the API or the CLI. These methods still work in Stardog 9, and the validation results should not change as a result of upgrading. The data validation API in Stardog 9 is a thin wrapper that generates SPARQL VALIDATE queries, but its behavior is backward-compatible.

Removal of Embedded Server

Before Stardog 9 it was possible to run a Stardog server embedded within the application JVM and connect to it without going through the HTTP layer. This capability had several shortcomings and was clearly stated not to be a production feature. Most notably, embedded deployments suffered from server stability issues, and several capabilities behaved differently compared to standard server deployments. For these reasons, the capability to run Stardog in embedded mode has been completely removed in Stardog 9. The main use case for the embedded server was simpler setup in unit tests. Please see the Stardog API examples for how to deploy a Stardog server within a JVM and communicate with it via HTTP.

No Default Anonymous User

When the Stardog server is started for the very first time with an empty STARDOG_HOME directory, it creates an admin user. Prior to version 9.0, another user named anonymous was also created automatically with read access to every resource on the server. Due to the security complications caused by this setup, Stardog 9.0 will not create the anonymous user by default. If the stardog.properties configuration file contains the setting create.anonymous.user=true, the anonymous user will be created as in earlier versions. Note that if the anonymous user has already been created, upgrading to Stardog 9 will not cause the user to be dropped.
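If you depend on the old behavior, the setting mentioned above can be added to stardog.properties (not recommended for servers exposed to untrusted clients):

```properties
# STARDOG_HOME/stardog.properties
# Recreate the pre-9.0 anonymous user on first start
create.anonymous.user=true
```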

Migrating to Stardog 8.1.0

There is a change to the storage format for data sources that prevents reverting to version 8.0.1 once the upgrade to 8.1.0 has been made. Please back up your system database prior to upgrading to this release.

Migrating to Stardog 8

There are no major changes to the storage format in Stardog 8 compared to Stardog 7, so no migration steps are required. You can start a Stardog 8 server using a Stardog home directory created by Stardog 7. Deprecated configuration options will be removed automatically the first time Stardog 8 starts, which should not take any noticeable amount of time. Due to these changes, downgrading to Stardog 7 with that same home directory is not recommended. As always, it is recommended to create a backup of your home directory before changing versions as an additional precaution.

Several features that were deprecated in Stardog 7 are removed in Stardog 8. The migration instructions for these specific features are described below.

OWL Constraints

Stardog 8 only supports constraints written in SHACL and completely removes support for constraints written in OWL. As a consequence, there is no icv add command anymore. SHACL constraints can be added to a database using the regular data add command instead.

There is a utility class that can automatically translate some of the OWL constraints to semantically equivalent SHACL constraints. The following snippet shows how this utility can be used programmatically:

    // import com.complexible.stardog.icv.ShaclGenerator;

    // Read the OWL constraints from a file
    Set<Statement> input = RDFParsers.read(Paths.get("path-to-OWL-file"));

    // Translate the OWL constraints into SHACL shapes
    List<Shape> shapes = new ShaclGenerator().convertStatements(input);

    ShaclWriter writer = new ShaclWriter();
    shapes.forEach(writer::writeShape);

    // Serialize the shapes as Turtle with the sh: prefix declared
    NamespacesImpl namespaces = new NamespacesImpl(Namespaces.DEFAULT);
    namespaces.add("sh", SHACL.NS);

    RDFWriters.write(System.out, RDFFormats.PRETTY_TURTLE, writer.getGraph().graph(), namespaces);
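For illustration, here is a hypothetical OWL cardinality constraint and the kind of semantically equivalent SHACL shape such a translation can produce (the prefix and names are invented for this example):

```turtle
# OWL constraint (input): every :Person must have at least one :name
:Person rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty :name ;
    owl:minCardinality 1
  ] .

# Equivalent SHACL shape (output)
:PersonShape a sh:NodeShape ;
  sh:targetClass :Person ;
  sh:property [
    sh:path :name ;
    sh:minCount 1
  ] .
```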

Archetypes

Stardog 8 does not have any built-in archetypes. If you would like to use the archetypes, follow the instructions described in the Stardog Archetype Repository.

Migrating to Stardog 7

Stardog 7 introduces a new storage engine and snapshot isolation for concurrent transactions. This section provides an overview of those changes and how they affect users and programs written against previous versions.

The new storage engine in Stardog 7 introduces a completely new disk index format and databases created by previous versions of Stardog must be migrated in order to work with Stardog 7. There is a dedicated CLI command for migrating the contents of an existing Stardog home directory (i.e., all of the databases in a multi-tenant system).

The following instructions migrate all the databases in an existing STARDOG_HOME directory. Instead of migrating all the databases, you can start with a new empty home directory and restore selected databases from backups created by Stardog versions 4 or 5. If you are migrating very large databases, you should increase the memory settings by setting the environment variable STARDOG_SERVER_JAVA_ARGS.

Migrating Single-Server Stardog

The steps for a single server migration:

  1. Stop the existing Stardog server; do not start Stardog 7 or have either server running
  2. Create a new empty Stardog home folder (we’ll call it NEW_HOME)
  3. Copy your license file to NEW_HOME
  4. Install Stardog 7
  5. cd to where you’ve installed Stardog 7
  6. # OLD_HOME is the STARDOG_HOME before you start the migration
    $ stardog-admin server migrate OLD_HOME NEW_HOME
    
  7. Set STARDOG_HOME (in your .bashrc profile or otherwise) to be equal to NEW_HOME.
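The directory handling in steps 1-2 can be checked mechanically before copying the license and running the migrate command. A hedged pre-flight sketch (preflight is a made-up helper, not a Stardog command):

```shell
# Verify OLD_HOME exists and NEW_HOME is an empty directory before running
# `stardog-admin server migrate OLD_HOME NEW_HOME`.
preflight() {
  old="$1"; new="$2"
  [ -d "$old" ] || { echo "OLD_HOME $old does not exist" >&2; return 1; }
  mkdir -p "$new"
  if [ -n "$(ls -A "$new")" ]; then
    echo "NEW_HOME $new is not empty" >&2
    return 1
  fi
  echo "ready to migrate $old -> $new"
}
```

Copying the license file into NEW_HOME (step 3) is left to the operator, since the license location varies by installation.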

The command will migrate the contents of each database along with the system database, which contains users, roles, permissions, and other metadata. Progress will be printed to STDOUT; the migration can take a significant amount of time if you have large databases. The stardog.properties file (if it exists) will not be copied automatically. See Disk Usage and Layout for changes to the configuration options and other information.

Migrating Docker-hosted Stardog

The migration process for Stardog running in Docker is effectively the same, with a couple of Docker-specific differences.

  1. Stop your Docker container.
  2. Create a new directory on the Docker host machine (we’ll call it NEW_HOME).
  3. Copy your license file to NEW_HOME
  4. Run the Stardog 7 Docker container in the following way, which will bring you to a command prompt within the container:
  5. # OLD_HOME is the STARDOG_HOME before you start the migration
    $ docker run -v <path to NEW_HOME>:/var/opt/stardog -v <path to OLD_HOME>:/old_stardog \
      --entrypoint /bin/bash -it stardog-eps-docker.jfrog.io/stardog:6.0.0-alpha
    
  6. Run the Stardog 7 migration tool in the following way:
  7. $ stardog-admin server migrate /old_stardog /var/opt/stardog
    
  8. Set STARDOG_HOME (in your .bashrc profile or otherwise) to be equal to NEW_HOME.

Migrating Stardog Cluster

The migration steps for the cluster:

  • Stop all of the cluster nodes, but not the ZK cluster
  • Follow the above steps for single server migration on any one cluster node
  • Run the command stardog-admin zk clear
  • Start the node where migration completed with Stardog 7
  • On the other cluster nodes, create empty home folders
  • Start another node, wait for the node to join the cluster, and then repeat for each cluster node

Disk Usage and Layout

The layout of data in the Stardog 7 home directory differs from all previous versions. Previously, the data for a database was stored under a directory with the name of the database. In Stardog 7, the data for all databases is stored in a directory named data in the home directory. The per-database directories still exist, but they contain only index metadata, along with the search and spatial indexes if those features are enabled.

The disk usage requirements for Stardog 7 are higher than for Stardog 6. The actual difference depends on the characteristics of your data, but you should expect a 20% to 30% increase in disk usage. As in Stardog 6, the disk usage of bulk-loaded databases, e.g. when data is loaded by the db create command, will be lower than when the same data is added incrementally, that is, in smaller transactions over time.

Web Console Removed

The web console, which had been deprecated in Stardog 6, has been removed entirely from Stardog 7. We encourage you to use Stardog Studio instead.

Memory Databases

Stardog 7 no longer supports in-memory databases. If keeping all data in memory is desired, we recommend placing the home directory on a RAM disk and creating databases in the usual way.

Memory Configuration

Stardog 7 uses a new storage engine (RocksDB), which is a native library. No changes to the JVM memory settings are required; Stardog will allocate memory to the storage engine from its off-heap pool. As with Stardog 6, users provide limits for the Java heap memory (-Xmx option) and the off-heap memory (-XX:MaxDirectMemorySize option). See Memory Usage for details.
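For example, memory limits can be passed to the server through the STARDOG_SERVER_JAVA_ARGS environment variable; the values below are placeholders to tune for your data and hardware:

```shell
# -Xmx/-Xms bound the Java heap; -XX:MaxDirectMemorySize bounds the off-heap
# pool from which Stardog allocates memory for the storage engine.
export STARDOG_SERVER_JAVA_ARGS="-Xmx8g -Xms8g -XX:MaxDirectMemorySize=16g"
```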

Database Optimization & Compaction

As with Stardog 6, Stardog 7 performance degrades over time as the database is updated with transactions: disk usage will continue to increase, and data deleted by transactions will not be removed from disk. The existing db optimize command can be used to perform index compaction on disk to improve the performance of reads and writes. The optimize command now provides additional options that let administrators specify exactly which optimization steps to perform.

Database Configuration

All server and database options and their meanings are unchanged in Stardog 7, with the following exceptions:

  • Options starting with index.differential and index.writer are ignored. Stardog 7 has a new mechanism that replaces the previous implementation of Differential Indexes and Read-Your-Writes.
  • transaction.isolation needs to be set to SERIALIZABLE for ICV Guard Mode in order to ensure data integrity with respect to the constraints.

Snapshot Isolation

Stardog 7 uses a multi-versioned concurrency control (MVCC) model, providing lock-free transactions with snapshot isolation guarantees. Stardog 6 provided a weaker snapshot isolation mechanism that required writers to acquire locks, which sometimes blocked other transactions for a very long time; this is no longer the case. As a result, the performance of concurrent updates is greatly improved in Stardog 7, especially in the cluster setting.

There are two modes for MVCC transactions, selected via the transaction.write.conflict.strategy database option, which determines how conflicting changes made by two concurrent transactions are handled.

Last Commit Wins

This is the default behavior (transaction.write.conflict.strategy=last_commit_wins): the change made by the last committed transaction is accepted. If two concurrent transactions try to add or remove the same quad, the change made by the transaction that committed last is accepted while the other change is silently ignored. This is similar to the Stardog 6 behavior, which used locks to achieve the same effect in a less efficient way.

This option provides the best write throughput, but it also means Stardog cannot maintain the aggregate indexes it otherwise uses for statistics and for answering some queries. For this reason, the database option index.aggregate is set to off in this mode.

This also means Stardog cannot track the exact size of the database without introducing additional overhead. In this mode, when you ask for the size of the database using the data size CLI command or the Connection.size() API call, you will get an approximate number. For example, if you add a quad that already exists in the database, it might be double-counted. Stardog will periodically update this number to be accurate, but accuracy is not guaranteed in general. The option to retrieve the exact size of the database is provided in both the CLI and the Java API, but it requires scanning the contents of the whole database, which might be slow for large databases.

Abort on Conflict

In this mode (transaction.write.conflict.strategy=abort_on_conflict), if two concurrent transactions try to add or remove the same quad, one of the transactions will be aborted with a transaction conflict. The client should then decide whether the conflicted transaction should be retried or abandoned. This check introduces additional overhead for write transactions, but it makes it possible to maintain additional indexes and provide exact size information by setting the option index.aggregate to on.
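Both modes are selected with the single database option described above. For example (the value shown switches a database to the abort-on-conflict mode; it can be set at database creation time or, we believe, changed later via the stardog-admin metadata set command):

```properties
# Database option: last_commit_wins (default) or abort_on_conflict
transaction.write.conflict.strategy=abort_on_conflict
```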

Configuration for new Stardog 7 features

You may want to do additional configuration for two features added in Stardog 7. Read more about those here:

Migrating to Stardog 6

There are two major changes to take into account.

First, the primary incompatible change in Stardog 6 is a new core API, called Stark, which replaces RDF4j/Sesame as the core API around graph concepts. Additional information about that change is detailed below.

Second, the web console is DEPRECATED. It is still available in Stardog 6, but it is NOT supported. We encourage you to use Stardog Studio instead.

Stark API

The first thing you might notice is naming conventions that differ from traditional Java libraries. Most notably, the Java Bean-style conventions of get and set prefixes are abandoned in favor of shorter, more concise method names. Similarly, you'll notice exceptions are not suffixed with Exception; instead they have names like MalformedQuery or InvalidRDF. We don't think the Exception suffix adds anything; it's clear from usage that it's an Exception, so there is no need to add noise to the name.

Additionally, you will not find null returned by any method in Stark. If there is no return value, you get an Optional instead of null. This includes the optional context of a Statement; instead of using null to denote the default context, there's a specific constant to indicate this, namely Values#DEFAULT_GRAPH, and utility methods on Values for checking whether a Value or Statement corresponds to the default graph. If you're using an IDE that leverages the JSR-305 annotations, @Nullable and @Nonnull, we've used these throughout the interface to document the behavior, and you should see warnings if you're misusing the API.

There’s no longer a Graph class, so for cases where it’s appropriate to return a collection of Statement, such as the result of parsing a file, we’re simply using Set<Statement>. If you need to select subsets of Statement objects, such as all of the rdf:type assertions, there are utility methods provided by Graphs and Statements, or you can simply get a Stream from the Set and do the filtering like you would with any other Collection.

Many of the core APIs have been cleaned up from their original counterparts. For example, Literal was trimmed down to just the basics, and if you need to get its value as a different type, like an int, there are static methods available from the Literal class.

Updating your code

In addition to the changes already mentioned, one thing to look out for is Value#stringValue in the older, Sesame-based API. It returned the label of a Literal, the ID of a BNode, and an IRI as a String. Generally, the correct replacement for this behavior is Literal#str, but in some usages toString is sufficient. Value#toString in Stark returns the complete value of the Value object, e.g. for a Literal it includes the lang/datatype, whereas stringValue did not.

This is a list of commonly used classes from the previous API, and their new counterparts:

  Sesame/RDF4J       Stark
  ModelIO            RdfWriters, RdfParsers
  TupleQueryResult   SelectQueryResult
  Graph              java.util.Set
  QueryResultIO      QueryResultWriters, QueryResultParsers
  RDFFormat          RDFFormats

Predictive Analytics Vocabulary

The IRIs used to assess the quality of machine learning models have been renamed as follows:

  Stardog 5              Stardog 6
  spa:validation         spa:evaluation
  spa:validationMetric   spa:evaluationMetric
  spa:validationScore    spa:evaluationScore

See the examples in the Automatic Evaluation section for the usage of these terms.