Installation and Setup

This chapter covers deploying and configuring a Stardog Cluster. This page describes how to set up and configure the cluster itself; see the Chapter Contents below for the other topics in this chapter.

Page Contents
  1. Configuration
    1. Supported ZooKeeper Versions
  2. Installation
    1. 1. Start ZooKeeper Instances
    2. 2. Start Stardog instances
    3. 3. Start HAProxy (or equivalent)
  3. Single Server Migration
  4. Client Usage
  5. Configuration Concerns
    1. Topologies & Size
    2. Open File Limits
    3. Connection/Session Timeouts
    4. Errors During Database Creation
    5. Large Transactions
  6. Chapter Contents

Configuration

In this section, we will explain how to manually deploy a Stardog Cluster using stardog-admin commands and some additional configuration. We’ll set up a cluster with 6 nodes (3 for Stardog and 3 for ZooKeeper).

In a production environment, we strongly recommend deploying and configuring ZooKeeper according to its own documentation. In line with that documentation, we also recommend running each ZooKeeper process on a different machine and, if possible, giving ZooKeeper a separate drive for its data directory. ZooKeeper should also run on separate nodes from Stardog. If you need a larger cluster, adjust these instructions accordingly.

Supported ZooKeeper Versions

Different versions of Stardog support different versions of ZooKeeper. You should use the latest patch version of ZooKeeper (e.g., if you are using ZooKeeper 3.8 with Stardog 10.0.0, then make sure to use ZooKeeper 3.8.3 if it is the latest patch version available). We try to support new major or minor versions of ZooKeeper in preview mode to allow sufficient testing before declaring them fully supported. Please refer to the table below to determine which version of ZooKeeper to use with your version of Stardog.

Stardog Version    ZooKeeper 3.7    ZooKeeper 3.8    ZooKeeper 3.9
10.0.0             Deprecated       Full Support     Preview
9.2.1              Full Support     Preview          Unsupported
9.2.0              Full Support     Preview          Unsupported
9.1.1              Full Support     Preview          Unsupported
9.1.0              Full Support     Preview          Unsupported
9.0.1              Full Support     Unsupported      Unsupported
9.0.0              Full Support     Unsupported      Unsupported

Previous versions of Stardog support different ZooKeeper versions, as noted in the release notes. ZooKeeper 3.6 is deprecated in Stardog 9.0.0 and 9.0.1 and unsupported in Stardog 9.1.0+. Stardog 8.2 supports ZooKeeper 3.6 and 3.7 in GA. ZooKeeper 3.5 is no longer supported, as it was declared end of life on June 1, 2022. Prior to Stardog 7.9.0, Stardog officially supported the latest version of ZooKeeper 3.4.

The best way to configure Stardog and ZooKeeper in your own environment is to use whatever infrastructure you use to automate software installation. Adapting Stardog installation to Chef, Puppet, cfengine, etc. is left as an exercise for the reader.

  1. Install Stardog on each machine.
  2. Make sure a valid Stardog license key (for the size of cluster you’re creating) exists and resides in STARDOG_HOME on each node. You must also have a stardog.properties file with the following information for each Stardog node in the cluster:

    # Flag to enable the cluster; without this flag set, the rest of the properties have no effect
    pack.enabled=true
    # this node's IP address (or hostname) where other Stardog nodes are going to connect
    # this value is optional, but if provided it should be unique for each Stardog node
    pack.node.address=196.69.68.4
    # the connection string for ZooKeeper where cluster state is stored
    pack.zookeeper.address=196.69.68.1:2180,196.69.68.2:2180,196.69.68.3:2180
    
    • pack.zookeeper.address is the ZooKeeper connection string where the cluster stores its state.
    • pack.node.address is not a required property. The local address of the node, by default, is InetAddress.getLocalhost().getAddress(), which should work for many deployments. However, if you’re using an atypical network topology and the default value is not correct, you can provide a value for this property.

    The license UUID is used as part of the signing key for OAuth tokens; therefore, the same license must be used on every node in the cluster. If the license UUID differs between nodes, a token will only work against the node that generated it.

  3. Create the ZooKeeper configuration for each ZooKeeper node in a zookeeper.properties file. This is a standard ZooKeeper configuration file, and the same file can be used for every ZooKeeper node. The following config file should be sufficient for most cases:

    tickTime=3000
    # Make sure this directory exists and
    # ZK can write and read to and from it.
    dataDir=/data/zookeeperdata/
    clientPort=2180
    initLimit=5
    syncLimit=2
    # This is an enumeration of all ZK nodes in
    # the cluster and must be identical in
    # each node's config.
    server.1=196.69.68.1:2888:3888
    server.2=196.69.68.2:2888:3888
    server.3=196.69.68.3:2888:3888
    

    The clientPort specified in zookeeper.properties and the ports used in pack.zookeeper.address in stardog.properties must be the same.

  4. Create the dataDir directory on each ZooKeeper node. This is where ZooKeeper persists cluster state and writes log information about the cluster:

    $ mkdir /data/zookeeperdata # on node 1
    $ mkdir /data/zookeeperdata # on node 2
    $ mkdir /data/zookeeperdata # on node 3
    
  5. ZooKeeper requires a myid file in the dataDir folder to identify itself. You will create that file as follows for node1, node2, and node3, respectively:

    $ echo 1 > /data/zookeeperdata/myid # on node 1
    $ echo 2 > /data/zookeeperdata/myid # on node 2
    $ echo 3 > /data/zookeeperdata/myid # on node 3
    

Installation

In the next few steps, you will start the components that make up Stardog Cluster: the ZooKeeper ensemble and the Stardog servers themselves. We’ll also configure HAProxy as an example of how to use Stardog Cluster behind a proxy for load-balancing and fail-over capability. There’s nothing special about HAProxy here; you could implement this proxy functionality in many different ways.

1. Start ZooKeeper Instances

First, you need to start ZooKeeper nodes. You can do this using the standard command line tools that come with ZooKeeper:

$ ./zookeeper-3.8.3/bin/zkServer.sh start /path/to/zookeeper/config # on node 1
$ ./zookeeper-3.8.3/bin/zkServer.sh start /path/to/zookeeper/config # on node 2
$ ./zookeeper-3.8.3/bin/zkServer.sh start /path/to/zookeeper/config # on node 3
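
To confirm that the ensemble formed correctly, you can ask each ZooKeeper node for its status with the same zkServer.sh script (the config path is the same one used to start the node). One node should report itself as the leader and the others as followers:

$ ./zookeeper-3.8.3/bin/zkServer.sh status /path/to/zookeeper/config # run on each ZooKeeper node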

2. Start Stardog instances

Once ZooKeeper is started, you can start Stardog instances:

$ stardog-admin server start --home ~/stardog --port 5821 # on node 4
$ stardog-admin server start --home ~/stardog --port 5821 # on node 5
$ stardog-admin server start --home ~/stardog --port 5821 # on node 6

Important: When starting Stardog instances for the cluster, unlike single server mode, you need to provide the credentials of a superuser; these are used to secure the data stored in ZooKeeper and for intra-cluster communication. Each node must be started with the same superuser credentials. By default, Stardog comes with a superuser admin whose password is "admin"; those are the default credentials used by the commands above. For a secure installation of Stardog Cluster, you should change these credentials by specifying the pack.zookeeper.auth setting in stardog.properties and restarting the cluster with the new credentials:

pack.zookeeper.auth=username:password

If your $STARDOG_HOME is set, you don’t need to specify the --home option.

Make sure to allocate roughly twice as much heap for Stardog as you would normally for single-server operation (there is additional overhead involved for replication in the cluster). Also, we start Stardog here on a non-default port (5821) so that a proxy or load balancer can run on the same machine on the default port (5820). This lets Stardog clients behave normally (i.e., use the default port, 5820) while actually interacting with HAProxy rather than the Stardog servers directly.
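
For example, one way to raise the heap is to set STARDOG_SERVER_JAVA_ARGS in the environment before starting each node; the 8 GB figures below are purely illustrative, so size them to roughly twice your usual single-server heap:

$ export STARDOG_SERVER_JAVA_ARGS="-Xms8g -Xmx8g"
$ stardog-admin server start --home ~/stardog --port 5821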

3. Start HAProxy (or equivalent)

In most Unix-like systems, HAProxy is available via package managers, e.g. in Debian-based systems:

$ sudo apt-get update
$ sudo apt-get install haproxy

At the time of this writing, this will install HAProxy 1.4. You can refer to the official site to install a more recent release.
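
You can check which version the package manager actually installed before going any further:

$ haproxy -v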

Place the following configuration in a file (such as haproxy.cfg) in order to point HAProxy to the Stardog Cluster. You’ll notice that there are two backends specified in the config file: stardog_coordinator and all_stardogs. An Access Control List (ACL) is used to route all requests containing transaction in the path to the coordinator. All other traffic is routed via the default backend, which is simply round-robin across all of the Stardog nodes. For some use cases, routing transaction-specific operations (e.g. commit) directly to the coordinator performs slightly better. However, round-robin routing across all of the nodes is generally sufficient.

global
    daemon
    maxconn 256

defaults
    # you should update these values to something that makes
    # sense for your use case
    timeout connect 5s
    timeout client 1h
    timeout server 1h
    mode http

# where HAProxy will listen for connections
frontend stardog-in
    option tcpka # keep-alive
    bind *:5820
    # the following lines identify any routes with "transaction"
    # in the path and send them directly to the coordinator; if
    # haproxy is unable to determine the coordinator, all requests
    # fall through and are routed via the default_backend
    acl transaction_route path_sub -i transaction
    use_backend stardog_coordinator if transaction_route
    default_backend all_stardogs

# the Stardog coordinator
backend stardog_coordinator
    option tcpka # keep-alive
    # the following line returns 200 for the coordinator node
    # and 503 for non-coordinators so traffic is only sent
    # to the coordinator
    option httpchk GET /admin/cluster/coordinator
    # the check interval can be increased or decreased depending
    # on your requirements and use case; if it is imperative that
    # traffic be routed to the coordinator as quickly as possible
    # after the coordinator changes, you may wish to reduce this value
    default-server inter 5s
    # replace these IP addresses with the corresponding node address
    # the maxconn value can be increased if you expect more concurrent
    # connections
    server stardog1 196.69.68.4:5821 maxconn 64 check
    server stardog2 196.69.68.5:5821 maxconn 64 check
    server stardog3 196.69.68.6:5821 maxconn 64 check

# the Stardog servers
backend all_stardogs
    option tcpka # keep-alive
    # the following line performs a health check.
    # HAProxy will check that each node accepts connections and
    # that it's operational within the cluster. The health check
    # requires that Stardog nodes do not use the --no-http option
    option httpchk GET /admin/healthcheck
    default-server inter 5s
    # replace these IP addresses with the corresponding node address
    # the maxconn value can be increased if you expect more concurrent
    # connections
    server stardog1 196.69.68.4:5821 maxconn 64 check
    server stardog2 196.69.68.5:5821 maxconn 64 check
    server stardog3 196.69.68.6:5821 maxconn 64 check

If you wish to operate the cluster in HTTP-only mode, you can add mode http to the backend settings.
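
As a minimal sketch, this is what the all_stardogs backend looks like with that line added explicitly (the defaults section above already sets mode http, so this is only needed if you override the mode elsewhere):

backend all_stardogs
    mode http                # operate this backend in HTTP-only mode
    option tcpka             # keep-alive
    option httpchk GET /admin/healthcheck
    default-server inter 5s
    server stardog1 196.69.68.4:5821 maxconn 64 check
    server stardog2 196.69.68.5:5821 maxconn 64 check
    server stardog3 196.69.68.6:5821 maxconn 64 check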

Finally, to start HAProxy with the config file we have defined:

$ haproxy -f haproxy.cfg
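
After editing the configuration, you can also have HAProxy validate the file without actually starting the proxy:

$ haproxy -c -f haproxy.cfg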

For more info on configuring HAProxy, please refer to the official documentation. In production environments, we recommend running multiple proxies to avoid single points of failure and using DNS solutions for fail-over.

Now Stardog Cluster is running on 3 nodes, each one on its own machine. Since HAProxy was conveniently configured to listen on the default port (5820), you can execute standard Stardog CLI commands against the cluster:

$ stardog-admin db create -n myDb
$ stardog data add myDb /path/to/my/data
$ stardog query myDb "select * { ?s ?p ?o } limit 5"

If your cluster is running on another machine, you will need to provide a fully qualified connection string in the above commands.
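
For example, assuming the proxy is reachable at 196.69.68.10 (an illustrative address), the same commands look like this with an explicit server or connection string:

$ stardog-admin --server http://196.69.68.10:5820 db create -n myDb
$ stardog data add http://196.69.68.10:5820/myDb /path/to/my/data
$ stardog query http://196.69.68.10:5820/myDb "select * { ?s ?p ?o } limit 5"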

Single Server Migration

It is assumed that Stardog nodes in a Stardog Cluster are always going to be used within a cluster context. Therefore, if you want to migrate a Stardog instance running in single server mode to running in a cluster, it is advised that you create backups of your current databases and then import them to the cluster. This ensures you can provide the guarantees explained in the Cluster overview. If you simply add a Stardog instance that was previously running in single server mode to a cluster, it will sync to the state of the cluster; local data could be removed when syncing with the cluster state.

Client Usage

To use Stardog Cluster with the standard Stardog clients and CLI tools (stardog-admin and stardog) in the ordinary way, you must have Stardog installed locally. Using the Stardog binaries included in the Stardog Cluster distribution, you can query the state of the cluster:

$ stardog-admin --server http://<ipaddress>:5820/ cluster info

where ipaddress is the IP address of any of the nodes in the cluster. This will print the available nodes in the cluster, as well as their roles (participant or coordinator). You can also provide the proxy’s IP address and port to get the same information.

To add or remove data, issue stardog data add or stardog data remove commands to any node in the cluster. Queries can be issued to any node in the cluster using the stardog query execute command. All the stardog-admin features are also available in Cluster, which means you can use any of the commands to create databases, administer users, and use the rest of the functionality.
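
For example, you can point these commands at any individual node rather than at the proxy; the node addresses below assume the layout from the installation steps above:

$ stardog data add http://196.69.68.4:5821/myDb /path/to/more/data
$ stardog query execute http://196.69.68.5:5821/myDb "select * { ?s ?p ?o } limit 5"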

Configuration Concerns

Topologies & Size

In the configuration instructions above, we assume a particular Cluster topology: a three-node ZooKeeper ensemble, three Stardog nodes, and a load balancer in front of the Stardog nodes. But this is not the only topology supported by Stardog Cluster. ZooKeeper nodes run independently, so other topologies – e.g., three ZooKeeper servers and five Stardog servers – are possible; you just have to point Stardog to the corresponding ZooKeeper ensemble.

To add more Stardog Cluster nodes, simply repeat the steps for Stardog on additional machines. Generally, as mentioned above, the Stardog Cluster size should be an odd number greater than or equal to 3.

ZooKeeper uses a very write-heavy protocol; having Stardog and ZooKeeper write to the same disk can cause contention issues, resulting in timeouts at scale. We recommend, at a minimum, having the two services write to separate disks to reduce contention or, ideally, running them on separate nodes entirely.

Open File Limits

If you expect to use Stardog Cluster with heavy concurrent write workloads, you should probably increase the number of open files the host OS will permit on each cluster node. You can typically do this on a Linux machine with ulimit -n or some variant thereof. Because nodes communicate among themselves and with ZooKeeper, it’s important to make sure there are sufficient file handle resources available. This is especially true of the cluster but may also be relevant for some workloads on a single (non-cluster) Stardog server.
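
A minimal sketch of raising the limit on a typical Linux host; the 65536 value and the stardog user name are illustrative, and the exact mechanism (limits.conf, systemd LimitNOFILE, etc.) depends on your distribution:

$ ulimit -n          # show the current per-process open file limit
$ ulimit -n 65536    # raise it for the current shell session only

# /etc/security/limits.conf (persistent, for the user running Stardog)
stardog  soft  nofile  65536
stardog  hard  nofile  65536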

Connection/Session Timeouts

Stardog nodes connect to the ZooKeeper cluster and establish a session. The session is kept alive by PING requests sent by the client. If the Stardog node does not send these requests to the ZooKeeper server (due to network issues, node failure, etc.), the session will time out, the Stardog node will enter a suspended state, and it will reject any queries or transactions until it can re-establish the session.

If a Stardog node is overloaded, it might fail to send these PING requests to the ZooKeeper server in a timely manner. This usually happens when Stardog’s memory usage is close to the limit and there are frequent GC pauses, and it can cause Stardog nodes to be suspended unnecessarily. In order to prevent this problem, make sure the Stardog nodes have enough memory allocated, and tweak the timeout options.

There are two different configuration options that control timeouts for the ZooKeeper server. The pack.connection.timeout option specifies the max time that Stardog waits to establish a connection to ZooKeeper. The pack.session.timeout option specifies the session timeout explained above. You can set these values in stardog.properties as follows:

pack.connection.timeout=15s
pack.session.timeout=60s

Note that ZooKeeper constrains how these values can be set based on the tickTime value specified in the ZooKeeper configuration file. The session timeout must be a minimum of 2 times the tickTime and a maximum of 20 times the tickTime. So a session timeout of 60s requires the tickTime to be at least 3s. (In the ZooKeeper configuration file, this value should be entered in milliseconds.) If the session timeout is not in the allowed range, ZooKeeper will negotiate a new timeout value, and Stardog will print a warning about this in the stardog.log file.
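
As a worked example of that constraint, the values below are mutually consistent: with tickTime=3000 (3 seconds), ZooKeeper accepts session timeouts between 6 and 60 seconds, so a 60-second Stardog session timeout sits exactly at the upper bound:

# zookeeper.properties (milliseconds)
tickTime=3000                # allowed session timeouts: 2 x 3000 ms up to 20 x 3000 ms

# stardog.properties
pack.connection.timeout=15s
pack.session.timeout=60s     # 60s = 20 x 3s, the maximum ZooKeeper will accept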

Errors During Database Creation

In order to ensure consistency in the cluster, if there is an error adding one or more data files at database creation time, the operation will fail, and no database will be created. In a single node setup, this is not the case, as the configuration option database.ignore.bulk.load.errors is set to true. When this configuration option is set, Stardog will continue to load any other data files specified at creation time and ignore those that triggered errors. It is not safe to ignore these bulk load errors in a cluster, which is why database.ignore.bulk.load.errors is set to false by default in a cluster. This discussion is merely to describe the differences between a single node and a cluster and how they handle errors during database creation. No additional configuration is required to ensure consistency in the cluster.
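
For illustration, suppose you create a database from several files at once (the file names here are hypothetical); on a cluster, a load error in any one of them aborts the whole operation:

$ stardog-admin db create -n myDb good1.ttl good2.ttl broken.ttl
# cluster: if broken.ttl fails to load, the create fails and no database is created
# single server (database.ignore.bulk.load.errors=true by default): broken.ttl is
# skipped and myDb is created from the remaining files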

Large Transactions

By default, a Stardog Cluster uses a single-phase commit protocol when committing transactions. When a commit is signaled by the client, the coordinator will commit locally and then replicate the commit action to all participant nodes. Each node will commit the transaction and signal the result back to the coordinator, which will then return the status to the client.

When you have large transactions against the cluster, there can be significant differences in commit times across nodes, which leaves a window where some nodes will have completed the commit operation and the transaction will be visible, while others have not. This is expected behavior even within a strongly consistent database system like Stardog.

To minimize this window, you can enable a two-phase commit protocol for cluster transactions. When this is enabled, the bulk of the time-consuming work of the commit is done within the prepare phase, prior to the commit. This greatly reduces the window where you could see different results across nodes, but at the cost of an additional round trip between the nodes in the cluster, adding overhead to the transaction.

Enabling this option when your average transaction time is under 1 second is not recommended. The overhead of the additional phase will typically result in longer transaction times. This option does not have any effect when running Stardog in a single node setup.


Chapter Contents