Launchpad with Kubernetes

This chapter discusses deploying Stardog and Launchpad with a single Helm chart.

Page Contents
  1. Overview
  2. Prerequisites
  3. Preparing the namespace
  4. AKV/AKV2K8S
  5. Preparing manifest values
    1. Value: Identity Provider (IdP)
    2. Value: Container Registry
    3. Value: DNS info
    4. Value: Memory
    5. Value: Disk
  6. Install Stardog using Helm
  7. IdP and Stardog Roles
    1. Configuring IdP and Stardog Roles
    2. Troubleshooting IdP and Stardog Roles
  8. Conclusion

Overview

You can deploy Stardog Launchpad with Kubernetes to combine the power of a Stardog cluster with the convenience of Launchpad.

Our goal in this tutorial is to do the following:

  1. Show the commands needed to prepare and deploy a Stardog cluster with Launchpad.

  2. Show how those commands can be used when automating the deployment in a CI/CD pipeline.

This page only covers how to set up a Stardog cluster with Launchpad via the CLI.

Prerequisites

Commands:

  • kubectl
  • helm
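
Before proceeding, you can verify both tools are installed and on your PATH:

kubectl version --client
helm version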

Dependencies:

  • A Stardog license

There are additional dependencies for which you have different options. We currently have documentation for some of these options (see the linked tutorials in the sections below):

  • IdP (Azure AD)
  • Container Registry (ACR)
  • K8s Providers (AKS, EKS, GCP, OpenShift)
  • Certificate Management (AKV, Cert-Manager)
  • Ingress Controller (Nginx, Application Gateway) or Istio
  • DNS (Azure DNS Zone)

Preparing the namespace

First, create a namespace:

kubectl create namespace dev-sd

Then load your Stardog license file:

kubectl -n dev-sd create secret generic stardog-license \
--from-file stardog-license-key.bin=/path/to/stardog-license-key.bin

If your license file is already named stardog-license-key.bin and sits in your current directory, you can shorten this to: kubectl -n dev-sd create secret generic stardog-license --from-file stardog-license-key.bin
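
To confirm the secret was created, you can list it:

kubectl -n dev-sd get secret stardog-license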

AKV/AKV2K8S

If you are not using Azure Key Vault and akv2k8s, you can skip this step.

If you are, register the rule to sync your certificates:

kubectl apply -f akv2k8s_stardog_objects.yaml

You can find an example akv2k8s_stardog_objects.yaml manifest here.

The secret names are dependent on the Stardog namespace. In the example, the namespace is dev-sd.
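
For reference, here is a minimal sketch of what such a manifest can look like; the vault, certificate, and output secret names are assumptions you must replace with your own:

apiVersion: spv.no/v2beta1
kind: AzureKeyVaultSecret
metadata:
  name: stardog-tls
  namespace: dev-sd
spec:
  vault:
    name: my-key-vault        # assumed: your Azure Key Vault name
    object:
      name: stardog-cert      # assumed: the certificate object in the vault
      type: certificate
  output:
    secret:
      name: dev-sd-tls        # synced secret; name depends on the Stardog namespace
      type: kubernetes.io/tls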

Preparing manifest values

Now we need to prepare the manifest to install Stardog. We have a few templates with different profiles as a starting point.

In the following sections, we will explain how to fill in these templates.

Our templates are mustache-compatible.
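
For example, one quick way to render a template without extra tooling is plain sed; the template filename and values below are assumptions for illustration:

# substitute placeholders in a template copy (adjust names and values to yours)
sed -e 's/{{REGISTRY_NAME}}/myregistry/g' \
    -e 's/{{SUBZONE}}/dev/g' \
    -e 's/{{DNS_ZONE}}/example.org/g' \
    values.yaml.template > values.yaml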

Value: Identity Provider (IdP)

The required IdP configuration variables depend on your identity provider. You can see a tutorial on setting up Azure AD as your identity provider here.

Value: Container Registry

Replace {{REGISTRY_NAME}} in the template with the name for your container registry. You can see a tutorial on setting up Azure Container Registry here.

Our templates assume you will use stardog, dev-stardog, and dev-launchpad for the repository, Stardog image, and Launchpad image names, respectively. If yours differ, adjust the template accordingly.
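
If you are using ACR, you can sanity-check that the expected repositories and tags exist. This sketch assumes the Azure CLI and the example image names above, with myregistry standing in for your {{REGISTRY_NAME}}:

# list repositories in the registry
az acr repository list --name myregistry --output table
# list available tags for the Stardog image
az acr repository show-tags --name myregistry --repository stardog/dev-stardog --output table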

Value: DNS info

In the template you will find the variables {{SUBZONE}} and {{DNS_ZONE}}. Replace those with the appropriate values. Our tutorial for Azure DNS Zone is coming soon.
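
Once the deployment and DNS records are in place, you can confirm the hostname resolves; the name below uses the example values from the conclusion of this page:

nslookup launchpad.dev.example.org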

Value: Memory

Our Helm chart contains several CPU- and memory-related variables. Administering Stardog 101 is a good companion read for this section.

Pod:

  • {{CPU}} - The CPU available to your pod; this should be most of the node's capacity.
  • {{MEMORY}} - The memory available to your pod; this should be most of the node's memory.
  • {{CPU_LIMIT}} - The maximum CPU Stardog can consume during bursts (a value between {{CPU}} and the node's total CPU).
  • {{MEMORY_LIMIT}} - The maximum memory Stardog can consume during bursts (a value between {{MEMORY}} and the node's total memory).

JVM:

  • {{MIN_HEAP}} - Follow these guidelines; depends on the pod settings above.
  • {{MAX_HEAP}} - Follow these guidelines; depends on the pod settings above.
  • {{DIRECT_MEM}} - Follow these guidelines; depends on the pod settings above.
  • {{STARDOG_CPU}} - Leave one CPU for other processes, i.e., {{CPU}} - 1.

We recommend VM nodes with at least 8 GB of memory for running Stardog. Smaller VMs can still be used for deployment testing.
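
As an illustration only (not chart defaults), a node with 8 CPUs and 32 GB of memory might be filled in roughly as follows; verify the heap and direct memory split against the guidelines linked above:

# illustrative values for an 8-CPU / 32 GB node
CPU: 7               # leave headroom for system pods
MEMORY: 28Gi
CPU_LIMIT: 8
MEMORY_LIMIT: 30Gi
MIN_HEAP: 8g         # heap: roughly a quarter of pod memory
MAX_HEAP: 8g         # min and max heap are typically set equal
DIRECT_MEM: 16g      # direct memory: roughly half of pod memory
STARDOG_CPU: 6       # {{CPU}} - 1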

Value: Disk

Stardog requires a persistent disk, and you need to set two variables for it.

  • {{DISK_CLASS}} - The type of disk to use. We recommend always using the fastest disk possible from your provider.
  • {{DISK_SIZE}} - Follow these guidelines.

To keep costs low while testing your deployment mechanism, you can use the cheapest disk class and 8 GB.
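
For example, a low-cost test configuration on AKS might look like the following; the storage class name is an assumption and varies by provider:

DISK_CLASS: managed-csi   # assumed AKS storage class; use a premium class in production
DISK_SIZE: 8Gi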

Install Stardog using Helm

With your values.yaml complete, you can now proceed with the Stardog installation:

helm repo add stardog-sa https://stardog-union.github.io/helm-chart-sa/
helm repo update
helm upgrade --install dev-sd stardog-sa/stardog --namespace dev-sd --values values.yaml

These commands do the following:

  • Add the stardog-sa Helm repository.
  • Update your local Helm repository index.
  • Install (or upgrade) the dev-sd release into the dev-sd namespace using your values.yaml.
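
To watch the rollout, you can check the release and pod status with standard Helm and kubectl commands:

helm status dev-sd --namespace dev-sd
kubectl -n dev-sd get pods --watch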

IdP and Stardog Roles

Configuring IdP and Stardog Roles

During IdP setup, we created three groups associated with the reader, writer, and admin roles. As only the reader role exists in Stardog by default, you’ll need to create the writer and admin roles. You can do that with the following commands:

kubectl exec -n dev-sd dev-sd-stardog-0 -- /opt/stardog/bin/stardog-admin role add writer
kubectl exec -n dev-sd dev-sd-stardog-0 -- /opt/stardog/bin/stardog-admin role grant -n writer -a write -o *:*
kubectl exec -n dev-sd dev-sd-stardog-0 -- /opt/stardog/bin/stardog-admin role add admin
kubectl exec -n dev-sd dev-sd-stardog-0 -- /opt/stardog/bin/stardog-admin role grant -n admin -a all -o *:*

These commands exec into the dev-sd-stardog-0 pod in the dev-sd namespace and run role add and role grant for the writer and admin roles. To learn more about our security model, see here.
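
To confirm the roles exist afterwards, you can list them with the same exec pattern:

kubectl exec -n dev-sd dev-sd-stardog-0 -- /opt/stardog/bin/stardog-admin role list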

Troubleshooting IdP and Stardog Roles

If you're able to access the Stardog service but not able to authenticate with the Stardog server (i.e., there's a checkmark next to Service and an exclamation point next to Server on the diagnostic page), turn on DEBUG logging for token management. You can do that by adding the following to your log4j2.xml file and restarting your server:

 <Logger name="com.complexible.stardog.security.token" level="DEBUG" additivity="false">
     <AppenderRef ref="stardogAppender"/>
 </Logger>

After you’re finished troubleshooting, you’ll want to remove these lines so you don’t pollute your stardog.log file.

If your stardog.log subsequently shows:

DEBUG [timestamp] [XNIO-1 task-2] com.complexible.stardog.security.token.JwtTokenManagerImpl:validateToken(366): A valid token was found for [user]
DEBUG [timestamp] [XNIO-1 task-2] com.complexible.stardog.security.token.ApiTokenRealm:doGetAuthenticationInfo(141): User [user] requested group [role] but a role by that name does not exist.
DEBUG [timestamp] [XNIO-1 task-2] com.complexible.stardog.security.token.ApiTokenRealm:doGetAuthenticationInfo(177): JWT user [user] has no mapped roles

Then your role is defined in your IdP but not in Stardog. Go back to the previous section to see how to add the roles you need.
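
To inspect the log from outside the pod, you can tail it with kubectl exec; the log path assumes STARDOG_HOME is /var/opt/stardog, which may differ in your deployment:

kubectl -n dev-sd exec dev-sd-stardog-0 -- tail -n 100 /var/opt/stardog/stardog.log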

Conclusion

Now your installation should be ready to use. Simply go to:

https://launchpad.{{SUBZONE}}.{{DNS_ZONE}}

Using our examples, it would be something like:

https://launchpad.dev.example.org