This page discusses how to configure Databricks as an external compute platform.
A Databricks data source added to Stardog using the `data-source add` CLI command or Stardog Studio can be registered as an external compute platform. To configure the data source as an external compute platform, add the following properties to its definition.
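As a sketch, a data source definition with external compute enabled might look like the following. The `external.compute`, `external.compute.host.name`, `databricks.cluster.id`, and `stardog.host.url` keys are the properties described in the table below; the connection settings and all values shown are illustrative placeholders, not values prescribed by this page.

```properties
# databricks.properties -- illustrative sketch; every value below is a
# placeholder to be replaced with your own workspace details.

# Connection settings for the Databricks data source (assumed typical
# JDBC-style configuration; consult your driver's documentation).
jdbc.url=jdbc:databricks://my-workspace.cloud.databricks.com:443/default
jdbc.username=token
jdbc.password=<personal-access-token>

# External compute registration (properties described in the table below)
external.compute=true
external.compute.host.name=my-workspace.cloud.databricks.com
databricks.cluster.id=0123-456789-abcdefgh
stardog.host.url=http://stardog.example.com:5820
```

The file would then be registered with something like `stardog-admin data-source add --name databricks databricks.properties` (the exact flags may vary by Stardog version).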
| Property | Description |
| -------- | ----------- |
| `external.compute` | Boolean value specifying whether the data source is registered as an external compute platform. |
| `external.compute.host.name` | Name of the Databricks workspace. |
| `databricks.cluster.id` | ID of the Databricks compute cluster. |
| `stardog.host.url` | Stardog URL to which Databricks connects back to write the results. The URL should point to the same Stardog server from which the external compute operation is triggered. |
| `stardog.external.jar.path` | Path of the released jar. By default it points to Stardog's public S3 bucket, where the latest released version is available. There are two options for overriding the default path: (1) the jar can be downloaded from another S3 bucket, in which case this should point to a custom S3 bucket path; (2) the jar can reside locally on the file system where the Stardog server is running, in which case this should point to the local file system path. The Stardog server uploads the jar to the Databricks cluster if it is not present at the path specified by `stardog.external.jar.upload.path`. Download the latest jar from this link. |
| `stardog.external.jar.upload.path` | Path on the Databricks cluster to which the jar is uploaded. Should be set both when the jar is uploaded manually by the user and when it is uploaded automatically by Stardog. |
| `stardog.external.mapping.upload.path` | Path on DBFS where Stardog and the Spark job write temporary files. For example, for a Virtual Graph Materialization operation, the mappings of the virtual graph are stored here. Stardog and the Spark job delete these temporary files after the process completes. |
| `stardog.external.databricks.job.timeout` | Spark job timeout, in seconds. |
| `stardog.external.databricks.task.timeout` | Spark task timeout, in seconds. |
| `stardog.external.databricks.task.retry.count` | The number of retries to attempt before the Spark job fails. Set to zero for a single attempt with no retries. |
| `stardog.external.databricks.task.retry.interval.millis` | Time interval, in milliseconds, after which the Spark job makes a retry attempt in case of an error. |
| `stardog.external.databricks.is.retry.timeout` | Boolean value specifying whether or not the Spark job makes a retry attempt in case of a timeout error. |
| `stardog.external.databricks.job.on.start.email.list` | Comma-separated list of email addresses to be notified when the Spark job starts. |
| `stardog.external.databricks.job.on.success.email.list` | Comma-separated list of email addresses to be notified when the Spark job completes. |
| `stardog.external.databricks.job.on.failure.email.list` | Comma-separated list of email addresses to be notified when the Spark job errors out. |
| `spark.dataset.repartition` | Set this value to override the default partition behavior; refer to the Spark documentation. |
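Putting the job-control settings together, a data source definition might include a fragment like the one below. The numbers and addresses are illustrative only, not recommendations from this page; tune them for your own workloads.

```properties
# Illustrative values only -- tune timeouts and retries for your workload.
stardog.external.databricks.job.timeout=7200
stardog.external.databricks.task.timeout=3600
stardog.external.databricks.task.retry.count=2
stardog.external.databricks.task.retry.interval.millis=30000
stardog.external.databricks.is.retry.timeout=false
stardog.external.databricks.job.on.start.email.list=data-eng@example.com
stardog.external.databricks.job.on.failure.email.list=oncall@example.com,data-eng@example.com
```

With these settings, the Spark job would make up to two retries (three attempts in total), waiting 30 seconds between attempts, and a timeout error would not trigger a retry.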