What is a Graph in RDF?
RDF Graph Databases, also known as Triplestores, are a subset of Graph Databases where data is represented in triples. A simple triple consists of a subject, a predicate and an object, aka subject-predicate-object. The predicate is the edge in the data graph that connects the subject node to the object node. If we add context or graph information to a triple, we end up with the following structure: graph-subject-predicate-object. When we talk about a graph in an RDF Graph Database, we always refer to this context. Such a triple, in turn, is called a quad.
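To make this concrete, here is a minimal sketch of inserting such a quad via SPARQL, where <http://example.org/artGraph> is a hypothetical graph (context) name:
INSERT DATA {
GRAPH <http://example.org/artGraph> {
<http://example.org/picasso>
<http://example.org/paints>
<http://example.org/guernica>
}
}
The triple inside the GRAPH block is stored together with <http://example.org/artGraph> as its context, which makes it a quad.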
The graph exists to structure and represent your data better, because triples with the same graph share the same context. The existence of the graph is one of the main differences between a property graph database and an RDF graph database. Yes, you can store graph information in a property graph database too, but the RDF store is designed from the ground up with this in mind. In the end, the choice of database type is a matter of performance and of how your data is best represented for your use case.
What Happens If There Is No Graph?
One can insert data into an RDF Graph Database that does not contain any graph information. These simple triples are stored in the so-called “unnamed graph” or “default graph” of the database. We want to see how to access this graph, knowing that the DEFAULT SPARQL keyword is usually used in such cases.
Now that we have specified what the DEFAULT graph is in relation to an RDF Graph Database, we will take a look at different triplestores and their specific implementations of it. We will look at some basic actions: data insert, delete and query.
The triplestores we evaluated are: RDF4J 2.4, Stardog 6.1.1, GraphDB 8.8, Virtuoso v7.2.2.1, AllegroGraph 6.4.6, MarkLogic 9.0, Apache JENA TDB, Oracle Spatial and Graph 18c. From now on when we mention one of them, we refer to the versions listed here. We did not change any configurations upon installation, so our observations relate to the default setup.
Download a table view of our results here.
Learnings
Data insert observations
The SPARQL query used to insert data is:
INSERT DATA {
<http://example.org/picasso>
<http://example.org/paints>
<http://example.org/guernica>
}
This query inserts a triple which has no graph information. The triple is stored in the DEFAULT graph of each RDF Graph Database. However, what the DEFAULT graph represents differs from store to store.
In Stardog, the DEFAULT graph keyword does not exist; instead one needs to use <tag:stardog:api:context:default>. All triples land here.
Apache JENA TDB uses <urn:x-arq:DefaultGraph> as its default graph and the triples land here. You can use the DEFAULT keyword to query them.
Virtuoso has an internal default graph, but the big difference is that a user cannot access it by using the DEFAULT keyword. Triples without graph information are added to this internal default graph.
Select data observations
The SPARQL query used to select data is:
SELECT * WHERE {
?s ?p ?o
}
For most of the triplestores, the data retrieved comes from all graphs, including the DEFAULT graph; no specific graph is taken into account. The exceptions are:
Stardog retrieves data only from its internal default graph <tag:stardog:api:context:default>.
For Virtuoso you always need to specify a graph, otherwise you receive the error: “No default graph specified in the preamble”.
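For example, a select that names the graph explicitly works in Virtuoso; <http://example.org/myGraph> below is a hypothetical graph name:
SELECT * FROM <http://example.org/myGraph> WHERE {
?s ?p ?o
}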
Delete data observations
The SPARQL query used to delete a triple is:
DELETE {
<http://example.org/picasso>
<http://example.org/paints> ?o
} WHERE {
<http://example.org/picasso>
<http://example.org/paints> ?o
}
Generally, the triples that match the pattern are deleted from ALL graphs they exist in. We found exceptions to this behavior in:
Stardog deletes the triple only in the defined default graph.
MarkLogic and Apache JENA TDB behave the same: they delete the triples that match the pattern only from the internal default graph.
In Virtuoso one always needs to specify a graph to delete data.
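A minimal sketch of such a graph-scoped delete, again assuming the hypothetical graph <http://example.org/myGraph>:
WITH <http://example.org/myGraph>
DELETE {
<http://example.org/picasso>
<http://example.org/paints> ?o
} WHERE {
<http://example.org/picasso>
<http://example.org/paints> ?o
}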
We also want to show what a SPARQL query looks like when the DEFAULT keyword is present. The query to select data would look like:
SELECT * FROM DEFAULT WHERE {
?s ?p ?o
}
Additional known configurations
In Stardog there is a configuration property which lets you choose which behavior you prefer. With the query.all.graphs = true parameter, a query without a graph will look in all graphs (default and named graphs), exactly as in the case of RDF4J. If the property is set to false, it will only query the internal default graph.
Additionally, if for some reason you really need a graph in your SPARQL query even when you only want data from the DEFAULT graph, in Stardog you can write it as: FROM <tag:stardog:api:context:default>. And if you want to query all graphs, you can use FROM <tag:stardog:api:context:all>.
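A query over the union of all graphs in Stardog would then look like:
SELECT * FROM <tag:stardog:api:context:all> WHERE {
?s ?p ?o
}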
In Virtuoso we learned that you always need to specify a graph when you query. So how do we work with the DEFAULT graph then?
There is a specific syntax for Virtuoso which lets you define/set your graph at the beginning of the query (the graph URI below is a placeholder):
define input:default-graph-uri <http://example.org/defaultGraph>
INSERT DATA {
<http://example.org/picasso>
<http://example.org/paints>
<http://example.org/guernica>
}
Read more about it in the Virtuoso documentation.
AllegroGraph also provides some configurations. The defaultDatasetBehavior option can be used directly in the SPARQL query to determine whether :all, :default or :rdf should be used when no graph name is specified in the query.
Alternatively, one can fix the default graph name with the default-graph-uris option (or the default-dataset-behavior option) when running the run-sparql command.
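As a sketch, assuming AllegroGraph's franzOption_ prefix mechanism for passing query options, setting the behavior to :all inside the query could look like this:
PREFIX franzOption_defaultDatasetBehavior: <franz:all>
SELECT * WHERE {
?s ?p ?o
}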
In MarkLogic, when working with REST or XQuery, one has the default-graph-uri and named-graph-uri parameters available, as described in the SPARQL 1.1 Protocol recommendation, to specify the graph.
In Apache JENA TDB all named graphs can be addressed with <urn:x-arq:UnionGraph>. The configuration parameter tdb:unionDefaultGraph can be set to switch the default graph to the union of all named graphs. And the default graph can be specifically addressed with <urn:x-arq:DefaultGraph>.
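For example, a query over the union of all named graphs uses the special URI in a GRAPH clause:
SELECT * WHERE {
GRAPH <urn:x-arq:UnionGraph> { ?s ?p ?o }
}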
Conclusion
RDF Graph Databases are built from the ground up with the context of your data in mind. Knowing your graphs and your triplestore setup is, from my point of view, basic knowledge for developers and data engineers alike. Always start with the question: “what setup do I need for my use case?”