
Datastore REST API

The Datastore REST API lets users interact directly with the datastore: discovering its schema, querying the content of the stores, updating that content, and creating and managing branches.

Note that the REST API provides a way to edit the datastore, but it should not be considered an optimized and efficient data source. To fill the datastore efficiently, please read the Data Loading documentation.

A customer account is required to read the documentation of all the REST services provided with the ActivePivot application. Once connected, you can access the detailed RAML documentation for the datastore REST API.

Permissions

There are two kinds of permissions used by the datastore REST API: branch permissions and store permissions. They are independent and defined separately: the store permissions are not specific to a branch.

These permissions are sets of usernames or roles. A user has the permission if their username or one of their roles appears in the set.

Branch permissions

Branch permissions consist of two sets: owners and readers. They are managed by the Branch Permissions Manager. Owners of a branch can read and write data on that branch, readers can only read from the branch, and users that are neither readers nor owners do not see the branch at all.

Store permissions

Store permissions are defined at the store level, or at the field level for finer-grained permissions. Each field has two sets: readers and writers. Giving a permission on a store is equivalent to giving that permission on each field of the store. Writers of a field have read and write privileges on that field, readers can only read the field values, and users that are neither readers nor writers of a given field cannot access that field at all.

Additionally, each store can support insertion and deletion via the REST API. This means that a user with writer permissions on every field of the store can insert or delete rows.

When discovering a store, the result contains canEdit, canInsert and canUpdate fields. These are user-specific: their values depend on the global store permissions and on the user's permissions.

These permissions are stored in an IStoreSecurity, which can be defined via a fluent builder:

IStoreSecurityBuilder builder = StoreSecurityBuilder.startBuildingStoreSecurity()
    .supportInsertion()
    .withStoreWriters(ROLE_ADMIN)
    .withStoreReaders(ROLE_USER)
    // make ROLE_ADMIN reader of the field "Id" - no writer for that field
    .addFieldPermission("Id", Arrays.asList(ROLE_ADMIN), Collections.emptyList());

Combining Branch and Store Permissions

To perform an action, a user needs both store and branch permissions:

  • To read a field, one needs the read permission on this field of the store and to be a reader of the branch.
  • To update a field, one needs the write permission on this field and to be an owner of the branch.
  • To insert or delete a line in a store, one needs the write permission on all the fields of the store, to be an owner of the branch, and the insertion/deletion must be activated for this store.
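
For instance, with the IStoreSecurity built above, a user with ROLE_ADMIN who owns the branch can update every field except "Id", which has no writers; as a consequence, no user can insert or delete rows in that store through the REST API, even though insertion is supported.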

Configuration

DatastoreRestServicesConfig is the Spring configuration that exposes the datastore REST API. It autowires the IDatastoreServiceConfiguration that defines the store permissions and the custom parsers and formatters. See DatastoreServiceConfig in the sandbox project for an example.
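
As a minimal sketch, the wiring could look like the following. The class names MyDatastoreRestConfig and MyDatastoreServiceConfiguration are hypothetical: the latter stands for your own implementation of IDatastoreServiceConfiguration (such as the DatastoreServiceConfig of the sandbox project), and the exact methods it must provide depend on your ActivePivot version.

@Configuration
@Import(DatastoreRestServicesConfig.class) // exposes the datastore REST endpoints
public class MyDatastoreRestConfig {

    /**
     * Bean autowired by DatastoreRestServicesConfig: it defines the store
     * permissions (the IStoreSecurity of each store) and the custom parsers
     * and formatters.
     */
    @Bean
    public IDatastoreServiceConfiguration datastoreServiceConfiguration() {
        return new MyDatastoreServiceConfiguration(); // hypothetical implementation
    }
}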

Different types of queries

Discoveries

Discovery queries let the user discover the structure of the datastore. A discovery only returns the content that is visible to the user. The discoverable content is:

  • The stores, with the fields the user running the discovery can see, and their editability with respect to the user's permissions.
  • The references between the stores. A reference is only visible to the user if all the fields used by the reference are visible to them.
  • The branches and their permissions. A branch is only visible if the user has the reader or owner role for that branch. The permissions are given as their sets of usernames and/or roles: a reader of the branch cannot edit data, but can find out who has the owner role.

Get Query

Get queries allow the user to retrieve data from the datastore. These queries support conditions. Do not confuse the Get Query (as opposed to the Update Query) with the GET HTTP method (as opposed to POST): if there are conditions, the Get Query is actually performed using the POST method. See details in the RAML documentation.

Update Query

Update queries allow the user to update the content of the datastore.

Branches Query

Branches queries can be used to create, manage, and delete branches.

Conditions

Conditions are plugins. Custom plugin values can be implemented in a project to match specific needs.

Core Conditions

There are classic conditions available in the core product.

Equal ($eq)

Used to filter on a specific value of a field:

{
  "myFieldName" : {
    "$eq" : 0
  }
}

Equal is the default operation: if you do not specify any operator, equal is used. The previous example is equivalent to:

{
  "myFieldName" : 0
}

Greater Than ($gt and $gte) and Lesser Than ($lt and $lte)

Used to compare the value of a field. $gt is the strict Greater Than, while $gte is Greater Than or Equal to; $lt and $lte are the Lesser Than counterparts.

{
  "myFieldName" : {
    "$gte" : 0
  }
}

Logical Operations ($or, $and, $not)

$or and $and are used to combine several sub-conditions:

{
  "$or" : {
    "field1" : { "$eq" : 10 },
    "field1" : { "$eq" : 15 }
  }
}

{
  "$and" : {
    "user" : { "$eq" : "admin" },
    "city" : { "$eq" : "Paris" }
  }
}

The $and operator is the default behavior when there are multiple conditions listed but no operator. The previous example is equivalent to the following:

{
  "user" : { "$eq" : "admin" },
  "city" : { "$eq" : "Paris" }
}

$not is used to negate a single condition:

{
  "$not" : {
    "field1" : { "$eq" : 10 }
  }
}

Value Within a Collection ($in)

This is the equivalent of an equal but with several possible values:

{
  "field1" : {
    "$in" : [ "aa", "bb", "cc" ]
  }
}

Pattern Matching ($like)

$like allows testing a String field against a pattern. It takes a regexp (as accepted by Pattern.compile(String)); the strings are then tested with Matcher#find(), which means the condition checks that the field value contains a match for the regexp.

{
  "myDateAsStringField" : {
    "$like" : "^\\d{4}-01-01"
  }
}
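
As an illustration of these find() semantics, here is a standalone snippet using only the standard java.util.regex API (not the REST service itself), with the same pattern as in the example above:

// The condition holds when Matcher#find() succeeds, i.e. the field value
// contains a match for the regexp.
Pattern pattern = Pattern.compile("^\\d{4}-01-01");
pattern.matcher("2024-01-01T08:30:00").find(); // true: starts with a year followed by -01-01
pattern.matcher("2024-06-15").find();          // false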

User Defined Conditions

In order to define your own condition, you need to implement IConditionFactory.

The key of the plugin value is the string that will be used in your JSON queries.

This can also be used to override the behavior of the core operators. For example, the $like operator could be overridden to apply to non-string objects, using the object's toString():

@QuartetPluginValue(intf = IConditionFactory.class)
public static class UserDefinedLikeCondition extends PluginValue implements IConditionFactory {

    public static final String PLUGIN_KEY = "$like";

    @Override
    public String description() {
        return "Like operator for non-string objects";
    }

    @Override
    public Object key() {
        return PLUGIN_KEY;
    }

    @Override
    public ICondition compile(String field, JsonNode value, JsonConditionCompiler jsonCompiler) {
        return BaseConditions.Like(field, value.asText());
    }
}

Update Procedures

Update procedures are plugins. Custom plugin values can be implemented in a project to match specific needs. The update procedures apply to the rows selected by the "where" clause of the UPDATE action. See the detailed RAML documentation (activepivot-datastore.html#data_branchesbranchNamepost) for a full example of an update query.

Core Update Procedures

There are classic update procedures available in the core product.

Set a value ($set)

Set is the basic update operation: it sets the value of a field.

{
  "$set" : {
    "currency" : "EUR"
  }
}

It can also be used without explicitly writing the $set operator, by simply giving the field name and the value to set.

{
  "currency" : "EUR"
}

Copy from a field to another ($copy)

The $copy operator copies the value of a field to other fields. This operator expects all the fields to have the same type.

{
  "$copy" : {
    "from" : "username",
    "to" : [ "field1", "field2", "field3" ]
  }
}

Set to the current date ($currentDate)

This operator sets a field or a list of fields to the current date. The target fields are expected to hold a date of one of the following types: Instant, Date, LocalDate, LocalTime, LocalDateTime, or ZonedDateTime. If several fields are specified, the exact same date is written to every field and on every row (the date is constructed just before running the update procedure).

{
  "$currentDate" : [ "lastUpdate", "lastConnexion" ]
}

List of procedures

Multiple operations can be combined in the same procedure. They will be merged together and applied in the correct order in the same IUpdateWhereProcedure.

{
  "currency" : "EUR",
  "city" : "Paris",
  "$currentDate" : [ "lastUpdate", "lastConnexion" ],
  "$copy" : {
    "from" : "username",
    "to" : [ "field1", "field2", "field3" ]
  }
}

User Defined Update Procedures

You can use a custom update procedure if you write your own PluginValue that implements IUpdateWhereProcedureFactory.

In the same spirit as for the custom IConditionFactory, the plugin key of your custom procedure is also the key to use in the JSON.

Implement Hooks for Audit, Validation, Security...

It is possible to implement hooks that intercept the calls to the datastore REST API. These hooks must implement IDatastoreService.

These hooks are inserted between the IDatastoreRestService that exposes the REST service and the IDatastoreService that actually does the querying (see DatastoreRestServicesConfig.datastoreRestService() and DatastoreRestServicesConfig.jsonDatastoreService()).

A classic use of these hooks for each call would be:

  • Do your own logic first (logging the call, awaiting validation, filtering parameters...)
  • Call the same function on the next hook
  • Receive the result of the next hook.
  • Do your own logic depending on the result (logging success/failure, filtering result...)
  • Return the result

To help write new hooks, an abstract implementation, ADecoratedJsonDatastoreService, is provided, which basically only forwards each call to the next hook. By extending it, one can override only the relevant methods.

Here is an example of a hook that intercepts each get or update call and performs simple logging.

/**
 * A dummy logger that logs when there is a GET or EDIT query.
 *
 * @author ActiveViam
 */
public class DummyLoggingDatastoreService extends ADecoratedDatastoreService implements IDatastoreService {

    private static final Logger LOGGER = Logger.getLogger(DummyLoggingDatastoreService.class.getSimpleName());

    /** The query id used to identify logs. */
    private final AtomicInteger queryId = new AtomicInteger(0);

    /**
     * Constructor of {@link DummyLoggingDatastoreService}.
     *
     * @param nextService the next service to call.
     */
    public DummyLoggingDatastoreService(IDatastoreService nextService) {
        super(nextService);
    }

    /**
     * Logs an event.
     *
     * @param queryId the id of the query doing the logging.
     * @param event the event to log.
     */
    protected void logEvent(int queryId, String event) {
        LOGGER.info("\tDLDS [" + queryId + "] : " + event);
    }

    @Override
    public JsonDatastoreQueryResult queryDatastore(
            String storeName,
            JsonDatastoreQuery jsonQuery,
            ICondition additionalCondition) {
        final int id = queryId.getAndIncrement();
        logEvent(id, "new query on store " + storeName);
        JsonDatastoreQueryResult result;
        try {
            result = nextService().queryDatastore(storeName, jsonQuery, additionalCondition);
        } catch (Exception e) {
            logEvent(id, "query failed");
            throw e;
        }
        logEvent(id, "query succeeded");
        return result;
    }

    @Override
    public Map<String, JsonStoreResult> editDatastore(
            String branch,
            JsonDatastoreEdit edit,
            ICondition[] additionalConditions) {
        final int id = queryId.getAndIncrement();
        logEvent(id, "new update on branch " + branch);
        Map<String, JsonStoreResult> result;
        try {
            result = nextService().editDatastore(branch, edit, additionalConditions);
        } catch (Exception e) {
            logEvent(id, "update failed");
            throw e;
        }
        logEvent(id, "update succeeded");
        return result;
    }
}

These hooks can be chained. When building the IDatastoreRestService, only the first IDatastoreService must be given; each hook contains a reference to the next one. Remember that the DatastoreService (the one that actually queries the datastore) must be called last, i.e. it must be the last element of the chain.
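
For example, a chain of hooks could be wired as follows; MyAuditHook is a hypothetical second hook built like DummyLoggingDatastoreService, and coreDatastoreService stands for the DatastoreService that actually queries the datastore (its construction depends on your project):

// The logging hook wraps the audit hook, which itself wraps the core
// DatastoreService; the core service is the last element of the chain.
IDatastoreService chain = new DummyLoggingDatastoreService(
    new MyAuditHook(
        coreDatastoreService));
// Only the first element of the chain is given to the IDatastoreRestService.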