Getting Started with DirectQuery
This section explains how to get started using DirectQuery to connect to an external Database.
tip
For an overview of DirectQuery, please see the DirectQuery Overview section in the User Guide.
Overall Sequence
To connect to a remote Database using DirectQuery, you need to complete these steps:
- Configure your remote Database in Atoti Market Risk's expected Database format
- Set Atoti Market Risk configuration properties
- Deploy Atoti Market Risk in Horizontal Distribution
Database Schema
Your remote Database must be configured in the same format as Atoti Market Risk's in-memory datastores. This means the same Tables and Field Types will need to be replicated. You can do this by defining the Tables as outlined in the Database documentation or by exposing Views on your Database in the same expected Database format.
note
All of the Database Tables must be present either as actual Tables or Views on your connected Database.
Required DirectQuery Properties
The following properties must be configured to get started with DirectQuery.
application.yaml
The application.yaml file contains specific properties for enabling and connecting to a remote Database.
Property | Default Value | Description |
---|---|---|
directquery.enabled | false | Enables / disables the use of DirectQuery |
directquery.database.type | N/A | Defines the Database to connect to |
For connecting to a specific Database, see our Supported Databases page.
Supported directquery.database.type
- "snowflake" for a Snowflake Database. Maps to the property com.activeviam.mr.common.directquery.constants.ConditionConstants.TYPE_SNOWFLAKE.
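As a minimal sketch, an application.yaml fragment enabling DirectQuery against Snowflake could look like the following. Only the two properties above are shown; the Database-specific connection properties are covered on the Supported Databases page.

```yaml
# Minimal sketch: enable DirectQuery and select the Database type.
directquery:
  enabled: true       # defaults to false
  database:
    type: snowflake   # the Snowflake Database type listed above
```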
mr.properties
Within the mr.properties file you can specify which dates to include for the initial load. This will be used to exclude dates from the DirectQuery dataset.
Property | Default Value | Description |
---|---|---|
initial-load.business-dates | N/A | Dates to exclude from the DirectQuery data node |
When running with DirectQuery, you may not want to load all data from the Database. Currently, you can only filter the queryable data by AsOfDate. This is done by excluding all dates present in the initial-load.business-dates property.
Since the in-memory and DirectQuery data nodes cannot contain any of the same data (on an AsOfDate basis), you only need to configure this property once. The in-memory node will then load the specified dates while the DirectQuery data node will exclude them. The initial-load.business-dates property is only used to exclude dates from the DirectQuery data node when running in Horizontal Distribution and does not apply to a single-JVM setup.
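For illustration, the sketch below shows this property in YAML form (in mr.properties it is written as a single key=value line). The ISO date format and comma separator are assumptions; adjust them to your deployment's conventions.

```yaml
# Hedged sketch: the AsOfDates loaded by the in-memory node and excluded by the DirectQuery node.
initial-load:
  business-dates: 2023-09-26,2023-09-27   # example dates; the value format is an assumption
```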
Deployment Options
We provide two main options to run with DirectQuery:
Option | How to run Atoti Market Risk |
---|---|
Operate with some data loaded in-memory and the rest available through DirectQuery | In Horizontal Distribution with in-memory Database |
Run purely on DirectQuery remote data | In a Single JVM on DirectQuery only |
Limitations
Given that the data nodes are distributed by AsOfDate, no two data nodes can contain the same Partition, meaning that one AsOfDate cannot be present in more than one node. To prevent any issues, the initial-load.business-dates property is used to set which dates to load into the in-memory data node, while also specifying to the DirectQuery data node which AsOfDates to exclude.
note
This property must be provided with the same values to both data nodes.
Horizontal Distribution
In Horizontal Distribution, you have access to the in-memory tools such as What-If, data updates, and Sign-Off for the data loaded in-memory, as well as access to a large number of historical dates.
When running in Horizontal Distribution you need to run two Nodes:
- Combined query and in-memory data node
- DirectQuery data node
JVM with Query Node and In-Memory Data Node (2-Node Distribution)
This JVM will consist of two distributed nodes: a query node and an in-memory data node.
You must ensure that the initial-load.business-dates property is defined.
Start the JVM with the following Spring and Atoti Server properties, either in a property file (application.yaml) or as command-line arguments:
Property | Value | Description |
---|---|---|
spring.profiles.active | forceNetty,local-content | Turns on Netty messaging to communicate with nodes in other JVMs, keeping the in-memory node configuration. |
configuration.cubes.enable.* (except var, pnl, and sensi) | false | Any cube other than VaR, Sensitivities, and PnL must be disabled as it is not supported in the DirectQuery configuration (except common/combined cube). |
initial-load.business-dates | Dates to load in-memory | The AsOfDates to include in the in-memory data node |
note
When using spring.profiles.active on the command line, you overwrite all the default profiles previously set in the configuration. Therefore, you need to specify them all again.
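For illustration, an application.yaml sketch for this JVM might look as follows. The cube key some-other-cube is a placeholder for whichever additional cubes your configuration defines, and the dates are examples only.

```yaml
# Illustrative sketch for the JVM hosting the query node and the in-memory data node.
spring:
  profiles:
    active: forceNetty,local-content       # if set on the command line, re-list all default profiles
configuration:
  cubes:
    enable:
      some-other-cube: false               # placeholder key: disable every cube except var, pnl, and sensi
initial-load:
  business-dates: 2023-09-26,2023-09-27    # example AsOfDates to load in-memory
```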
JVM with Query Node Only (3-Node Distribution)
This JVM will consist of the query node only.
Start the JVM with the following Spring and Atoti Server properties, either in a property file (application.yaml) or as command-line arguments:
Property | Value | Description |
---|---|---|
spring.profiles.active | dist-query-node,local-content | Turns on Netty messaging to communicate with nodes in other JVMs, keeping the in-memory node configuration. |
configuration.cubes.enable.* (except var, pnl, and sensi) | false | Any cube other than VaR, Sensitivities, and PnL must be disabled as it is not supported in the DirectQuery configuration (except common/combined cube). |
initial-load.business-dates | Dates to load in-memory | The AsOfDates to include in the in-memory data node |
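A corresponding sketch for the query-node-only JVM differs mainly in the active profiles; the placeholder cube key and example dates are assumptions, as before.

```yaml
# Illustrative sketch for the query-node-only JVM (3-node distribution).
spring:
  profiles:
    active: dist-query-node,local-content
configuration:
  cubes:
    enable:
      some-other-cube: false               # placeholder key: disable every cube except var, pnl, and sensi
initial-load:
  business-dates: 2023-09-26,2023-09-27    # example AsOfDates
```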
JVM with Data Node Only (In-memory) (3-Node Distribution)
This JVM will consist of the in-memory data node.
You must ensure that the initial-load.business-dates property is defined.
Start the JVM with the following Spring and Atoti Server properties, either in a property file (application.yaml) or as command-line arguments:
Property | Value | Description |
---|---|---|
spring.profiles.active | dist-query-node,local-content | Turns on Netty messaging to communicate with nodes in other JVMs, keeping the in-memory node configuration. |
configuration.cubes.enable.* (except var, pnl, and sensi) | false | Any cube other than VaR, Sensitivities, and PnL must be disabled as it is not supported in the DirectQuery configuration (except common/combined cube). |
initial-load.business-dates | Dates to load in-memory | The AsOfDates to include in the in-memory data node |
server.port | e.g. 10012 | The exact value does not matter. The port must simply be different from any other node. |
content-service.db.url | jdbc:h2:mem:content_service;DB_CLOSE_DELAY=-1 | The ContentServer will be managed by the query node, so we specify to use an in-memory ContentServer for this data node. |
content-service.db.hibernate.hbm2ddl.auto | create | Since we are using an in-memory ContentServer, we will allow Atoti Server to automatically generate the correct ContentServer. |
Alternatively, you can set up your content server (the h2 properties) by duplicating the database file and pointing each node to a different file to avoid a lock file error.
warning
You must configure your nodes so that they don’t use each other’s port and database.
In this setup use the following:
Property | Value | Description |
---|---|---|
hibernate.dbfile.name | e.g. mr_ref_impl_content_service_d2 | The database file |
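Putting these properties together, a hedged sketch for the in-memory data node JVM could look as follows; the port, dates, and placeholder cube key are examples, and the file-based content-server alternative is shown as a comment.

```yaml
# Illustrative sketch for the in-memory data node JVM (3-node distribution).
spring:
  profiles:
    active: dist-query-node,local-content
configuration:
  cubes:
    enable:
      some-other-cube: false               # placeholder key: disable every cube except var, pnl, and sensi
initial-load:
  business-dates: 2023-09-26,2023-09-27    # example AsOfDates loaded in-memory
server:
  port: 10012                              # any port not used by another node
content-service:
  db:
    url: "jdbc:h2:mem:content_service;DB_CLOSE_DELAY=-1"
    hibernate:
      hbm2ddl:
        auto: create
# Alternative: point this node at its own content-server database file instead, e.g.
# hibernate.dbfile.name: mr_ref_impl_content_service_d2
```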
JVM with DirectQuery Data Node (2- or 3-Node Distribution)
When starting the DirectQuery data node, you must also ensure that the initial-load.business-dates property is defined and in line with the values provided to the in-memory data node.
Start the DirectQuery data node with the following Spring and Atoti Server properties, either in a property file or as command-line arguments, in addition to the required DirectQuery properties.
Property | Value | Description |
---|---|---|
spring.profiles.active | dist-data-node | Configures this instance to be a data node. |
server.port | e.g. 10011 | The exact value does not matter. The port must simply be different from any other node. |
configuration.cubes.enable.* (except var, pnl, and sensi) | false | Any cube other than VaR, Sensitivities, and PnL must be disabled as it is not supported in the DirectQuery configuration (except common/combined cube). |
content-service.db.url | jdbc:h2:mem:content_service;DB_CLOSE_DELAY=-1 | The ContentServer will be managed by the query node, so we specify to use an in-memory ContentServer for this data node. |
content-service.db.hibernate.hbm2ddl.auto | create | Since we are using an in-memory ContentServer, we will allow Atoti Server to automatically generate the correct ContentServer. |
directquery.enabled | true | Enables the use of DirectQuery for this instance. |
directquery.database.type | database_type, e.g. snowflake | The type of Database to connect to. |
directquery.database.database_type.* | Various Database-specific properties | Any other Database-specific properties required (e.g. user, password). |
initial-load.business-dates | Dates loaded in the JVM with the in-memory data node | The AsOfDates that were loaded into the in-memory data node. |
Alternatively, you can set up your content server (the h2 properties) by duplicating the database file and pointing each node to a different file to avoid a lock file error.
warning
You must configure your nodes so that they do not use each other’s port and database.
In this setup use the following:
Property | Value | Description |
---|---|---|
hibernate.dbfile.name | e.g. mr_ref_impl_content_service_d2 | The database file |
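As an illustration, a sketch for the DirectQuery data node JVM is shown below. The Snowflake connection keys (user, password) are placeholders for the Database-specific properties documented on the Supported Databases page, and the dates must match those loaded by the in-memory node.

```yaml
# Illustrative sketch for the DirectQuery data node JVM.
spring:
  profiles:
    active: dist-data-node
server:
  port: 10011                              # any port not used by another node
configuration:
  cubes:
    enable:
      some-other-cube: false               # placeholder key: disable every cube except var, pnl, and sensi
content-service:
  db:
    url: "jdbc:h2:mem:content_service;DB_CLOSE_DELAY=-1"
    hibernate:
      hbm2ddl:
        auto: create
directquery:
  enabled: true
  database:
    type: snowflake
    snowflake:                             # Database-specific properties; key names below are placeholders
      user: MY_USER
      password: MY_PASSWORD
initial-load:
  business-dates: 2023-09-26,2023-09-27    # the same example dates as the in-memory node
```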
Single JVM
Atoti Market Risk can be run purely on DirectQuery data in a single JVM.
By running in a single JVM with DirectQuery only, you can now see the DirectQuery data in the VaR/ES cube.
The configuration properties are as follows:
Property | Value | Description |
---|---|---|
configuration.cubes.enable.* (except var, pnl, and sensi) | false | Any cube other than VaR, Sensitivities, and PnL must be disabled as it is not supported in the DirectQuery configuration (except common/combined cube). |
directquery.enabled | true | Enables the use of DirectQuery for this instance. |
directquery.database.type | database_type | The type of Database to connect to. |
directquery.database.database_type.* | Various Database-specific properties | Any other Database-specific properties required. |
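A hedged sketch for this single-JVM setup, with Snowflake as an example Database type and placeholder connection keys, might look like this:

```yaml
# Illustrative sketch for running purely on DirectQuery in a single JVM.
configuration:
  cubes:
    enable:
      some-other-cube: false     # placeholder key: disable every cube except var, pnl, and sensi
directquery:
  enabled: true
  database:
    type: snowflake              # example; use your supported Database type
    snowflake:                   # Database-specific properties; key names below are placeholders
      user: MY_USER
      password: MY_PASSWORD
```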
Single JVM Limitations
Running DirectQuery with a single JVM means running purely on DirectQuery. This comes at the cost of reduced performance for Trade-level queries, and you will not have access to tools available exclusively in-memory, such as What-If and Sign-Off.
note
Running both an in-memory data node and DirectQuery data node under a single JVM is not currently supported.
Horizontal Distribution of a DirectQuery Data Node and an in-memory Data Node using Sign-Off features
Sign-off is not supported with DirectQuery, but you may want to deploy a cluster in which there is:
- one data node that contains in-memory data and for which Sign-Off features are enabled
- one DirectQuery data node
For this deployment, two options are possible:
1. Adjustment fields are not present in the DirectQuery database
In this case, the DirectQuery database is used without any modifications to its schema.
In this setup, use the following configuration for the DirectQuery data node, in addition to the settings already defined to enable horizontal distribution on the data node:
Property | Value | Description |
---|---|---|
configuration.sign-off.enable | true | Flag used to enable Sign-Off-related features |
directquery.withAdjustments | false | This property is only taken into account if configuration.sign-off.enable is true. Specifies whether the columns handling adjustments (SOURCE and INPUT_TYPE) are present in the database used with DirectQuery. If this property is false, those columns are not taken from the database. |
For a Sign-Off deployment, the Adjustments and Sign-Off dimensions are present in the Cube configuration. Therefore, those dimensions must also be present in the Cube’s definition on the DirectQuery data node to make horizontal distribution possible. To achieve this, “dummy” Adjustments and Sign-off hierarchies are created in the Cubes using DirectQuery.
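For illustration, the two Sign-Off-related properties above could be added to the DirectQuery data node's configuration as in the sketch below (shown in YAML form, alongside the node's existing DirectQuery settings).

```yaml
# Illustrative sketch for a DirectQuery data node in a Sign-Off deployment
# whose database does not contain the adjustment columns.
configuration:
  sign-off:
    enable: true           # keeps the Sign-Off/Adjustments dimensions in the cube definition
directquery:
  withAdjustments: false   # SOURCE and INPUT_TYPE columns are not taken from the database
```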
2. Adjustment fields are present in the DirectQuery database
In this case, the DirectQuery database is used with additional fields in the tables where adjustments can be present. The additional fields needed are:
- SOURCE: Sign-Off adjustment source tagging (e.g. user input)
- INPUT_TYPE: type of the adjustment entry (e.g. reversal)
For details on how to add them, see Adjustments in the DirectQuery database.