MDX Drillthrough
In business analysis, drillthrough describes the act of exploring data from a condensed view by examining an exploded view of the underlying data.
In the context of data aggregation, this means it is possible to list the facts at a given location, helping explain a specific aggregate result.
Atoti gives the user the ability to explore and extract sets of facts stored in our database, the Atoti Datastore. To achieve this, Atoti offers a bespoke API based on locations (see Database Queries). However, experience has shown that customers sometimes need to extract very particular sets of facts that do not rely on location conditions, but rather on some specific pre-computation (e.g. filters on measure values). The MDX drillthrough allows querying for facts in the datastore, based on an MDX query that returns a single cell. The facts returned by the drillthrough are the ones that contribute to contributors.COUNT at the location of that cell, with the same filters as the MDX query.
MDX Drillthrough Query
Basic Drillthrough Statement
The MDX drillthrough statement standard is defined by Microsoft.
An MDX drillthrough query looks very similar to an MDX query. It starts with the DRILLTHROUGH keyword, followed by an MDX select statement. Any select statement can be used, with as complex a filter as needed, provided that the statement is syntactically valid and returns a single cell.
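For instance, the following minimal statement lists all the facts contributing to a single cell (it reuses the sandbox EquityDerivativesCube and the DeskA filter that appear later on this page):
DRILLTHROUGH
SELECT FROM [EquityDerivativesCube] WHERE [Bookings].[Desk].[ALL].[AllMember].[DeskA]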
Here is an example of an MDX drillthrough with a more complex filter:
DRILLTHROUGH
SELECT FROM (
SELECT Filter(
[Time].[HistoricalDates].[AsOfDate].Members,
IsDate([Time].[HistoricalDates].CurrentMember.MemberValue)
AND DateDiff("d",[Time].[HistoricalDates].CurrentMember.MemberValue, Now()) = 0)
ON COLUMNS FROM (
SELECT {[Underlyings].[Products].[ALL].[AllMember].[LISTED].[OPTION],
[Underlyings].[Products].[ALL].[AllMember].[OTC].[BARRIER OPTION]}
ON COLUMNS FROM (
SELECT TopCount(Filter([Bookings].[Desk].Levels(1).Members,
NOT IsEmpty([Measures].[delta.SUM])), 1, [Measures].[delta.SUM])
ON COLUMNS FROM [EquityDerivativesCube])))
Real-time
The MDX drillthrough supports real-time updates. However, those updates are limited to the set and subset of data determined when the query is launched (i.e. the location and the subcubes are evaluated only once, when the query is registered). Atoti delivers the related updates for this location and those subcubes as and when they occur.
For example, the query above contains a TopCount filter: the real-time updates will concern data contributing to the desk that had the best delta.SUM at the moment the query was launched, even if further data updates change which desk is now the best.
Return Clause
It is possible to add a RETURN clause to the MDX drillthrough query to select which columns are to be retrieved, in other words which fields of the datastore are wanted.
The following example extracts all the facts regarding DeskA, but only the Date, Desk and pnl attributes are returned:
DRILLTHROUGH
SELECT FROM [EquityDerivativesCube] WHERE [Bookings].[Desk].[ALL].[AllMember].[DeskA]
RETURN [Date], [Desk], [pnl]
The RETURN clause:
- replaces the use of the context value drillthrough.hiddencolumns (this context value still exists though). Instead of indicating in the context value which columns need to be hidden, the user indicates directly in the query which columns need to be seen.
- defines the order of the columns of an MDX drillthrough query.
- is not mandatory. If not present, all columns will be retrieved.
- should contain a list of quoted column header names (i.e. names of fields defined in the Atoti Schema) separated by a comma.
Paging
It is possible to split the result of a drillthrough query into smaller, more convenient pieces. This reduces the memory overhead of a drillthrough query.
For example, suppose that a query returns 10 000 000 rows. It may be more convenient to receive the results in packets of 1 000 000 rows. This can be achieved with additional keywords in the query:
- MAXROWS: indicates the maximum number of rows to be returned by the resulting rowset (cf. MSDN specification).
- FIRSTROW: indicates the index of the first row to be retrieved (custom ActiveViam keyword).
So for the first million rows, we could have a query like:
DRILLTHROUGH
MAXROWS 1000000 //Max number of rows
FIRSTROW 0 //the index of the first row to retrieve
SELECT FROM [EquityDerivativesCube] WHERE [Bookings].[Desk].[ALL].[AllMember].[DeskA] //Regular MDX Statement
while the second million rows would be extracted with:
DRILLTHROUGH
MAXROWS 1000000 //Max number of rows
FIRSTROW 1000000 //the index of the first row to retrieve
SELECT FROM [EquityDerivativesCube] WHERE [Bookings].[Desk].[ALL].[AllMember].[DeskA] //Regular MDX Statement
and so on for all 10 groups of one million.
It is possible to use each of these keywords on its own, without the other. If MAXROWS is used on its own, the default value of FIRSTROW is 0. If FIRSTROW is used on its own, the default value for MAXROWS is -1, which means "no limit except the one defined as a safeguard in the cube configuration".
The maxrows safeguard defined in the cube configuration prevents the server from crashing with an OutOfMemory exception when too many rows are returned by a drillthrough query. This limit can be customized through the context value drillthrough.maxrows. It is important to be aware that the MAXROWS keyword does not override this context value.
FIRSTROW is a custom Atoti keyword, not part of the MSDN specification. A similar keyword, FIRSTROWSET, does exist, but does not mean the same thing: FIRSTROWSET identifies the partition whose rowset is returned first.
MDX Drillthrough Functions
As stated in the MSDN specification, several functions can officially be applied to the elements of the RETURN clause. These functions are used to format the values returned by an MDX drillthrough query, column by column.
Symbol | Meaning
---|---
✓ | fully supported
✗ | will not be supported
Function Name | Support | Comments
---|---|---
Name | ✓ | The elements of the specified column are formatted using the ToString formatter (Object.toString()).
Caption | ✓ | The elements of the specified column are formatted using the formatter declared in the cube description. If none is defined, the ToString formatter is used.
MemberValue | ✓ | The column is not formatted (default value).
UniqueName | ✗ |
Key | ✗ |
CustomRollup | ✗ | There is no custom rollup in Atoti.
CustomRollupProperties | ✗ |
UnaryOperator | ✗ |
By default, the elements of the columns returned by a query are Java objects. For example, in the sandbox, the risk AsOfDate field is a LocalDate (see DatastoreDescriptionConfig). Asking for [AsOfDate] is equivalent to MemberValue([AsOfDate]) and will return the LocalDate object, while asking for Caption([AsOfDate]) will return the DATE[yyyy-MM-dd] formatted string, since this is the formatter for the AsOfDate field in EquityDerivativesCubeDimensionsConfig.
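Combining this with the RETURN clause from the earlier example, the following sketch retrieves the formatted date instead of the raw LocalDate object (the field names are those of the sandbox):
DRILLTHROUGH
SELECT FROM [EquityDerivativesCube] WHERE [Bookings].[Desk].[ALL].[AllMember].[DeskA]
RETURN Caption([AsOfDate]), [Desk], [pnl]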
Calculated Columns
In broad terms, a calculated column is an additional column, derived and calculated from other column values and then added to the drillthrough result.
The configuration of a calculated column must declare all its dependencies (the other columns on which the calculated value depends). This ensures that Atoti retrieves the minimum set of columns needed to compute the calculated columns.
A calculated drillthrough column is an extended plugin value. An example is given in the sandbox project:
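/**
* A calculated column summing two Double underlying fields. When one of the two
* values is null, the other value is returned as-is.
*/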
@AtotiExtendedPluginValue(
intf = ICalculatedDrillthroughColumn.class,
key = DoubleAdderColumn.PLUGIN_KEY)
public class DoubleAdderColumn extends ACalculatedDrillthroughColumn {
private static final long serialVersionUID = 1L;
public static final String PLUGIN_KEY = "DoubleAdder";
public DoubleAdderColumn(final String name, final String fields, final Properties properties) {
super(name, fields, properties);
}
@Override
public Object evaluate(final Object[] underlyingFields) {
final Double v1 = (Double) underlyingFields[0];
final Double v2 = (Double) underlyingFields[1];
// Return the sum
if (v1 == null) {
return v2;
} else if (v2 == null) {
return v1;
} else {
return v1 + v2;
}
}
@Override
public String getType() {
return PLUGIN_KEY;
}
}
The calculated column is then added to the drillthrough properties in the cube configuration by referencing the plugin key and defining the underlying fields (see EquityDerivativesCubeConfig#drillthroughProperties()).
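As a sketch, such a description could be built with the CalculatedDrillthroughColumnDescription class used in the column-set example below; the column name and the underlying field names here are purely illustrative:
// Illustrative description wiring the DoubleAdder column to two hypothetical
// schema fields; the actual field names come from your own Atoti Schema.
new CalculatedDrillthroughColumnDescription(
DoubleAdderColumn.PLUGIN_KEY, // plugin key of the calculated column implementation
"delta + gamma", // name of the column in the drillthrough result
ACalculatedDrillthroughColumn.buildFieldProperty("delta", "gamma"), // declared dependencies
new Properties()); // no extra parameters for this column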
Calculated Column Sets
Calculated column sets allow calculated columns to be created dynamically, "on the fly" (i.e. when a query is launched), depending on various parameters contained in an IDrillthroughQueryContext.
A calculated drillthrough column set is an extended plugin value. An example is given in the sandbox project:
/**
* A drillthrough column set that is able to generate "on the fly", that is to say when a
* drillthrough query is launched (in real time or not), a set of new (calculated) drillthrough
* columns. In this example, it creates six columns to display pnl values for each currency defined
* in the sandbox.
*
* <p>There are two important things to notice. The first is the use of the {@link
* IDrillthroughQueryContext context} to generate new columns into which different attributes
* deduced from the context are injected. The second is the fact that conversion rates are computed
* when the columns are generated (when the query is launched). It means that in real time, the
* rates applied to the pnl values are the ones computed when the query was launched. <b>The rate
* is and must be the same as long as the continuous query lives.</b> This is mainly because
* calculated columns can't have associated streams and handlers like post-processors.
*
* @see IDrillthroughQueryContext
* @see PnlCurrencyColumn
* @author ActiveViam
*/
@AtotiExtendedPluginValue(
intf = ICalculatedDrillthroughColumnSet.class,
key = PnlCurrencyColumnSet.PLUGIN_KEY)
public class PnlCurrencyColumnSet extends ACalculatedDrillthroughColumnSet {
/** Serialization. */
private static final long serialVersionUID = 1L;
/** The logger. */
private static Logger logger = Logger.getLogger(PnlCurrencyColumnSet.class.getName());
/** Calculated column set plugin key. */
public static final String PLUGIN_KEY = "PnlCurrencyColumnSet";
protected static final String SEPARATOR = ",";
/**
* Key to retrieve from the {@link Properties} the prefix to apply to each column's name that will
* be generated.
*/
public static final String PREFIX_KEY = "prefix";
/** Key to retrieve from the {@link Properties} the existing currencies. */
public static final String CURRENCIES_KEY = "currencies";
/** List of available currencies. */
protected final String[] underlierCurrencies;
/** The prefix retrieved from the {@link Properties}. */
public final String baseName;
/**
* Constructor.
*
* @param properties properties associated to be given to this {@link
* ICalculatedDrillthroughColumnSet} to parameterize the implementation class.
*/
public PnlCurrencyColumnSet(final Properties properties) {
super(properties);
this.baseName = (String) properties.get(PREFIX_KEY);
this.underlierCurrencies = ((String) properties.get(CURRENCIES_KEY)).split(SEPARATOR);
}
@Override
public Collection<ICalculatedDrillthroughColumnDescription> generate(
final IDrillthroughQueryContext queryContext) {
// Retrieve the IQueryCache from the context. It is used to get
// access to the current database version.
final IQueryCache queryCache = queryContext.getContext().get(IQueryCache.class);
if (queryCache == null) {
logger.log(Level.FINE, "The query cache is null, cannot retrieve the database version");
return null;
}
// Get rates from the database for this version
final Map<String, TObjectDoubleMap<String>> currencies =
getAllRatesByCurrency(queryCache.getDatabaseVersion());
// underlying field of the columns to generate
final String underlyingFields =
ACalculatedDrillthroughColumn.buildFieldProperty(RISK__PNL, "Currency");
// For each currency, it creates a new column responsible for
// giving the pnl in a given currency.
final List<ICalculatedDrillthroughColumnDescription> descriptions = new ArrayList<>();
for (final String underlierCurrency : underlierCurrencies) {
// The rates for the given currency is added into properties
// The calculated column generated will introspect it to
// extract the conversion rate and convert the pnl value
final Properties properties = new Properties();
properties.put(PnlCurrencyColumn.RATE_KEY, currencies.get(underlierCurrency));
// Use a different name for each column. Here the name depends on the currency.
descriptions.add(
new CalculatedDrillthroughColumnDescription(
PnlCurrencyColumn.PLUGIN_KEY,
baseName + " " + underlierCurrency,
underlyingFields,
properties));
}
return descriptions;
}
/**
* Retrieves all the rates between each pair of currencies.
*
* @param databaseVersion the database version to query
* @return a map containing the rates between each currency, indexed by source currency.
*/
protected Map<String, TObjectDoubleMap<String>> getAllRatesByCurrency(
final IDatabaseVersion databaseVersion) {
final var query =
databaseVersion
.getQueryManager()
.listQuery()
.forTable(FOREX_STORE_NAME)
.withoutCondition()
.selectingAllTableFields()
.toQuery();
final ICursor cursor = databaseVersion.getQueryRunner().listQuery(query).runCurrentThread();
final Map<String, TObjectDoubleMap<String>> currencies = new HashMap<>();
for (final IRecordReader reader : cursor) {
final String cur1 = (String) reader.read(FOREX_CURRENCY);
final String cur2 = (String) reader.read(FOREX_TARGET_CURRENCY);
final double rate = (double) reader.read(FOREX_RATE);
if (!currencies.containsKey(cur1)) {
currencies.put(cur1, new TObjectDoubleHashMap<>());
currencies.get(cur1).put(cur1, 1.0d); // 1 for the currency itself
}
currencies.get(cur1).put(cur2, rate);
}
return currencies;
}
@Override
public String getType() {
return PnlCurrencyColumnSet.PLUGIN_KEY;
}
}
Note that a calculated column defines the evaluate method, which returns the column value based on the underlying fields. By contrast, a calculated column set defines the generate method: it does not directly explain how to evaluate each column, but generates calculated column descriptions which, themselves, refer to calculated columns through their plugin key. It is those calculated columns that define the evaluate method.
Calculated Members
When executing an MDX query, each cell corresponds to a combination of a location (possibly filtered) and a measure. When doing a drillthrough on a cell of an MDX query, we list the facts that contributed to that cell.
In the case of a simple measure, the facts correspond to the (possibly filtered) location, and we do not need to take the measure into account. However, things get more complicated if the measure causes a change in the location (for example, yesterday's data, or the parent's total). In that case, does the drillthrough user want to see the data for the selected location, or the (shifted) data that was used to compute the measure at the selected location? And what if several locations are used to compute the measure (percentage of parent for example, where both the queried location and its parent are used)?
Atoti partially supports calculated members in an MDX drillthrough query, with restrictions that make it straightforward to decide which data should be displayed in the drillthrough. We allow simple calculated members that do not modify the location; some very specific structures for percentage of parent and percentage of total (where we assume the data the user wants for the drillthrough are the facts contributing to the queried location, not the parent or the total); and tuples (where we assume the data the user wants are the facts contributing to the tuple, rather than the queried location).
The following table provides a list of supported calculated members in an MDX Drillthrough query:
Example | Support | Comment
---|---|---
[Measures].[custom] AS [Measures].[pnl.SUM] $ 10 | ✓ | Basic operation with a measure. $ can be one of the following operators: +, -, *, /
[Measures].[custom] AS [Measures].[gamma.SUM] $ [Measures].[delta.SUM] | ✓ | Operation between two (or more) measures. Not supported in polymorphic distribution. $ can be one of the following operators: +, -, *, /
[Measures].[custom] AS [Measures].[pnl.SUM], FORE_COLOR = IIf([Measures].[dummy] >= 0, RGB(0,0,0), RGB(255,255,255)) | ✓ | Calculated member for dynamic formatting. Simple formatting is also supported.
[Measures].[custom] AS ([Measures].[pnl.SUM], [Bookings].[Desk].[ALL].[AllMember].[DeskA], [Time].[AsOfDate].[Tue Jan 27 01:00:00 CET 2015]) | ✓ | The calculated member is a simple tuple. Be careful when using such a formula, as it might change the location where the drillthrough is performed.
[Measures].[custom] AS IIf(IsEmpty(([Bookings].[Bookings].CurrentMember.Parent, [Measures].[contributors.COUNT])), NULL, [Measures].[contributors.COUNT] / ([Bookings].[Bookings].CurrentMember.Parent, [Measures].[contributors.COUNT])) | ✓ | Percentage of parent: the location is not modified.
[Measures].[custom] AS IIf(IsEmpty(([Bookings].[Bookings].[ALL].[AllMember], [Measures].[contributors.COUNT])), NULL, [Measures].[contributors.COUNT] / ([Bookings].[Bookings].[ALL].[AllMember], [Measures].[contributors.COUNT])) | ✓ | Percentage of total: the location is not modified.
[Measures].[custom] AS ([Measures].[contributors.COUNT], [Date].[Date].[Calendar Year].[2012]) - ([Measures].[contributors.COUNT], [Date].[Date].[Calendar Year].[2011]) | ✓ |
[Bookings].[Desk].[custom] AS [Bookings].[Desk].[ALL].[AllMember].[DeskA] | ✗ | Not supported because the dimension of the calculated member is not Measures.
[Measures].[custom] AS Count(Descendants([Bookings].[Desk].CurrentMember, [Bookings].[Desk].[Desk]), EXCLUDEEMPTY) | ✓ | Distinct count formula.
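As an illustration, a drillthrough on a cell computed with a supported calculated member could be written as follows, declaring the member in a WITH clause as in a regular MDX statement (a sketch based on the first row of the table above):
DRILLTHROUGH
WITH MEMBER [Measures].[custom] AS [Measures].[pnl.SUM] * 10
SELECT FROM [EquityDerivativesCube]
WHERE ([Measures].[custom], [Bookings].[Desk].[ALL].[AllMember].[DeskA])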
Limitations
- The keywords MAXROWS and FIRSTROW must not be used when executing an MDX drillthrough query in real time.
- Real time is only possible on the latest epoch of the cube.
- Only the declared calculated members are supported.
- The location and subcubes are evaluated once and once only, at the initialization of the query. See the Real-time section above.
How to Make Drillthrough Queries Compatible with Analysis Hierarchy Business Logic
The internal query engine knows which set of facts needs to be retrieved from the datastore, based on the MDX drillthrough query. However, some hierarchies, such as analysis hierarchies, are defined by customized analytical algorithms rather than the usual data introspection mechanism. This is why some projects need to tailor the conversion from MDX drillthrough query to datastore query by adding their own business logic to the process.
To do so, one can implement their own ILocationInterpreter, or extend the DefaultLocationInterpreter. This object is in charge of converting drillthrough query locations (IDrillthroughQuery#getLocations()) into a datastore condition (ICondition). It can be used to change the set of facts returned by a drillthrough query, for instance to filter the result.
By default (DefaultLocationInterpreter), the location interpreter ignores the coordinates of the analysis hierarchies. An example of how to extend DefaultLocationInterpreter to take an analysis hierarchy into account in the drillthrough can be found in the sandbox:
/**
* This {@link ILocationInterpreter} is able to create a {@link ICondition} that follows the
* business logic of the bucketing. The condition will be used while performing a drillthrough
* taking into account the location coordinates and subcubes on the Copper bucketing hierarchy.
*
* <p>Only trades whose date belong to the selected buckets (this selection can be deduced from
* locations and subcubes) will appear in the drillthrough result.
*
* <p>Pay attention that this <b>object is shared</b> among queries, so it must support
* concurrent calls.
*
* @author ActiveViam
*/
public class TimeBucketLocationInterpreter extends DefaultLocationInterpreter {
/** Time buckets indexed by their name. */
public static final Map<Object, TimeBucket> bucketMap = new LinkedHashMap<>();
static {
bucketMap.put("1D", new TimeBucket(ChronoUnit.DAYS, 1));
bucketMap.put("2D", new TimeBucket(ChronoUnit.DAYS, 2));
bucketMap.put("3D", new TimeBucket(ChronoUnit.DAYS, 3));
bucketMap.put("1W", new TimeBucket(ChronoUnit.WEEKS, 1));
bucketMap.put("2W", new TimeBucket(ChronoUnit.WEEKS, 2));
bucketMap.put("1M", new TimeBucket(ChronoUnit.MONTHS, 1));
bucketMap.put("2M", new TimeBucket(ChronoUnit.MONTHS, 2));
bucketMap.put("3M", new TimeBucket(ChronoUnit.MONTHS, 3));
bucketMap.put("6M", new TimeBucket(ChronoUnit.MONTHS, 6));
bucketMap.put("1Y", new TimeBucket(ChronoUnit.YEARS, 1));
bucketMap.put("2Y", new TimeBucket(ChronoUnit.YEARS, 2));
bucketMap.put("5Y", new TimeBucket(ChronoUnit.YEARS, 5));
bucketMap.put("10Y", new TimeBucket(ChronoUnit.YEARS, 10));
bucketMap.put("20Y", new TimeBucket(ChronoUnit.YEARS, 20));
bucketMap.put("30Y", new TimeBucket(ChronoUnit.YEARS, 30));
bucketMap.put("50Y", new TimeBucket(ChronoUnit.YEARS, 50));
bucketMap.put("<30D", new TimeBucket(ChronoUnit.DAYS, 30));
}
public static NavigableMap<Long, Object> createBucketMap(final long timeReference) {
final NavigableMap<Long, Object> result = new TreeMap<>();
bucketMap.forEach(
(bucketName, timeBucket) -> result.put(timeBucket.getBoundary(timeReference), bucketName));
return result;
}
/**
* Relative time bucket.
*
* @param unit The time unit.
* @param count The upper boundary of this bucket, expressed in the given unit.
*/
public record TimeBucket(ChronoUnit unit, int count) {
/**
* Gives the instant that bounds this bucket, starting from the reference.
*
* @param reference The reference instant from which we should start.
* @return The upper boundary of this bucket, in milliseconds since the epoch.
*/
public long getBoundary(final long reference) {
final ZonedDateTime ref = Instant.ofEpochMilli(reference).atZone(ZoneOffset.UTC);
return ref.plus(count, unit).toInstant().toEpochMilli();
}
@Override
public String toString() {
return this.getClass().getSimpleName() + " of " + this.count + " " + this.unit;
}
}
/** Bucket hierarchy property. */
public static final String BUCKET_HIERARCHY_PROPERTY = "bucketHierarchy";
/** Bucketed Level property. */
public static final String BUCKETED_LEVEL_PROPERTY = "bucketLevel";
/** The ordinal of the bucket level. */
public static final int BUCKET_LEVEL_ORDINAL = 1;
/** Name of the analysis hierarchy. */
protected final String bucketHierarchyName;
/** Property of the bucketed level. */
protected final String bucketedLevel;
/** The current time in milliseconds. */
protected final long now;
/** The bucket level info. */
protected ILevelInfo bucketLevelInfo;
/** The bucketed level info. */
protected ILevelInfo bucketedLevelInfo;
/** The bucket hierarchy. */
protected final IHierarchy bucketHierarchy;
/**
* Default constructor.
*
* @param pivot the pivot version used to initialize the interpreter
* @param underlyingView the cube selection underlying the drillthrough
* @param properties some additional {@link Properties properties}
*/
public TimeBucketLocationInterpreter(
final IActivePivotVersion pivot,
final ISelection underlyingView,
final Properties properties) {
super(underlyingView, properties);
// It will create the bucket map.
this.bucketedLevel = properties.getProperty(BUCKETED_LEVEL_PROPERTY);
this.bucketHierarchyName = properties.getProperty(BUCKET_HIERARCHY_PROPERTY);
this.now = System.currentTimeMillis();
// Initialization
this.bucketHierarchy = init(pivot);
}
/**
* Initialization of the location interpreter. It is called each time a condition is created (i.e.
* per query).
*
* @param pivot the current cube
* @return the analysis hierarchy (TimeBucketDynamic in this case)
*/
protected IHierarchy init(final IActivePivot pivot) {
// Retrieve the bucket hierarchy.
final IHierarchy bucketHierarchy = HierarchiesUtil.getHierarchy(pivot, bucketHierarchyName);
if (bucketHierarchy.getLevels().size() != BUCKET_LEVEL_ORDINAL + 1) {
throw new ActiveViamRuntimeException(
"The bucket hierarchy must have exactly one level: "
+ bucketHierarchy.getName()
+ ", "
+ bucketHierarchy.getLevels());
}
// Extract the level info
bucketLevelInfo = bucketHierarchy.getLevels().get(BUCKET_LEVEL_ORDINAL);
// Retrieve the bucketed level info.
bucketedLevelInfo = HierarchiesUtil.getLevel(pivot, bucketedLevel);
return bucketHierarchy;
}
@Override
public ICondition buildCondition(
final IActivePivot pivot,
final Collection<ILocation> locations,
final Collection<String> measures) {
// The conditions built from the locations and taking into account the analysis hierarchy.
final ICondition locationCondition = super.buildCondition(pivot, locations, measures);
// The condition deduced from the subcubes.
final var subCubesCondition = getSubCubeCondition(pivot.getContext(), bucketHierarchy);
if (locationCondition == null && subCubesCondition == null) {
return null;
} else if (locationCondition != null && subCubesCondition != null) {
return BaseConditions.and(locationCondition, subCubesCondition);
} else if (locationCondition != null && subCubesCondition == null) {
return locationCondition;
} else {
return subCubesCondition; // Only the subcubes conditions is not null
}
}
@Override
protected List<ICondition> locationConditions(
final List<? extends IHierarchy> hierarchies, final ILocation location) {
// First, compute the condition using the default behavior,
// that is to say the one ignoring the analysis hierarchies.
final List<ICondition> originalConditions = super.locationConditions(hierarchies, location);
// Introspect the location
if (LocationUtil.isAtOrBelowLevel(location, bucketLevelInfo)) {
final Object[] values =
LocationUtil.extractValues(location, new ILevelInfo[] {bucketLevelInfo});
if (values != null) {
// The condition deduced from the location.
final var locationCondition = createBucketCondition(values);
if (locationCondition != null) {
originalConditions.add(locationCondition);
}
}
}
return originalConditions;
}
/**
* Extract a {@link ICondition condition} from the subcubes.
*
* @param context the current pivot context
* @param bucketHierarchy the bucket hierarchy (TimeBucketDynamic)
* @return the created condition. It can be null.
*/
protected ICondition getSubCubeCondition(
final IContext context, final IHierarchy bucketHierarchy) {
// The condition deduced from the subcubes.
ICondition subCubesCondition = null;
// Introspect the sub cubes
final ISubCubeProperties subCubes = context.get(ISubCubeProperties.class);
final int depthOfRestriction =
subCubes.isHierarchyRestricted(bucketHierarchy.getHierarchyInfo().getIdentifier());
if (depthOfRestriction >= 0) {
final List<? extends IMember> axisMembers =
bucketHierarchy.retrieveMembers(BUCKET_LEVEL_ORDINAL + 1);
if (axisMembers != null) {
final Object[] grantedBuckets = new Object[axisMembers.size()];
int i = 0;
for (final IMember member : axisMembers) {
grantedBuckets[i++] = ((IAxisMember) member).getObjectPath()[depthOfRestriction];
}
subCubesCondition = createBucketCondition(grantedBuckets);
}
}
return subCubesCondition;
}
/**
* Creates a {@link ICondition} from the input buckets, relative to the current time.
*
* @param buckets the buckets from which a {@link ICondition} will be created.
* @return the deduced {@link ICondition}. It can be null.
*/
protected ICondition createBucketCondition(final Object[] buckets) {
final NavigableSet<Object> elements = new TreeSet<>(bucketLevelInfo.getComparator());
for (final Object value : buckets) {
if (value != null) {
elements.add(value);
}
}
if (elements.isEmpty()) {
// No non-null bucket was provided, so no condition can be built.
return null;
}
final Object lastBucket = elements.last();
if (lastBucket != null) {
final String fieldName = bucketedLevelInfo.getSelectionField();
final FieldPath fieldExpression = underlyingView.findFieldByAlias(fieldName).getFieldPath();
final Long lastDateElement = bucketMap.get(lastBucket).getBoundary(this.now);
final Calendar cal = Calendar.getInstance();
cal.setTimeInMillis(lastDateElement);
return BaseConditions.lessOrEqual(fieldExpression, cal.getTime());
}
return null;
}
}
Then an IDrillthroughExecutor that makes use of the created ILocationInterpreter must be defined. In the sandbox, you can look at TimeBucketDrillthroughExecutor:
/**
* A drillthrough executor that uses the {@link TimeBucketLocationInterpreter} as {@link
* ILocationInterpreter}.
*
* @author ActiveViam
*/
@AtotiExtendedPluginValue(
intf = IDrillthroughExecutor.class,
key = TimeBucketDrillthroughExecutor.PLUGIN_KEY)
public class TimeBucketDrillthroughExecutor extends DrillthroughExecutor {
/** The plugin key. */
public static final String PLUGIN_KEY = "TIME_BUCKET_DT_EXECUTOR";
/**
* Default constructor.
*
* @param context the context needed for the execution of a drillthrough query
*/
public TimeBucketDrillthroughExecutor(final IDrillthroughSession context) {
super(context);
}
@Override
public String getType() {
return PLUGIN_KEY;
}
@Override
protected ILocationInterpreter createLocationInterpreter(
final ISelection cubeSelection, final Properties properties) {
return new TimeBucketLocationInterpreter(
this.context.getActivePivotVersion(), cubeSelection, properties);
}
}
Then the executor must be referenced when configuring the cube, using withDrillthroughExecutor.
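For instance, a minimal sketch, assuming a cube description builder like the ones used in the sandbox configuration classes (the builder variable and the surrounding method chain are hypothetical):
// Hypothetical cube configuration snippet: the executor is referenced by its
// plugin key when building the cube description. The actual builder chain
// depends on your project setup.
builder.withDrillthroughExecutor(TimeBucketDrillthroughExecutor.PLUGIN_KEY)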