MDX Drillthrough

In business analysis, drillthrough is a term used to describe the act of exploring data from a condensed view by examining an exploded view of the underlying data.

In the context of data aggregation, this means it is possible to list the facts at a given location, helping explain a specific aggregate result.

ActivePivot gives the user the ability to explore and extract sets of facts stored in our database, the ActivePivot Datastore. To achieve this, ActivePivot offers a bespoke API based on locations (see Datastore Queries). However, experience has shown us that customers sometimes need to extract some very particular set of facts, that do not rely on location conditions, but rather on some specific pre-computation (e.g. filters on measure values). The MDX drillthrough allows querying for facts in the datastore, based on an MDX query that returns a single cell. The facts returned by the drillthrough are the ones that add to the contributors.COUNT on the location of the cell, with the same filters as the MDX query.

MDX Drillthrough Query

Basic Drillthrough Statement

The MDX drillthrough statement standard is defined by Microsoft. An MDX drillthrough query looks very similar to a regular MDX query: it starts with the DRILLTHROUGH keyword, followed by an MDX select statement. Any select statement can be used, with as complex a filter as needed, provided that it is syntactically valid and returns a single cell.

Here is an example of an MDX drillthrough with a complex filter:

DRILLTHROUGH
SELECT FROM (
  SELECT Filter(
    [Time].[HistoricalDates].[AsOfDate].Members,
    IsDate([Time].[HistoricalDates].CurrentMember.MemberValue)
    AND DateDiff("d", [Time].[HistoricalDates].CurrentMember.MemberValue, Now()) = 0)
  ON COLUMNS FROM (
    SELECT {[Underlyings].[Products].[ALL].[AllMember].[LISTED].[OPTION],
            [Underlyings].[Products].[ALL].[AllMember].[OTC].[BARRIER OPTION]}
    ON COLUMNS FROM (
      SELECT TopCount(
        Filter([Bookings].[Desk].Levels(1).Members,
               NOT IsEmpty([Measures].[delta.SUM])),
        1, [Measures].[delta.SUM])
      ON COLUMNS FROM [EquityDerivativesCube])))

Real-time

The MDX drillthrough supports real-time updates. However, those updates are limited to the set and subset of data determined when the query is launched (i.e. the location and the subcubes are evaluated only once, when the query is registered). ActivePivot delivers the related updates for this location and those subcubes as and when they occur.

For example, the query above contains a TopCount filter: the real-time updates will concern data contributing to the desk that had the best delta.SUM at the moment the query was launched, even if later data updates change which desk is best.

Return Clause

It is possible to add a RETURN clause to the MDX drillthrough query to select which columns are retrieved, in other words which fields of the datastore are wanted.

The following example extracts all the facts regarding DeskA, but only the Date, Desk and pnl attributes are returned:

DRILLTHROUGH 
SELECT FROM [EquityDerivativesCube] WHERE [Bookings].[Desk].[ALL].[AllMember].[DeskA]
RETURN [Date], [Desk], [pnl]

The RETURN clause:

  • replaces the use of the context value drillthrough.hiddencolumns (this context value still exists, though). Instead of indicating in the context value which columns must be hidden, the user indicates directly in the query which columns must be shown.
  • defines the order of the columns of an MDX drillthrough query, as illustrated below.
  • is not mandatory. If it is not present, all columns are retrieved.
  • should contain a list of quoted column header names (i.e. names of fields defined in the ActivePivot schema), separated by commas.
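
For example, reusing the query above, reordering the RETURN clause reorders the columns of the result (here pnl comes first):

DRILLTHROUGH
SELECT FROM [EquityDerivativesCube] WHERE [Bookings].[Desk].[ALL].[AllMember].[DeskA]
RETURN [pnl], [Date], [Desk]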

Paging

It is possible to cut the result of a drillthrough query into smaller, more manageable pieces. This reduces the memory overhead of a drillthrough query.

For example, let us suppose that a query returns 10 000 000 rows. It may be more convenient to receive the results in packets of 1 000 000 rows. This can be achieved by using additional keywords in the query:

  • MAXROWS: indicates the maximum number of rows to be returned by the resulting rowset (cf. MSDN specification).
  • FIRSTROW: indicates the index of the first row to be retrieved (custom ActiveViam keyword).

So for the first million rows, we could have a query like:

DRILLTHROUGH 
MAXROWS 1000000 //Max number of rows
FIRSTROW 0 //the index of the first row to retrieve
SELECT FROM [EquityDerivativesCube] WHERE [Bookings].[Desk].[ALL].[AllMember].[DeskA] //Regular MDX Statement

while the second million rows would be extracted with:

DRILLTHROUGH
MAXROWS 1000000 //Max number of rows
FIRSTROW 1000000 //the index of the first row to retrieve
SELECT FROM [EquityDerivativesCube] WHERE [Bookings].[Desk].[ALL].[AllMember].[DeskA] //Regular MDX Statement

and so on for all 10 groups of one million.

It is possible to use each of these keywords on its own, without the other. If MAXROWS is used on its own, the default value of FIRSTROW is 0. If FIRSTROW is used on its own, the default value for MAXROWS is -1, which means "no limit except the one defined as a safeguard in the cube configuration".
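
For example, a sketch that caps the result at 500 000 rows while FIRSTROW keeps its default value of 0:

DRILLTHROUGH
MAXROWS 500000
SELECT FROM [EquityDerivativesCube] WHERE [Bookings].[Desk].[ALL].[AllMember].[DeskA]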

The maxrows safeguard defined in the cube configuration prevents the server from crashing (with an OutOfMemory exception) because too many rows are returned by a drillthrough query. This safeguard can be customized through the context value drillthrough.maxrows. Be aware that the MAXROWS keyword does not override this context value.

FIRSTROW is a custom keyword of ActivePivot and not part of the MSDN specification. A similar keyword, FIRSTROWSET, does exist, but does not mean the same thing: FIRSTROWSET identifies the partition whose rowset is returned first.

MDX Drillthrough Function

As stated in the MSDN specification, several functions can officially be applied to the elements of the RETURN clause. These functions format the values returned by an MDX drillthrough query, column by column.

In the lists below, ✓ means fully supported and ✗ means the function will not be supported.

  • Name (✓): the elements of the specified column are formatted using the ToString formatter (Object.toString()).
  • Caption (✓): the elements of the specified column are formatted using the formatter declared in the cube description. If none is defined, the ToString formatter is used.
  • MemberValue (✓): the column is not formatted (default behavior).
  • UniqueName (✗)
  • Key (✗)
  • CustomRollup (✗): there is no custom rollup in ActivePivot.
  • CustomRollupProperties (✗)
  • UnaryOperator (✗)

By default, the elements of the columns returned by a query are Java objects. For example, in the sandbox, the risk AsOfDate field is a LocalDate (see DatastoreDescriptionConfig). Asking for [AsOfDate] is equivalent to MemberValue([AsOfDate]) and will return the LocalDate object, while asking for Caption([AsOfDate]) will return the DATE[yyyy-MM-dd] formatted string, since this is the formatter for the AsOfDate field in EquityDerivativesCubeDimensionsConfig.
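
For example, a minimal sketch (reusing the DeskA filter from the RETURN clause example) that returns the formatted date string next to the raw pnl value:

DRILLTHROUGH
SELECT FROM [EquityDerivativesCube] WHERE [Bookings].[Desk].[ALL].[AllMember].[DeskA]
RETURN Caption([AsOfDate]), [pnl]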

Calculated Columns

In broad terms, a calculated column is an additional column, derived and calculated from other column values and then added to the drillthrough result.

The configuration of a calculated column must declare all its dependencies (the other columns on which the calculated value depends). This ensures that ActivePivot can retrieve the minimum set of columns needed to compute the calculated columns.

A calculated drillthrough column is an extended plugin value. An example is given in the sandbox project:

@QuartetExtendedPluginValue(intf = ICalculatedDrillthroughColumn.class, key = DoubleAdderColumn.PLUGIN_KEY)
public class DoubleAdderColumn extends ACalculatedDrillthroughColumn {

  private static final long serialVersionUID = 1L;

  public static final String PLUGIN_KEY = "DoubleAdder";

  public DoubleAdderColumn(String name, String fields, Properties properties) {
    super(name, fields, properties);
  }

  @Override
  public Object evaluate(Object[] underlyingFields) {
    Double v1 = (Double) underlyingFields[0];
    Double v2 = (Double) underlyingFields[1];
    // Return the sum
    if (v1 == null) {
      return v2;
    } else if (v2 == null) {
      return v1;
    } else {
      return v1 + v2;
    }
  }

  @Override
  public String getType() {
    return PLUGIN_KEY;
  }
}

The calculated column is then added to the drillthrough properties in the cube configuration by referencing the plugin key and defining the underlying fields (see EquityDerivativesCubeConfig#drillthroughProperties()).
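
For illustration only, here is a hedged sketch of such a declaration. It reuses the CalculatedDrillthroughColumnDescription constructor that appears in the column set example below; the column name and the underlying field names ("pnl", "delta") are illustrative, and the actual registration code lives in the sandbox's drillthroughProperties():

// Sketch: describe a calculated column backed by the DoubleAdder plugin key.
Properties properties = new Properties();
ICalculatedDrillthroughColumnDescription doubleAdder =
    new CalculatedDrillthroughColumnDescription(
        DoubleAdderColumn.PLUGIN_KEY,                                     // plugin key of the calculated column
        "pnl + delta",                                                    // illustrative column name
        ACalculatedDrillthroughColumn.buildFieldProperty("pnl", "delta"), // illustrative underlying fields
        properties);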

Calculated Column Sets

Calculated column sets allow calculated columns to be created dynamically, "on the fly" (i.e. when a query is launched), depending on various parameters contained in an IDrillthroughQueryContext.

A calculated drillthrough column set is an extended plugin value. An example is given in the sandbox project:

/**
 * A drillthrough column set that is able to generate a set of new (calculated) drillthrough columns "on the
 * fly", that is to say when a drillthrough query is launched (in real time or not). In this example, it creates
 * six columns to display pnl values for each currency defined in the sandbox.
 * <p>
 * There are two important things to notice. The first is the use of {@link IDrillthroughQueryContext context} to
 * generate new columns into which different attributes deduced from the context are injected. The second is the
 * fact that conversion rates are computed when the columns are generated (when the query is launched). It means
 * that in real time, the rates applied to the pnl values are the ones computed when the query was launched. <b>The
 * rate is and must be the same as long as the continuous query lives.</b> This is mainly because calculated
 * columns can't have associated streams and handlers like post processors do.
 *
 * @see IDrillthroughQueryContext
 * @see PnlCurrencyColumn
 * @author ActiveViam
 */
@QuartetExtendedPluginValue(intf = ICalculatedDrillthroughColumnSet.class, key = PnlCurrencyColumnSet.PLUGIN_KEY)
public class PnlCurrencyColumnSet extends ACalculatedDrillthroughColumnSet {

  /** Serialization. */
  private static final long serialVersionUID = 1L;

  /** The logger. */
  private static Logger logger = Logger.getLogger(PnlCurrencyColumnSet.class.getName());

  /** Calculated column set plugin key. */
  public static final String PLUGIN_KEY = "PnlCurrencyColumnSet";

  protected static final String SEPARATOR = ",";

  /**
   * Key to retrieve from the {@link Properties} the prefix to apply to the name of each generated column.
   */
  public static final String PREFIX_KEY = "prefix";

  /**
   * Key to retrieve from the {@link Properties} the existing currencies.
   */
  public static final String CURRENCIES_KEY = "currencies";

  /** List of available currencies. */
  protected final String[] underlierCurrencies;

  /**
   * Stores the prefix retrieved from the {@link Properties}.
   */
  public final String baseName;

  /**
   * Constructor.
   *
   * @param properties properties given to this {@link ICalculatedDrillthroughColumnSet} to
   *        parameterize the implementation class.
   */
  public PnlCurrencyColumnSet(Properties properties) {
    super(properties);
    this.baseName = (String) properties.get(PREFIX_KEY);
    this.underlierCurrencies = ((String) properties.get(CURRENCIES_KEY)).split(SEPARATOR);
  }

  @Override
  public Collection<ICalculatedDrillthroughColumnDescription> generate(IDrillthroughQueryContext queryContext) {
    // Retrieve the IQueryCache from the context. It is used to get
    // access to the current datastore version.
    final IQueryCache queryCache = queryContext.getContext().get(IQueryCache.class);
    if (queryCache == null) {
      logger.log(Level.FINE, "The query cache is null, cannot retrieve a " + IDatastoreVersion.class);
      return null;
    }

    // Get rates from the datastore for this version
    Map<String, TObjectDoubleMap<String>> currencies = getAllRatesByCurrency(queryCache.getDatastoreVersion());

    // Underlying fields of the columns to generate
    String underlyingFields = ACalculatedDrillthroughColumn.buildFieldProperty(RISK__PNL, "Currency");

    // For each currency, create a new column responsible for
    // giving the pnl in that currency.
    List<ICalculatedDrillthroughColumnDescription> descriptions = new ArrayList<>();
    for (String underlierCurrency : underlierCurrencies) {
      // The rates for the given currency are added into the properties.
      // The generated calculated column will introspect them to
      // extract the conversion rate and convert the pnl value.
      Properties properties = new Properties();
      properties.put(PnlCurrencyColumn.RATE_KEY, currencies.get(underlierCurrency));
      // Use a different name for each column. Here the name depends on the currency.
      descriptions.add(
          new CalculatedDrillthroughColumnDescription(
              PnlCurrencyColumn.PLUGIN_KEY,
              baseName + " " + underlierCurrency,
              underlyingFields,
              properties));
    }
    return descriptions;
  }

  /**
   * Retrieves all the rates between each currency from the {@link IDatastoreVersion datastore}.
   *
   * @param datastoreVersion the {@link IDatastoreVersion datastore}.
   * @return a map containing the rates between each currency, indexed by currency.
   */
  protected Map<String, TObjectDoubleMap<String>> getAllRatesByCurrency(IDatastoreVersion datastoreVersion) {
    // Get the content of the datastore for this version
    IDictionaryCursor cursor = datastoreVersion.getQueryRunner()
        .forStore(FOREX_STORE_NAME)
        .withoutCondition()
        .selectingAllStoreFields()
        .onCurrentThread()
        .run();
    Map<String, TObjectDoubleMap<String>> currencies = new HashMap<>();
    for (IRecordReader reader : cursor) {
      String cur1 = (String) reader.read(FOREX_CURRENCY);
      String cur2 = (String) reader.read(FOREX_TARGET_CURRENCY);
      double rate = (double) reader.read(FOREX_RATE);
      if (!currencies.containsKey(cur1)) {
        currencies.put(cur1, new TObjectDoubleHashMap<String>());
        currencies.get(cur1).put(cur1, 1.0d); // 1 for the currency itself
      }
      currencies.get(cur1).put(cur2, rate);
    }
    return currencies;
  }

  @Override
  public String getType() {
    return PnlCurrencyColumnSet.PLUGIN_KEY;
  }
}

Note that a calculated column defines the evaluate method, which returns the column value based on the underlying fields. By contrast, a calculated column set defines the generate method, which does not directly explain how to evaluate each column but generates calculated column descriptions; these descriptions, in turn, refer to calculated columns through their plugin key, and it is those calculated columns that define the evaluate method.

Calculated Members

When executing an MDX query, each cell corresponds to the combination of a (possibly filtered) location and a measure. When doing a drillthrough on a cell of an MDX query, we list the facts that contributed to that cell.

In the case of a simple measure, the facts correspond to the (possibly filtered) location, and we do not need to take the measure into account. However, things get more complicated if the measure causes a change in the location (for example, yesterday's data, or the parent's total). In that case, does the drillthrough user want to see the data for the selected location, or the (shifted) data that was used to compute the measure at the selected location? And what if several locations are used to compute the measure (percentage of parent, for example, where both the queried location and its parent are used)?

ActivePivot partially supports the use of calculated members in an MDX drillthrough query, but with restrictions so that deciding which data should be displayed in the drillthrough is straightforward. We allow simple calculated members that do not modify the location, some very specific structures for percentage of parent and percentage of total (where we assume the data that the user wants for the drillthrough are the facts contributing to the queried location, not the parent or the total), and tuples (where we assume the data the user wants for the drillthrough are the facts contributing to the tuple, rather than the queried location).

The following list shows which calculated members are supported (✓) or not supported (✗) in an MDX drillthrough query; a complete example follows the list:

  • ✓ [Measures].[custom] AS [Measures].[pnl.SUM] $ 10
    Basic operation with a measure, where $ can be one of the following operators: +, -, *, /.
  • ✓ [Measures].[custom] AS [Measures].[gamma.SUM] $ [Measures].[delta.SUM]
    Operation between two (or more) measures, where $ can be one of the following operators: +, -, *, /. Not supported in polymorphic distribution.
  • ✓ [Measures].[custom] AS [Measures].[pnl.SUM], FORE_COLOR = IIf([Measures].[toto] >= 0, RGB(0,0,0), RGB(255,255,255))
    Calculated member for dynamic formatting. Simple formatting is also supported.
  • ✓ [Measures].[custom] AS ([Measures].[pnl.SUM], [Bookings].[Desk].[ALL].[AllMember].[DeskA], [Time].[AsOfDate].[Tue Jan 27 01:00:00 CET 2015])
    The calculated member is a simple tuple. Be careful when using such a formula, as it might change the location where the drillthrough is performed.
  • ✓ [Measures].[custom] AS IIf(IsEmpty(([Bookings].[Bookings].CurrentMember.Parent, [Measures].[contributors.COUNT])), NULL, [Measures].[contributors.COUNT] / ([Bookings].[Bookings].CurrentMember.Parent, [Measures].[contributors.COUNT]))
    Percentage of parent: the location is not modified.
  • ✓ [Measures].[custom] AS IIf(IsEmpty(([Bookings].[Bookings].[ALL].[AllMember], [Measures].[contributors.COUNT])), NULL, [Measures].[contributors.COUNT] / ([Bookings].[Bookings].[ALL].[AllMember], [Measures].[contributors.COUNT]))
    Percentage of total: the location is not modified.
  • ✗ [Measures].[custom] AS ([Measures].[contributors.COUNT], [Date].[Date].[Calendar Year].[2012]) - ([Measures].[contributors.COUNT], [Date].[Date].[Calendar Year].[2011])
  • ✗ [Bookings].[Desk].[custom] AS [Bookings].[Desk].[ALL].[AllMember].[DeskA]
    Not supported because the dimension of the calculated member is not Measures.
  • ✗ [Measures].[custom] AS Count(Descendants([Bookings].[Desk].CurrentMember, [Bookings].[Desk].[Desk]), EXCLUDEEMPTY)
    Distinct count formula.
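
Putting it together, a complete drillthrough query using the first supported form above might look like the following sketch (the calculated member name and the DeskA filter are illustrative):

DRILLTHROUGH
WITH MEMBER [Measures].[custom] AS [Measures].[pnl.SUM] * 10
SELECT FROM [EquityDerivativesCube]
WHERE ([Measures].[custom], [Bookings].[Desk].[ALL].[AllMember].[DeskA])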

Limitations

  1. The keywords MAXROWS and FIRSTROW must not be used when executing an MDX drillthrough query in real time.
  2. Real time is only possible on the latest epoch of the cube.
  3. Only the declared calculated members are supported.
  4. The location and subcubes are evaluated once and only once, at the initialization of the query. See the Real-time section above.

How to Make Drillthrough Queries Compatible with Analysis Hierarchy Business Logic

The internal query engine knows which set of facts needs to be retrieved from the datastore, based on the MDX drillthrough query. However, some hierarchies, such as analysis hierarchies, are defined by customized analytical algorithms rather than the usual data introspection mechanism. This is why some projects need to tailor the conversion of an MDX drillthrough query into a datastore query by adding their own business logic to that process.

To do so, one can implement one's own ILocationInterpreter, or extend the DefaultLocationInterpreter. This object is in charge of converting drillthrough query locations (IDrillthroughQuery#getLocations()) into a datastore condition (ICondition). It can be used to change the set of facts returned by a drillthrough query, for instance to filter the result. By default (DefaultLocationInterpreter), the location interpreter ignores the coordinates of the analysis hierarchies.

An example of how to extend DefaultLocationInterpreter to make an analysis hierarchy visible in the drillthrough can be found in the sandbox:

/**
 * This {@link ILocationInterpreter} is able to create an {@link ICondition} that follows the business logic of
 * {@link TimeBucketHierarchy}. The condition will be used while performing a drillthrough, taking into account the
 * location coordinates and subcubes on the {@link TimeBucketHierarchy analysis hierarchy}.
 * <p>
 * Only trades whose date belongs to the selected buckets (this selection can be deduced from locations and
 * subcubes) will appear in the drillthrough result.
 * <p>
 * Pay attention that this <b>object is shared</b> among queries, so it must support concurrent calls.
 *
 * @author ActiveViam
 */
public class TimeBucketLocationInterpreter extends DefaultLocationInterpreter {

  /** Bucket hierarchy property. */
  public static final String BUCKET_HIERARCHY_PROPERTY = ABucketerPostProcessor.BUCKET_HIERARCHY_PROPERTY;

  /** Bucketed level property. */
  public static final String BUCKETED_LEVEL_PROPERTY = ABucketerPostProcessor.BUCKETED_LEVEL_PROPERTY;

  /** The ordinal of the bucket level. */
  public static final int BUCKET_LEVEL_ORDINAL = ABucketerPostProcessor.BUCKET_LEVEL_ORDINAL;

  /** Name of the analysis hierarchy. */
  protected final String bucketHiearchyName;

  /** Property of the bucketed level. */
  protected final String bucketedLevel;

  /** The time bucketer used. */
  protected DefaultTimeBucketer timeBucketer;

  /** The current time in milliseconds. */
  protected final long now;

  /** The bucket level info. */
  protected ILevelInfo bucketLevelInfo;

  /** The bucketed level info. */
  protected ILevelInfo bucketedLevelInfo;

  /** The bucket hierarchy. */
  protected final IHierarchy bucketHierarchy;

  /**
   * Default constructor.
   *
   * @param pivot The pivot on which the constructor is instantiated.
   * @param fieldInformations the {@link IFieldInformation} used to search for a specific field
   * @param properties some additional {@link Properties properties}
   */
  public TimeBucketLocationInterpreter(
      IActivePivotVersion pivot,
      IFieldInformation fieldInformations,
      Properties properties) {
    super(pivot, fieldInformations, properties);
    // It will create the bucket map.
    this.timeBucketer = new DefaultTimeBucketer();
    this.bucketedLevel = properties.getProperty(BUCKETED_LEVEL_PROPERTY);
    this.bucketHiearchyName = properties.getProperty(BUCKET_HIERARCHY_PROPERTY);
    this.now = System.currentTimeMillis();
    // Initialization
    this.bucketHierarchy = init(pivot);
  }

  /**
   * Initialization of the location interpreter. It is called each time a condition is created (i.e. per query).
   *
   * @param pivot the current cube
   * @return the analysis hierarchy (TimeBucketDynamic in this case)
   */
  protected IHierarchy init(IActivePivot pivot) {
    // Retrieve the bucket hierarchy.
    IHierarchy bucketHierarchy;
    try {
      bucketHierarchy = HierarchiesUtil.getHierarchy(pivot, bucketHiearchyName);
    } catch (ActiveViamException e) {
      throw new ActiveViamRuntimeException(
          "Unable to find hierarchy " + bucketHiearchyName + " in cube " + pivot.getId());
    }
    if (bucketHierarchy.getLevels().size() != BUCKET_LEVEL_ORDINAL + 1) {
      throw new ActiveViamRuntimeException(
          "The bucket hierarchy must have exactly one level: " + bucketHierarchy.getName() + ", "
              + bucketHierarchy.getLevels());
    }
    // Extract the level info
    bucketLevelInfo = bucketHierarchy.getLevels().get(BUCKET_LEVEL_ORDINAL).getLevelInfo();
    // Retrieve the bucketed level info.
    try {
      bucketedLevelInfo = HierarchiesUtil.getLevel(pivot, bucketedLevel).getLevelInfo();
    } catch (ActiveViamException e) {
      throw new ActiveViamRuntimeException(
          "Unable to find level " + bucketedLevel + " from cube " + pivot.getId());
    }
    return bucketHierarchy;
  }

  @Override
  public ICondition buildCondition(
      IActivePivot pivot,
      Collection<ILocation> locations,
      Collection<String> measures) {
    // The conditions built from the locations and taking into account the analysis hierarchy.
    ICondition locationCondition = super.buildCondition(pivot, locations, measures);
    // The condition deduced from the subcubes.
    IConstantCondition subCubesCondition = getSubCubeCondition(pivot.getContext(), bucketHierarchy);
    if (locationCondition == null && subCubesCondition == null) {
      return null;
    } else if (locationCondition != null && subCubesCondition != null) {
      return BaseConditions.And(locationCondition, subCubesCondition);
    } else if (locationCondition != null) {
      return locationCondition;
    } else {
      return subCubesCondition; // Only the subcubes condition is not null
    }
  }

  @Override
  protected List<ICondition> locationConditions(List<? extends IHierarchy> hierarchies, ILocation location) {
    // First, compute the condition using the default behavior,
    // that is to say the one ignoring the analysis hierarchies.
    List<ICondition> originalConditions = super.locationConditions(hierarchies, location);
    // Introspect the location
    if (LocationUtil.isAtOrBelowLevel(location, bucketLevelInfo)) {
      Object[] values = LocationUtil.extractValues(location, new ILevelInfo[] {bucketLevelInfo});
      if (values != null) {
        // The condition deduced from the location.
        IConstantCondition locationCondition = createBucketCondition(values);
        if (locationCondition != null) {
          originalConditions.add(locationCondition);
        }
      }
    }
    return originalConditions;
  }

  /**
   * Extracts a {@link ICondition condition} from the subcubes.
   *
   * @param context the current pivot context
   * @param bucketHierarchy the bucket hierarchy (TimeBucketDynamic)
   * @return the created condition. It can be null.
   */
  protected IConstantCondition getSubCubeCondition(IActivePivotContext context, IHierarchy bucketHierarchy) {
    // The condition deduced from the subcubes.
    IConstantCondition subCubesCondition = null;
    // Introspect the sub cubes
    ISubCubeProperties subCubes = context.get(ISubCubeProperties.class);
    if (subCubes != null) {
      ISubCubeTree subCubeTree = subCubes.getSubCubeTree(
          bucketHierarchy.getHierarchyInfo().getDimensionInfo().getName(),
          bucketHierarchy.getName());
      if (subCubeTree != null) {
        if (subCubeTree == AllExcludedSubCubeTree.getInstance()) {
          // Everything is excluded: no fact can match.
          return BaseConditions.FALSE;
        }
        int depth = subCubeTree.getRestrictionDepth();
        if (depth >= BUCKET_LEVEL_ORDINAL) {
          Set<IAxisMember> axisMembers = subCubeTree
              .retrieveAxisMembers(BUCKET_LEVEL_ORDINAL + 1, (IAxisHierarchy) bucketHierarchy);
          if (axisMembers != null) {
            if (axisMembers.size() == 0) {
              // No granted bucket: no fact can match.
              return BaseConditions.FALSE;
            }
            Object[] grantedBuckets = new Object[axisMembers.size()];
            int i = 0;
            for (IAxisMember member : axisMembers) {
              grantedBuckets[i++] = member.getObjectPath()[depth];
            }
            subCubesCondition = createBucketCondition(grantedBuckets);
          }
        }
      }
    }
    return subCubesCondition;
  }

  /**
   * Creates a {@link ICondition} from input buckets, relatively to the current time.
   *
   * @param buckets the buckets from which a {@link ICondition} will be created.
   * @return the deduced {@link ICondition}. It can be null.
   */
  protected IConstantCondition createBucketCondition(Object[] buckets) {
    NavigableSet<Object> elements = new TreeSet<Object>(bucketLevelInfo.getComparator());
    for (Object value : buckets) {
      if (value != null) {
        elements.add(value);
      }
    }
    if (elements.isEmpty()) {
      // Only null buckets were provided: no condition can be deduced.
      return null;
    }
    Object lastBucket = elements.last();
    final String fieldName = bucketedLevelInfo.getField().getProperty().getName();
    final String fieldExpression = fieldInformations.getFieldExpressions().get(fieldName);
    Long lastDateElement = timeBucketer.getBoundary(this.now, lastBucket);
    Calendar cal = Calendar.getInstance();
    cal.setTimeInMillis(lastDateElement);
    return BaseConditions.LesserOrEqual(fieldExpression, cal.getTime());
  }
}

Then an IDrillthroughExecutor that makes use of the created ILocationInterpreter must be defined.

In the sandbox, you can look at TimeBucketDrillthroughExecutor:

/**
 * A drillthrough executor that uses the {@link TimeBucketLocationInterpreter} as {@link ILocationInterpreter}.
 *
 * @author ActiveViam
 */
@QuartetExtendedPluginValue(intf = IDrillthroughExecutor.class, key = TimeBucketDrillthroughExecutor.PLUGIN_KEY)
public class TimeBucketDrillthroughExecutor extends DrillthroughExecutor {

  private static final long serialVersionUID = 5_11_0L;

  /** The plugin key. */
  @SuppressWarnings("hiding")
  public static final String PLUGIN_KEY = "TIME_BUCKET_DT_EXECUTOR";

  /**
   * Default constructor.
   *
   * @param session the current {@link IActivePivotSession} bound to an {@link IActivePivotVersion version}
   * @param properties some additional {@link Properties properties}
   */
  public TimeBucketDrillthroughExecutor(IActivePivotSession session, Properties properties) {
    super(session, properties);
  }

  @Override
  public String getType() {
    return PLUGIN_KEY;
  }

  @Override
  protected ILocationInterpreter createLocationInterpreter(
      final IActivePivotVersion pivot,
      final IFilteredFieldInformation fieldInformation,
      final Properties properties) {
    return new TimeBucketLocationInterpreter(pivot, fieldInformation, properties);
  }
}

Then the executor must be referenced when configuring the cube (see EquityDerivativesCubeConfig#configureCubeBuilder).