Migration notes
Migrate to 6.1.0-rc2
Public and Internal APIs
In this new release, we defined the boundaries between the classes/APIs that are considered public and the ones that must not be used in projects. The goals of this major change are to ease maintenance on both the ActiveViam and customer sides, and to quickly deliver new features in future bug fix releases without introducing breaking changes.
Most of the classes that can be used in your projects have been moved to new packages containing `.api.`. If, after migrating to this new version, your project uses classes from Atoti Server that do not have `.api.` in their packages, please contact us so that we can find a solution. Only the classes containing `.api.` in their packages are guaranteed to be backward-compatible between bug fix releases.

You can use the java-api-migration-tool to facilitate migrations. It automatically updates your imports with the new package and class names.
Modules update
The `composer-impl` and `composer-intf` modules have been merged into a single `composer-core` module.
The relationship between the datastore and the sources (CSV, JDBC) has been inverted: these sources now depend on the datastore, and not the other way around. This implies that if you want to use a source, you must import the corresponding artifact into your `pom.xml` separately:

- CSV source: `com.activeviam.source:csv-source`
- JDBC source: `com.activeviam.source:jdbc-source`
- Parquet source: `com.activeviam:parquet-source` (not impacted by this change, as it already had to be imported separately)
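For example, adding the CSV source now requires an explicit dependency in your `pom.xml` (the version property below is a placeholder, to adapt to your project):

```xml
<dependency>
  <groupId>com.activeviam.source</groupId>
  <artifactId>csv-source</artifactId>
  <version>${atoti.server.version}</version>
</dependency>
```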
ActiveMonitor
ActiveMonitor no longer relies on `org.apache.velocity.tools:velocity-tools-generic`. This dependency allowed for dynamic message templates. Users can elect to re-add this extension by overriding the Spring Bean returning `VelocityTemplateEngine` and setting their own extensions through `setExtensions(...)`.
Example:

```java
public class MessagingConfigurationOverride {

  @Bean
  public VelocityTemplateEngine velocityTemplateEngine() {
    final VelocityTemplateEngine engine = new VelocityTemplateEngine();
    final Map<String, Object> extensions = new HashMap<>();
    extensions.put("numberTool", new NumberTool());
    engine.setExtensions(extensions);
    return engine;
  }

  private void usage() {
    final VelocityTemplateEngine engine = new VelocityTemplateEngine();
    final String template = "${number} as ${numberTool.integer($number)}";
    final Map<String, Object> model = new HashMap<>();
    model.put("number", 1.2);
    engine.render(template, model); // Returns "1.2 as 1"
  }
}
```
ActivePivot
- The global provider can be referred to using `IAggregateProviderDefinition.GLOBAL_PROVIDER_NAME`.
- The implementations of `com.activeviam.activepivot.core.intf.api.realtime.IStream` now only support the single-parameter constructor. This parameter is `IMultiVersionActivePivot`.
- `IAdvancedAggregatesRetriever` now has a `createPointLocationBuilder` method to build the `PointLocation`.
- `getScope` in `IAdvancedAggregatesRetriever`, in `IAggregatesRetrievalResult` and in `IIterableAggregatesRetrievalResult` has been renamed `getLocation`.
- The classes `AStoredMeasureHandler`, `StoredMeasureHandler`, `StoredPrimitiveMeasureHandler`, and `MultiAnalysisHierarchyMeasureHandler` have been renamed and are now internal. Their plugin keys are now available in `IAggregatesContinuousHandler` as follows: `StoredMeasureHandler.PLUGIN_TYPE` is now `IAggregatesContinuousHandler.BASIC_HANDLER_PLUGIN_KEY`, and `MultiAnalysisHierarchyMeasureHandler.PLUGIN_TYPE` is now `IAggregatesContinuousHandler.MULTI_ANALYSIS_HIERARCHY_MEASURE_HANDLER`.
- As aggregate provider classes are now internal, information about them can be retrieved using `IActivePivotVersion#getPartialAggregateProviderNames` and `IActivePivotVersion#getAggregateProviderStatistics`.
- `TransactionStream.PLUGIN_TYPE` was moved to `IStream.ACTIVEPIVOT_PLUGIN_KEY`.
Annotations
The annotations `QuartetExtendedPlugin`, `QuartetExtendedPluginValue`, `QuartetPlugin`, `QuartetPluginValue` and `QuartetType` have been renamed to `AtotiExtendedPlugin`, `AtotiExtendedPluginValue`, `AtotiPlugin`, `AtotiPluginValue` and `AtotiType`.
AWS cloud source
Following the AWS end-of-support announcement for the Java SDK v1 effective December 31, 2025, the AWS cloud source has been updated to use the AWS Java SDK v2.
If using client-side encryption on S3, the metadata header `x-amz-unencrypted-content-length` must be updated to `x-amz-meta-unencrypted-content-length`. AWS recommends using the `x-amz-meta-` prefix for custom metadata headers, as it will not conflict with any headers they might add in the future.
Chunk allocator
The previous property `ActiveViamProperty.CHUNK_ALLOCATOR_ALLOCATOR_PROPERTY`, which used the class name and was set with `-Dactiveviam.chunkAllocatorClass`, has been replaced by a key. The new property is `ActiveViamProperty.CHUNK_ALLOCATOR_KEY_PROPERTY` and can be set with `-Dactiveviam.chunkAllocatorKey`. The core possible values of the key are `slab`, `direct`, `mmap`, `array`, `direct_buffer` and `heap_buffer`. See the `ChunkAllocators` class for more detail on these allocators.
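For example, selecting the `direct` allocator at JVM startup would look like the following (the application jar name is a placeholder):

```shell
java -Dactiveviam.chunkAllocatorKey=direct -jar my-atoti-server-app.jar
```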
Content service
Context values are no longer stored in the content service. Only KPIs and calculated members remain. Moreover, the structure of KPIs and calculated members has been changed and should be updated. A migration tool is available in the sandbox project to help with this task.
`com.activeviam.migration.api.ContentServiceMigrator`, in the sandbox application, provides a `migrate` method to perform the migration. To create an instance of this class, one must provide an instance of an `IContentService`. `ContentServiceMigrationApp`, also in the sandbox application, uses the `ContentServiceMigrator` and the content service configuration defined in the project to launch the migration.
Construction
Implementations of `IContentService` are now internal details. Instances must be created using the builders provided by `com.activeviam.tech.contentserver.storage.api.IContentService#builder()`.

To create a Content Service storing data in memory:

```java
final var contentService = IContentService.builder().inMemory().build();
```

To create a Content Service storing data in a database:

```java
final Properties hibernateProperties = new Properties();
hibernateProperties.setProperty(AvailableSettings.SHOW_SQL, "false");
hibernateProperties.setProperty(AvailableSettings.FORMAT_SQL, "false");
// ... any other desired property
final var config = new org.hibernate.cfg.Configuration().addProperties(hibernateProperties);
final var contentService =
    IContentService.builder().withPersistence().configuration(config).build();
```

To create a Content Service rooted in a certain directory of another Content Service:

```java
final var contentService = IContentService.builder().inMemory().build();
final var prefixedService = IContentService.prefixed(contentService, "root/dir");
```
Content changes
The `IActivePivotContentService` can no longer be used to store user `IContextValue`s. Multiple methods surrounding the use of context values have been removed, including `getContextValue`, `setContextValue`, `removeContextValue`, etc. Implementations of `IEntitlementProvider` should be used instead.
Copper
- The artifacts have been renamed from `activepivot-copper2-impl` and `activepivot-copper2-test` to `activepivot-copper` and `activepivot-copper-test`.
- Copper no longer supports unknown level resolution. Each level must now be fully qualified, i.e. the dimension name and the hierarchy name must be provided along with the level name. Deprecated methods related to store fields have also been removed. Here are examples of possible refactorings:

| 6.0 code | 6.1 code |
|---|---|
| `Copper.member("LEVEL")` | `Copper.member(Copper.level("DIMENSION", "HIERARCHY", "LEVEL"))` |
| `Copper.hierarchy("HIERARCHY")` | `Copper.hierarchy("DIMENSION", "HIERARCHY")` |
| `Copper.level("LEVEL")` | `Copper.level("DIMENSION", "HIERARCHY", "LEVEL")` |
| `Copper.newHierarchy(...).fromValues(...).withMembers("STORE", "FIELD")` | `Copper.newHierarchy(...).fromValues(...).withMembers(new StoreField("STORE", "FIELD"))` |
| `Copper.newHierarchy(...).fromStore(...).withLevel("LEVEL")` | `Copper.newHierarchy(...).fromStore(...).withLevel("LEVEL", FieldPath.of("LEVEL"))` |
| `Copper.newHierarchy(...).fromStore(...).withLevel("LEVEL", "PATH/TO/FIELD")` | `Copper.newHierarchy(...).fromStore(...).withLevel("LEVEL", FieldPath.of("PATH", "TO", "FIELD"))` |
| `Window.orderBy("HIERARCHY")` | `Window.orderBy(Copper.hierarchy("DIMENSION", "HIERARCHY"))` |
| `com.activeviam.copper.pivot.pp.LeadLagPostProcessor.Mode` | `com.activeviam.activepivot.copper.api.NavigationMode` |

- The method `CopperStore.withMapping(CopperLevel)` has been deleted. Please use `CopperStore.withMapping(FieldPath, CopperLevel)` instead.
- `com.activeviam.copper.ICopperContext` was moved to the `com.activeviam.activepivot.core.intf.api.copper` package.
Copper Testing API
The jUnit5 Extension exposing a Cube Tester Builder is now named `TesterBuilderExtension`. In order to use the new Extension, test classes must contain a public field of type `CubeTesterBuilderHolder`, annotated with the `@TesterBuilder` annotation. The methods previously available on the `CubeTesterBuilderExtension` are now to be used on the aforementioned `CubeTesterBuilderHolder` field:
- 6.0 code:

```java
public class TestClass {

  @RegisterExtension
  final CubeTesterBuilderExtension extension = new CubeTesterBuilderExtension();

  @Test
  void testMethod() {
    final CubeTester app = extension.setBuilder([...])
        .setData([...])
        .build([...]);
    [...]
  }
}
```

- 6.1 code:

```java
@ExtendWith(TesterBuilderExtension.class)
public class TestClass {

  @TesterBuilder
  public CubeTesterBuilderHolder applicationBuilder;

  @Test
  void testMethod() {
    final CubeTester app = applicationBuilder.setBuilder([...])
        .setData([...])
        .build([...]);
    [...]
  }
}
```
CSV source
- The signature of `com.activeviam.source.csv.api.FileSystemCsvTopicFactory.createTopic` has changed from `#createTopic(String topic, String path, ICsvParserConfiguration parserConfiguration)` to `#createTopic(String topic, ICsvParserConfiguration parserConfiguration, String... paths)`. The `path` and `parserConfiguration` arguments have been swapped, and `paths` is now a vararg, to allow passing a list of files.
- For consistency, the `parserConfiguration` and `path` arguments have also been swapped in `com.activeviam.source.csv.api.FileSystemCsvTopicFactory.createDirectoryTopic` and `com.activeviam.source.csv.api.FileSystemCsvTopicFactory.createPollingDirectoryTopic`.
Cube configuration
- The `LevelIdentifier`, `HierarchyIdentifier` and `DimensionIdentifier` classes uniquely identify levels, hierarchies and dimensions, respectively. APIs will progressively be migrated from accepting `String` arguments to accepting the corresponding identifiers. These objects were introduced to replace `String` descriptions using the `@` symbol which, because they accepted partially defined identifiers, reduced the robustness of Atoti's API.
- The signature of `INumaNodeSelectorFactory.create` has changed. The previous signature was `create(IPartitioning providerPartitioning, IPartitioning storagePartitioning)`. The new one is `create(IPartitioning providerPartitioning, IPartitioning storagePartitioning, INumaNodeSelector storageSelector)`. This helps align the NUMA configuration of the datastore and the aggregate providers.
- The old `AAnalysisHierarchy` has been removed, and `AAnalysisHierarchyV2` has been renamed to `AAnalysisHierarchy`.
- In the cube builder fluent API, the method `.withFilter(...)` was renamed to `.withFactFilter(...)`.
Cursors
The deprecated method `ICursor.rewind()` has been removed. Cursors now perform a single one-way pass over the data.
Data export service
The `IDataExportService` implementations are now internal. Please use the builder to create one:

```java
IDataExportService service = new DataExportServiceBuilder()
    .withManager(manager)
    .withDirectory(Paths.get("tmp"))
    .build();
```
Datastore transaction API
Methods taking the store id in the datastore transaction API have been removed; use the methods taking the store name instead:

- Use `IDatastoreTransactionStatistics.getStoreTransactionStatistics(String)` instead of `IDatastoreTransactionStatistics.getStoreTransactionStatistics(int)`.
- Use `ITransactionalWriter.add(String, Object...)` instead of `ITransactionalWriter.add(int, Object[])`.
- Use `ITransactionalWriter.addAll(String, Collection<Object[]>)` instead of `ITransactionalWriter.addAll(int, Collection<Object[]>)`.
- Use `ITransactionalWriter.remove(String, Object...)` instead of `ITransactionalWriter.remove(int, Object[])`.
- Use `ITransactionalWriter.removeAll(String, Collection<Object[]>)` instead of `ITransactionalWriter.removeAll(int, Collection<Object[]>)`.
- Use `ITransactionManager.startTransaction(String...)` instead of `ITransactionManager.startTransaction(int[])`.
- `ITransactionManager.addRecords(int, IRecordBlock<? extends IRecordReader>)` has been removed.
- `ITransactionManager.removeRecords(int, IRecordBlock<? extends IRecordReader>)` has been removed.
- `IDatastoreSchemaTransactionInformation.getLockedStoreIds()` has been removed.
Distribution
- `QueryCubeSync` is replaced by `DistributionTestHelper`.
- The `IClusterDefinition#EXECUTE_IN_DATA_CUBE_PROPERTY` and `IDistributedPostProcessor#EXECUTE_IN_DATA_CUBE_PROPERTY` properties are no longer supported. Use `IDistributedPostProcessor#canBeDistributed(ILocation)` to fully specify the distribution behavior of post-processor implementations.
- The static constants defined in `IDistributedMessenger` (`LOCAL_PLUGIN_KEY`, `NETTY_PLUGIN_KEY` and `DEFAULT_PLUGIN_KEY`) are now available in the `com.activeviam.activepivot.core.intf.api.description.IMessengerDefinition` class. Their values are unchanged.
Distribution Properties
- The `ActiveViamProperty` `activeviam.distribution.endpoint.suffix` is removed and replaced with `DataClusterDefinitionBuilder#withEndpointSuffix(String)`.
- The `ActiveViamProperty` `activeviam.distribution.endpoint.port` is removed and replaced with `DataClusterDefinitionBuilder#withPortNumber(int)`.
- The `ActiveViamProperty` `activeviam.distribution.endpoint.host` is removed and replaced with `DataClusterDefinitionBuilder#withAddress(String)`.
- The `ActiveViamProperty` `activeviam.distribution.endpoint.protocol` is removed and replaced with `DataClusterDefinitionBuilder#withProtocol(String)`.

All can also be set through the fluent builders. When defining the `IDataClusterDefinition` through fluent builders, the method `withUniqueIdentifierInCluster` was renamed to `withCubeIdentifierInCluster`. The suffix and port number can be set at this time, through the methods `withEndpointSuffix(String)` and `withPort(int)`. These two `ActiveViamProperties` were redundant with Spring Boot's properties `server.port` and `server.servlet.context-path`. For this reason, we are removing our properties. To avoid a loss of functionality, they can now be forwarded to the cluster definition.
Distribution Setup
The setup of the distribution does not require any explicit action when using the provided tools. The starter `com.activeviam.springboot:atoti-server-starter` does it automatically, as does the official testing framework. Check this section to look at the required approach for this setup. This means that calls like the ones below can be removed from project code bases:

```java
inject(IDistributedMessenger.class, plugin.key(), contextValueManager);
inject(IDistributedSecurityManager.class, plugin.key(), userDetailsService);
```
Drillthrough
`DrillthroughExecutor.createLocationInterpreter(Properties)` was changed to `DrillthroughExecutor.createLocationInterpreter(ISelection, Properties)`. The first parameter is the selection that feeds the cubes.
Extensions and customizations
AStoreStream
The records sent to the `AStoreStream` listener are now undictionarized. You can access them more easily:

```java
// Before
public class MyStoreStream extends AStoreStream<Set<String>, Set<String>> {

  @Override
  protected void collectAdded(
      final IRecordBlock<? extends IRecordReader> records, final Set<String> collector) {
    for (final IRecordReader r : records) {
      collector.add(
          (String)
              resultFormat
                  .getDictionary(resultFormat.getDictionarizedRecordFormat().getFieldName(0))
                  .get()
                  .read(r.readInt(0)));
    }
  }
}
```

```java
// After
public class MyStoreStream extends AStoreStream<Set<String>, Set<String>> {

  @Override
  protected void collectAdded(
      final IRecordBlock<? extends IRecordReader> records, final Set<String> collector) {
    for (final IRecordReader r : records) {
      collector.add((String) r.read(0));
    }
  }
}
```
The `resultFormat` field provided to the child classes of `AStoreStream` is now an instance of `IRecordFormat`. It is a plain (undictionarized) record format associated with the query result format:

```java
IPreparedQuery query;
this.resultFormat = query.getQueryResultFormat().getPlainRecordFormat();
```
JGroups
The JGroups version was upgraded to the latest version available (5.3). Noticeable changes in JGroups XML config files:

- The `max_bundle_size` property does not exist anymore.
- The `enable_diagnostics` property does not exist anymore.
- The `use_fork_join_pool` property does not exist anymore.
- The `pbcast.STABLE.stability_delay` property does not exist anymore.
- The `FD` protocol does not exist anymore. One may use `FD_ALL` and `FD_HOST` instead, as described here.
- In `AUTH`, token parameters should now be prefixed with `auth_token.`. Also, `MD5Token` does not exist anymore.
- The `org.jgroups.aws.s3.NATIVE_S3_PING` protocol was renamed to `aws.S3_PING`. See more info here.
Location expansion
In 6.1, location expansion, previously done using `LocationUtil#expand`, was revamped. For memory allocation reasons, the associated methods now provide users with an iterator that generates locations on the fly, instead of accumulating them all at once in a collection. The performance of the methods was also improved.

Use `LocationUtil#expandRangeLevels(ILocation, List)` to obtain an iterator over all point locations "contained" in a given range location. If you want only a "partial" expansion, i.e. expansion only along one hierarchy or only down to a certain level, use the more generic `LocationUtil#partialExpand(ILocation, List, List)`. It takes an additional argument: the collection of levels to expand. If all range levels of the location are provided, the behavior is identical to `LocationUtil#expandRangeLevels(ILocation, List)`. If none are provided, the initial location is returned. See the documentation of the method for more information.

Examples:

```java
// Time hierarchy: Year\Month\Date, Currency hierarchy: AllMember\Currency.
final Location location =
    new Location(new Object[][] {{null, null, null}, {ILevel.ALLMEMBER, null}});
final List<ILevelInfo> expansionLevels = List.of(monthLevel);
final Iterator<ILocation> iterator =
    LocationUtil.partialExpand(location, expansionLevels, hierarchies);
// Expands levels Year and Month, creating locations such as
// 2024\01\*|AllMember\*, 2024\02\*|AllMember\*, ..., 2023\01\*|AllMember\*, ...
iterator.forEachRemaining(pointLocation -> { /* ... */ });
```
Mdx
- The public API to execute MDX queries is now `MdxQueryUtil` or `IQueriesService`. `MdxUtil` can no longer be used.
- The `activeviam.mdx.result.aggresiveAxisPositionLimitCheck` property is now replaced with `MdxContext.setAggressiveAxisPositionLimitCheck` and is enabled by default.
- To change the visibility of a dimension, one must now use a `DimensionIdentifier` instead of its unique name.
- To change the visibility of a hierarchy, one must now use a `HierarchyIdentifier` instead of its unique name.
- To change the default members of a hierarchy, one must now use a `HierarchyIdentifier` instead of its unique name.
- To hide the sub-total of a level, one must now use a `LevelIdentifier` instead of its unique name.
Partitioning
- Replace your `"hashX(fieldName)"` partitioning descriptions with the new name `"moduloX(fieldName)"`.
Plugin system
The return type of `IPluginValue#key()` is now `String` instead of `Object`.
Post-Processors
Leading and trailing spaces are no longer trimmed in level descriptions. This makes it possible to perform queries against data sources with field names containing leading and/or trailing spaces. However, this change may cause failures when passing level descriptions to post-processors in string-encoded form (e.g. `"level1@hierarchy1@dimension1,level2@hierarchy2@dimension2"`). This is the case for the `leafLevels` property of `ABaseDynamicAggregationPostProcessor` and its subclasses. Make sure that you have no extra spaces between names and separators (`'@'` and `','`).

Examples:

- `" level @ hierarchy "` is now parsed as `{" level ", " hierarchy ", null}` and should be rewritten to `"level@hierarchy"`.
- `"L1@H1, L2@H2"` is now parsed as `[{"L1", "H1", null}, {" L2", "H2", null}]` and should be rewritten to `"L1@H1,L2@H2"`.
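A plain-Java sketch of why the extra spaces now matter: splitting a string-encoded description on `@` no longer trims the parts, so spaced names no longer match the actual level and hierarchy names. The `parse` helper below is hypothetical and for illustration only; it is not the actual Atoti parser.

```java
import java.util.Arrays;

public class LevelDescriptionSpaces {

  /** Hypothetical parser: splits a level description on '@' without trimming, as 6.1 does. */
  public static String[] parse(String description) {
    return description.split("@");
  }

  public static void main(String[] args) {
    // The spaces are preserved, so the parts no longer match "level" / "hierarchy".
    System.out.println(Arrays.toString(parse(" level @ hierarchy ")));
    // Without extra spaces, the expected names are recovered.
    System.out.println(Arrays.toString(parse("level@hierarchy")));
  }
}
```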
Distribution
Queries within a distributed setup are now distributed from the leaves up to the first non-distributed post-processor of the operations chain. This changes from the previous behavior, where queries were distributed from the leaves up to the last distributed post-processor of the operations chain.
Prefetchers
`IPrefetcher.name()` is deprecated. Prefer passing the name in the constructor of the prefetcher.

Examples:

| Before | After |
|---|---|
| `var prefetcher = IPrefetcher.name(name, new APrefetcher(measuresProvider))` | `var prefetcher = new APrefetcher(name, measuresProvider)` |

- The constructor `ALeafLevelsPrefetcher(IActivePivot, IMeasuresProvider, String...)` was removed. Use `ALeafLevelsPrefetcher(IActivePivot, IMeasuresProvider, Collection<String>)` instead.
- The constructor `ParentPrefetcher(IActivePivot, IMeasuresProvider, Set<String>, String)` was removed. Use `ParentPrefetcher(IActivePivot, IMeasuresProvider, Set<String>, IHierarchyInfo)` instead, with the help of `HierarchiesUtil.getHierarchy(IActivePivot, String)`.
ALocationShiftPostProcessor
The following fields, methods and constants have been renamed in `ALocationShiftPostProcessor`:

| Before | After |
|---|---|
| `EVALUATE_FOR_MEASURES_PROPERTY` | `HELPER_MEASURES_PROPERTY` |
| `UNDERLYING_PREFETCHER_NAME` | `HELPER_PREFETCHER_NAME` |
| `targetMeasures` | `helperMeasures` |
| `getTargetMeasures(...)` | `getHelperMeasures(...)` |
| `createUnderlyingMeasuresPrefetcher()` | `createHelperPrefetcher()` |
Registry
Registry initialization is now entirely done through a single entry point: `Registry#initialize(RegistryContributions)`. For instance,

```java
public static void main(String[] args) {
  Registry.setContributionProvider(new ClasspathContributionProvider(...));
}
```

should be replaced with

```java
public static void main(String[] args) {
  Registry.initialize(RegistryContributions.builder().packagesToScan(...).build());
}
```
Retrieval naming convention
The naming convention for external retrievals has changed: an external (database) retrieval is now called a database retrieval. This change prevents any confusion between database retrievals, which access an external database through the DirectQuery feature, and retrievals associated with Copper joins, which were previously called external. These changes are applied to the related classes as well as to the REST API. The ActivePivot REST API version is changed from `8` to `9x1`.
Schema selection
`ISelectionDescriptionBuilder.withField(String name)` is no longer deprecated. However, the `name` argument must now be a field name only, not a field expression.
Schema rebuild
`ScheduledActivePivotSchemaRebuilder` and `PeriodicActivePivotSchemaRebuilder` have been removed. The scheduling must now be handled in your project. The new method to call is `IActivePivotManager.rebuild(String... pivotsId)`.
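Since the scheduling is now the project's responsibility, a plain `ScheduledExecutorService` is one way to trigger periodic rebuilds. The sketch below is a hypothetical, JDK-only illustration; in a real project the `rebuildTask` would wrap a call such as `manager.rebuild("MyCube")` on your `IActivePivotManager` (both names are assumptions).

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RebuildScheduler {

  /**
   * Runs the given rebuild task immediately, then repeatedly with the given period.
   * The caller is responsible for shutting down the returned scheduler.
   */
  public static ScheduledExecutorService schedulePeriodicRebuild(
      Runnable rebuildTask, long period, TimeUnit unit) {
    final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    // e.g. rebuildTask = () -> manager.rebuild("MyCube");
    scheduler.scheduleAtFixedRate(rebuildTask, 0, period, unit);
    return scheduler;
  }
}
```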
Spring
- The property `activeviam.jwt.generate` changed to `activeviam.jwt.enabled`.
- The property `activeviam.jwt.check.user_details` changed to `activeviam.jwt.check_user_details` (the last `.` changed to a `_`).
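Assuming these properties are set in a Spring `application.properties` file, the rename would look like this:

```properties
# Before (6.0)
# activeviam.jwt.generate=true
# activeviam.jwt.check.user_details=true

# After (6.1)
activeviam.jwt.enabled=true
activeviam.jwt.check_user_details=true
```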
Defining an application with a Datastore and an ActivePivotManager
Atoti Server 6.0 used to offer the configuration class ActivePivotWithDatastoreConfig
creating fully-defined instances of Datastore and ActivePivotManager from descriptions. Historically, this alleviated the pain of finding the proper methods to invoke, their arguments and their orders to build these elements. With the introduction of application builders in 6.0, this pain is now gone, hence the need for the class.
Building an Atoti Server application operating on top of a Datastore from descriptions can be done with the following lines of code:
```java
// This assumes that `datastoreDescription` is defined as an `IDatastoreSchemaDescription`
// and that `managerDescription` is defined as an `IManagerDescription`.
// In this example, no permission manager is used to control access to branches.
StartBuilding.application()
    .withDatastore(datastoreDescription)
    .withManager(managerDescription)
    .withoutBranchRestrictions()
    .build();
```
From this, by exposing the descriptions as beans, it is possible to write a generic configuration class taking these elements to build and expose the applicative ActivePivotManager and its database. The following snippet shows how to do so:
```java
@Configuration
@RequiredArgsConstructor
public class ActivePivotWithDatastoreConfig implements IDatastoreConfig, IActivePivotConfig {

  private final IActivePivotManagerDescriptionConfig apManagerConfig;
  private final IDatastoreSchemaDescriptionConfig datastoreDescriptionConfig;
  private final IActivePivotBranchPermissionsManagerConfig branchPermissionsManagerConfig;

  @Bean
  protected ApplicationWithDatastore applicationWithDatastore() {
    return StartBuilding.application()
        .withDatastore(this.datastoreDescriptionConfig.datastoreSchemaDescription())
        .withManager(this.apManagerConfig.managerDescription())
        .withEpochPolicy(this.apManagerConfig.epochManagementPolicy())
        .withBranchPermissionsManager(
            this.branchPermissionsManagerConfig.branchPermissionsManager())
        .build();
  }

  @Bean
  @Override
  public IActivePivotManager activePivotManager() {
    return applicationWithDatastore().getManager();
  }

  @Bean
  @Override
  public IDatastore database() {
    return applicationWithDatastore().getDatastore();
  }
}
```
Note that this code comes from the Sandbox project.
Defining the Content Service as a main Bean
The Content Service used to be defined as an internal detail of the `ActivePivotContentService`, before being awkwardly extracted from it to be published as a Bean. This was done automatically in `com.qfs.server.cfg.content.IActivePivotContentServiceConfig` for users. With 6.1, the Content Service is promoted to a central Bean created by users, through the new builders of `com.activeviam.tech.contentserver.storage.api.ContentServiceBuilders`. To properly build the `ActivePivotContentService` from this central Bean, use the method `com.activeviam.activepivot.server.spring.api.content.ActivePivotContentServiceBuilder#with(IContentService)` from the builders.
The code should change as below:
```java
// Before
public class ContentServiceConfig implements IActivePivotContentServiceConfig {

  @Override
  @Bean
  public IActivePivotContentService activePivotContentService() {
    return new ActivePivotContentServiceBuilder()
        .withoutPersistence()
        .needInitialization("ROLE_ADMIN", "ROLE_ADMIN")
        .build();
  }

  @Override
  @Bean
  public IContentService contentService() {
    // Let's not extract the Content Service from another component anymore
    return activePivotContentService().getContentService().getUnderlying();
  }
}
```

```java
// After
public class ContentServiceConfig implements IActivePivotContentServiceConfig {

  @Override
  @Bean
  public IContentService contentService() {
    // Service defined here as a main Bean
    return ContentServiceBuilders.inMemory().build();
  }

  @Override
  @Bean
  public IActivePivotContentService activePivotContentService() {
    return new ActivePivotContentServiceBuilder()
        .with(contentService())
        .needInitialization("ROLE_ADMIN", "ROLE_ADMIN")
        .build();
  }
}
```
Defining a branch permission manager using the Content Service
The classes `FullAccessBranchPermissionsManagerConfig` and `ContentServiceBranchPermissionsManager` have been removed. A new builder is available to create the permission manager using the Content Service. This builder can be used to create a config class fitted to your needs. The following snippet illustrates how to do so:
```java
/**
 * Sandbox configuration class creating the manager of branch permissions.
 *
 * @author ActiveViam
 */
@Configuration
@RequiredArgsConstructor
public class ActivePivotBranchPermissionsManagerConfig
    implements IActivePivotBranchPermissionsManagerConfig {

  private final IContentServiceConfig contentServiceConfig;

  @Bean
  @Override
  public IBranchPermissionsManager branchPermissionsManager() {
    final CachedBranchPermissionsManager manager =
        new CachedBranchPermissionsManager(
            ContentServiceBranchPermissionsManagerBuilder.create()
                .contentService(this.contentServiceConfig.contentService())
                .allowedBranchCreators(Set.of(ROLE_ADMIN, ROLE_USER))
                .defaultBranchOwners(Set.of(ROLE_ADMIN))
                .build());
    manager.setBranchPermissions(
        IEpoch.MASTER_BRANCH_NAME,
        new BranchPermissions(
            Collections.singleton(ROLE_ADMIN), IBranchPermissions.ALL_USERS_ALLOWED));
    return manager;
  }
}
```
Other changes: Moved methods
| Before | After |
|---|---|
| `com.qfs.QfsWebUtils.url(java.lang.String...)` | `com.activeviam.web.core.api.IUrlBuilder.url(java.lang.String...)` |
| `com.qfs.QfsWebUtils.url(boolean, java.lang.String...)` | `com.activeviam.web.core.api.IUrlBuilder.url(boolean, java.lang.String...)` |
| `com.qfs.QfsWebUtils.normalize(java.lang.String, boolean)` | `com.activeviam.web.core.api.IUrlBuilder.url(boolean, java.lang.String...)` |
Migrate to 6.0
Starting from 6.0, many classes visible from Atoti jars are now considered internal classes. These internal classes are located in packages containing `internal` or `private_`.

Examples:

- `com.activeviam.database.sql.internal.schema.ISqlFieldDescription`, indicated by the "..sql.internal.schema..."
- `com.activeviam.database.bigquery.private_.BigqueryDataVersion`, indicated by the "..bigquery.private_..."

These classes may be changed at any point in time, without regard for backward compatibility, without warning, without mention in the changelog, and without migration notes. Users must not use them. This should never be an issue, as they are never exposed to users: the APIs always return their interfaces or wrap them. They are only visible through debuggers.
The main change in this version is the introduction of the Database API. This API makes it possible to work on top of the Datastore, but also on top of external databases. The Datastore now implements the new Database API, and the core components also use it rather than the Datastore API.
Datastore
| Operation | Before | Now |
|---|---|---|
| Compile a record query | `datastore.getLatestVersion().getQueryManager().compile(query)` | `datastore.getQueryManager().listQuery()` |
| Compile a GetByKey query | `DatastoreQueryHelper.createGetByKeyQuery()` | `datastore.getQueryManager().getByKeyQuery()` |
| Execute a query | `datastore.getHead(branch).getQueryRunner().forQuery(query)` | `datastore.getHead(branch).getQueryRunner().xxxQuery()` |
| Get the metadata | `datastore.getQueryMetadata().getMetadata()` | `datastore.getEntityResolver()` |
- Field dictionaries are no longer accessible (
datastore.getQueryMetadata().getDictionaries()
before) but you should never need them. Indeed, the new way to retrieve a field's default value (datastore.getEntityResolver().findTable(tableName).getField(fieldName).getType().getDefaultValue()
) always returns an un-dictionarized value, as opposed as before when usingrecordFormat.getDefault(fieldIndex)
that returns the dictionarized value when the field is dictionarized. - The
ICursor
interface now implementsAutoCloseable
. If cursors are used in your project, do not forget to close them, as they could leak database connections. - Methods to create conditions in
BaseConditions
now require arguments of the safe typeFieldPath
instead of legacy field expression strings. These methods have also been renamed to start with a lower case as they are not classes:Equal
is nowequal
,And
is nowand
... Note thatLesser
andLesserOrEqual
have been renamedless
andlessOrEqual
.True()
andFalse()
have been removed and can be replaced byBaseConditions.TRUE
andBaseConditions.FALSE
. SelectionField
is replaced byAliasedField
. The equivalent ofnew SelectionField("alias", "reference/field")
is nowAliasedField.create("alias", FieldPath.of("reference", "field"))
.- The constructors of
NonNullableFieldsPostProcessor
andDictionarizeFieldsPostProcessor
now useReachableField
instead ofString
. - The constructor of
StoreDescriptionBuilder
is now protected. UseStoreDescription.builder()
instead. - In
StoreDescriptionBuilder
, all operations on fields should be done before defining the partitioning. For example, you cannot calladdField()
afterwithPartitioning()
. - The return type of
DatastoreSchemaDescriptionUtil.createPath(String...)
has changed. See the javadoc. DynamicConditions
was removed. Its methods are inBaseConditions
. Sub-interfaces ofICondition
and existing implementations have been moved and are now internal classes.ICustomCondition
remains a public class, though it has been moved intocom.activeviam.datastore.condition
package.- The API to create Dynamic conditions has changed. It is now mandatory to name a parameter using
as(String)
method. Parameter indexes are no longer supported. - Other changes:
Before | Now |
---|---|
datastoreVersion.getQueryRunner() | datastoreVersion.getDatastoreQueryRunner() |
datastoreVersion.getQueryManager() | datastoreVersion.getDatastoreQueryManager() |
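As an example, a condition built with the legacy API migrates roughly as follows. This is a sketch assembled from the renames above; the exact factory signatures and the query-building code around it may differ.

```java
// Before (5.x): factory methods named like classes, string field expressions
// final ICondition condition = BaseConditions.And(
//     BaseConditions.Equal("reference/field", someValue),
//     BaseConditions.LesserOrEqual("quantity", 10));

// After (6.1): lower-case factories taking type-safe FieldPath arguments
final ICondition condition = BaseConditions.and(
    BaseConditions.equal(FieldPath.of("reference", "field"), someValue),
    BaseConditions.lessOrEqual(FieldPath.of("quantity"), 10));
```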
Spring
The interface IActivePivotManagerDescriptionConfig
no longer contains the datastore description.
The IActivePivotManagerDescriptionConfig#userManagerDescription()
was renamed managerDescription()
. userSchemaDescription()
was moved to and renamed IDatastoreSchemaDescriptionConfig#datastoreSchemaDescription()
.
One now has to define an IDatastoreSchemaDescriptionConfig
that will contain the datastore description.
ActivePivotConfig
was renamedActivePivotWithDatastoreConfig
.IDatastoreConfig.datastore()
was renamedIDatastoreConfig.database()
.
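In practice, the split looks something like the sketch below. Only the interface and method names come from the notes above; the annotations, class names and method bodies are illustrative.

```java
// The datastore description now lives in its own configuration...
@Configuration
public class MyDatastoreSchemaConfig implements IDatastoreSchemaDescriptionConfig {
    @Override
    public IDatastoreSchemaDescription datastoreSchemaDescription() {
        return /* your schema description */ null;
    }
}

// ...while the manager description config only describes the manager.
@Configuration
public class MyManagerConfig implements IActivePivotManagerDescriptionConfig {
    @Override
    public IActivePivotManagerDescription managerDescription() {
        return /* your manager description */ null;
    }
}
```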
CXF & SOAP
Both SOAP services and CXF dependencies were completely dropped.
Following this, all remote services require an AAuthenticator
instead of a ClientPool
.
Here is a list of replacements for each dropped service from com.qfs.webservices
:
Webservice interface name | Replacement |
---|---|
IIdGenerator, IStreamingService, ILongPollingService | Atoti Websocket API |
IQueriesService | com.qfs.pivot.rest.query.IQueriesRestService |
IAdministrationService | com.qfs.pivot.rest.discovery.IDiscoveryRestService |
IConfigurationService | com.qfs.pivot.rest.configuration.IConfigurationRestService |
ILicensingService | No replacement |
ActivePivot
Aggregation module changes
Cloneable aggregated values
The API to use when writing aggregation bindings with cloneable aggregated values has been rewritten to avoid user mistakes.
Previously, users achieved such a task by extending ABasicAggregationBindingCloneable
or one of its sub-classes - AAggregationBindingCloneable
, AVectorAggregationBinding
, SumVectorAggregationBinding
, ... It often required carefully writing aggregated values using #write(int, Object)
in addition to an obscure flag using #writeBoolean
. Reads and updates also required reading such flags and values, so as not to mutate a value shared across the system.
The new API focuses on user ease. When writing a value, implementors must choose between #writeReadOnlyAggregate(int, Object)
, preserving the original value as much as possible, or #writeWritableAggregate(int, Object)
, informing this class that the provided value can be mutated.
Similarly, when reading an existing aggregate, implementors can access the possibly untouched value through #readReadOnlyAggregate(int)
or automatically access a writable value using #readWritableAggregate(int)
.
At any point, implementors are free to call the write methods to replace the current aggregates.
With the described changes, the aggregation logic changes to something like this (in pseudo-code):
void copy(int from, int to){
// The aggregate is read-only because we pulled it directly from the input reader
// We avoid creating a copy of the value that may not be necessary
- this.output.write(to, this.input.read(from))
- this.output.writeBoolean(to, false);
+ writeReadOnlyAggregate(to, readInput(from));
}
void aggregate(int from, int to) {
- IVector aggregate = this.output.read(to);
- if (this.output.readBoolean(to)) {
- aggregate = cloneAggregate(aggregate);
- this.output.write(to, aggregate);
- this.output.writeBoolean(to, true);
- }
+ IVector aggregate = readWritableAggregate(to);
// We can safely modify the content of `aggregate`
aggregate.plus(readInput(from));
}
When implementing an aggregation that uses AVectorAggregationBinding
to bind columns, ensure that the IChunkFactory
creates marked chunks, for instance by writing:
public class AggregationVector extends AAggregation<?> {
@Override
public IChunkFactory<?> createChunkFactory(
final boolean isTransient,
final IAllocationSettings allocationSettings) {
return ChunkFactories
.chunkMarkedVectorFactory(getAggregatedDataType(), isTransient, allocationSettings);
}
}
The Aggregation Function documentation provides an up-to-date guide for writing this type of aggregation function.
Deprecated classes
The following abstract classes are deprecated and will be removed in a future version:
AGenericBaseAggregationFunction
AGenericAggregationFunction
AGenericVectorAggregationFunction
andAVectorAggregationFunction
(See example)
Implementers must now write their aggregation functions using the standard unique API. The Aggregation Function documentation provides an up-to-date guide for writing an aggregation function.
Generalization of Aggregation Functions
The client facing (through the Registry) IAggregationFunction
has been extensively modified.
@QuartetPluginValue(intf = IAggregationFunction.class)
and @QuartetPluginValue(intf = IUserDefinedAggregationFunction.class)
must be changed to @QuartetPluginValue(intf = IGenericAggregationFunction.class)
.
A new IGenericAggregationFunction
has been introduced, which allows you to create an aggregation based on an arbitrary number of data sources.
IAggregationFunction
represents a specialization of this new interface, for the creation of aggregations based on a single source of data.
We recommend clients implement AAggregationFunction
when implementing aggregation functions based on a single source of data.
The interface IGenericAggregationFunction
contains the following methods:
int getAggregatedType(int[] sourceTypes)
, which deduces the type of the aggregated values based on the types of the sources of data. It must throw if the given types are not supported. IGenericAggregation createAggregation(List<String> sourceIdentifiers, int[] sourceTypes)
, which creates an aggregation specialized for the given sources of data.IGenericAggregationFunction withRemovalSupport()
, which specializes the current aggregation function with capabilities to disaggregate from the resulting column.
The interface IAggregationFunction
has lost the responsibility to create a chunk factory, which is now the prerogative of the IAggregation
interface.
IMultiSourceAggregationFunction
was also introduced, with an associated base abstract class AMultiSourceAggregationFunction
that mirrors AAggregationFunction
.
IMultiSourceAggregation
was also introduced, with an associated base abstract class AMultiSourceAggregation
that mirrors AAggregation
.
We recommend clients implement AAggregation
, AMultiSourceAggregation
or AUserDefinedAggregation
instead of implementing the interfaces, as they already give default implementations for most of the methods.
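Concretely, re-registering an existing single-source aggregation function mostly amounts to swapping the annotation target and the base class. This is a sketch; the class body is elided and the required methods are described in the Aggregation Function documentation.

```java
// Before (5.x)
// @QuartetPluginValue(intf = IAggregationFunction.class)
// public class MyAggregationFunction extends ... { /* ... */ }

// After (6.1): register against the generic interface and extend the
// recommended abstract class for single-source aggregation functions.
@QuartetPluginValue(intf = IGenericAggregationFunction.class)
public class MyAggregationFunction extends AAggregationFunction {
    // implementation unchanged where possible
}
```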
Post Processors
- Legacy post processors have been replaced by their corresponding new implementation. This means all
V2
post processors introduced in 5.11 have become the base post processors to use. Migration notes are available in the post processor migration section. - Post Processors no longer need to implement
Serializable
.
Builders
com.activeviam.pivot.builders.StartBuilding
was removed and is replaced bycom.activeviam.builders.StartBuilding
.com.activeviam.builders.StartBuilding.entitlementsProvider()
was moved tocom.activeviam.pivot.security.builders.EntitlementsProvider.builder()
.The
withProperty(String, String)
methods on the dimension, hierarchy and level builders were respectively renamed withDimensionProperty
,withHierarchyProperty
andwithLevelProperty
to avoid ambiguities.
Copper
Parts of the Copper API causing boxing were removed. Migration notes are available in the Copper migration section.
CopperStore.field(String)
now takes a field name instead of an expression. If you used an expression, you must now useCopperStore.field(FieldPath)
.CopperStore.withMapping(String, CopperLevel)
now takes a field name instead of an expression. If you used an expression, you must now useCopperStore.withMapping(FieldPath, CopperLevel)
. Same forCopperStore.withMapping(String, String)
.
Testing API
The CubeTesterBuilderExtension
provided in the Copper Testing API now requires a Supplier<CubeTesterBuilder>
as first argument.
The migration can be done simply by replacing the provided testerBuilder
instance with a ()->testerBuilder
supplier in your test classes.
Partial providers
- The default implementation of
IPartialProviderSelector.select
has been removed, any custom implementation ofIPartialProviderSelector
must now implement this method.
Datastore Service
This service is now called the Database Service. The classes and interfaces related to the Datastore Service have been renamed. datastore
was replaced by database
to emphasize that this service works on any IDatabase
.
Here are some examples:
IDatastoreServiceConfiguration
was renamedIDatabaseServiceConfiguration
.DatastoreRestServicesConfig
was renamedDatabaseRestServicesConfig
.DatastoreService
was renamedReadOnlyDatabaseService
.DatastoreRestService
was renamedDatabaseRestServiceController
.JsonDatastoreQueryResult
was renamedJsonDatabaseQueryResult
.SecurityFromCubeToDatastoreFilterHook
was removed.
The public vocabulary has also changed, with "database"/"table"/"join" being used instead of "datastore"/"store"/"reference". Examples of impacted classes:
IDatastoreFieldDescription
was renamedIDatabaseFieldDescription
IDatastoreReferenceDescription
was renamedIDatabaseJoinDescription
JsonDatastoreReference
was renamedJsonDatabaseJoin
IStoreSecurity
was renamedITableSecurity
IStorePermission
was renamedITablePermissions
- the endpoint
datastore/data/stores
was renameddatabase/data/tables
- the endpoint
datastore/data/storeNames
was renameddatabase/data/tableNames
- the endpoint
datastore/discovery/references
was renameddatabase/discovery/joins
Analysis Hierarchy
Constructor of AAnalysisHierarchy
now requires the list of ILevelInfo
of the corresponding hierarchy levels.
In any case, implementations of Java-based analysis hierarchies should be migrated to AAnalysisHierarchyV2
. Either the description
of the analysis must be given to the cube description, or an IAnalysisHierarchyDescriptionProvider
plugin value should be created to go along with the custom Analysis Hierarchy. This provider description must have the same plugin key as the Analysis Hierarchy.
Introspecting Analysis Hierarchies
In order to define the introspecting levels of an analysis hierarchy, one must now set
the selectionField
of the levels of the created AnalysisHierarchyDescription according to the needs of the
hierarchy.
Analysis hierarchies with at least one level with a specified selectionField
in their description
are now implicitly introspecting hierarchies and must implement
AMultiVersionAnalysisHierarchy#processIntrospectedMember([...])
to define their introspection logic.
Limits regarding the ordering between introspecting and non-introspecting levels for an Analysis Hierarchy are unchanged (impossible to define an introspecting level deeper than a non-introspecting one).
Hierarchy descriptions
Hierarchy descriptions now must use the IHierarchyDescription
interface. Depending on the type of described hierarchy, the sub-interface to use will change:
- "Standard" hierarchy descriptions (e.g. hierarchies whose levels are filled by levels of the Selection) use the
IAxisHierarchyDescription
interface which remains unchanged. - All Analysis hierarchy descriptions now use the
IAnalysisHierarchyDescription
interface and now must explicitly contain information relative to their levels. - The fluent builder path for Analysis hierarchies has changed, from
withHierarchy(hierarchyName)
.withPluginKey(pluginKey)
.withProperty(propertyKey, propertyValue)
to :
withAnalysisHierarchy(IAnalysisHierarchyDescription)
or
withAnalysisHierarchy(hierarchyName, pluginKey)
.withCustomizations(hierarchyDescription -> {
hierarchyDescription.putProperty(propertyKey, propertyValue);
})
The second option allows for customization of the created Analysis Hierarchy's description. This description will be created by the associated IAnalysisHierarchyDescriptionProvider
.
AAnalysisHierarchy.buildDiscriminatorPaths()
was removed. It is replaced by a more efficient method:buildDiscriminatorPathsIterator(IDatabaseVersion)
.
- AxisLevelDescription(String levelName)
was removed. A selectionField
(formerly propertyName
) must now be specified in the constructor to avoid unexpected behavior.
- The property
IAnalysisHierarchy#LEVEL_TYPES_PROPERTY
used to specify the types of a Java-defined Analysis Hierarchy has been moved toIAxisLevelDescription#LEVEL_TYPE_PROPERTY
, and is to be defined in the properties of the descriptions of the levels of the hierarchy. The expected content is now a single entry matching the type of the level. See the copper documentation.
Aggregation Procedure
DatastorePrefetchRequest
is replaced by DatabasePrefetchRequest
. The field expressions are now represented with the FieldPath
class instead of String
.
Memory Allocation Monitoring
We no longer report the off-heap memory allocated through Java's internal MBeans.
Instead, you can use the newly created MBeans located in DirectMemoryMonitoring
for the information you wanted regarding memory consumption from ActivePivot.
Additionally, MAC can be used for deeper investigations.
Other changes
StartBuilding.selection(IDatastoreSchemaDescription)
was removed. Use StartBuilding.selection(description.asDatabaseSchema())
instead.ADatastoreVersionAwareProperty
was renamedADatabaseVersionAwareProperty
.com.qfs.store.IDatastoreVersionAware
was renamedcom.activeviam.database.api.IDatabaseVersionAware
.IDatastoreSelectionSession
was renamedISchemaSelectionSession
.IFilteredDatastoreSelectionSession
was renamedIActivePivotSchemaSelectionSession
.IDatastoreAware
was changed and will be replaced byIDatabaseAware
.AStoreStream.dictionaries
is replaced withAStoreStream.resultFormat.getDictionary(...)
.- All ActivePivot named threads are now prefixed by
activeviam
and suffixed byworker-i
wherei
is the thread count number. - Creating a
QueriesTimeLimit
with a Duration lower than 1 second will now throw an IllegalArgumentException
.
Content Server Rest API
- A new REST Bulk API for the Content Server was developed; the old API for bulk operations was removed.
It is no longer possible to use the POST method with the query parameters
moveAll
, putAll
or deleteAll
, because their mappings have been removed. Also, the query parameter metadataOnly
is replaced by a parameter includeContent
that is the logical opposite of metadataOnly
. Do not forget to flip the value of this parameter from True
to False
and vice versa when renaming.
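The inversion is easy to get wrong when scripting the migration; a tiny helper (hypothetical name, not part of the product) makes the relationship explicit:

```java
public class BulkParamMigration {

    /** Converts a legacy metadataOnly value into the new includeContent value. */
    public static boolean includeContent(boolean metadataOnly) {
        return !metadataOnly;
    }

    /** Rewrites a legacy query-string fragment such as "metadataOnly=true". */
    public static String rewrite(String legacyParam) {
        boolean metadataOnly = Boolean.parseBoolean(legacyParam.split("=")[1]);
        return "includeContent=" + includeContent(metadataOnly);
    }
}
```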
CSV Source
- The class
CSVSourceConfiguration
is now created throughCSVSourceConfigurationBuilder
, and is given the methodfileComparator
that previously belonged to the classCSVSource
.
Rest Services
All our Rest endpoints were migrated to Spring MVC. The migration implies a few changes:
Removal of the
JsonResponse
wrapper. The data is now directly in the body of the response and the status in its HTTP status code. Errors are still returned as JsonError with the full stack trace. Every REST endpoint is now prefixed by the activeviam namespace. For example: localhost:XXXX/activeviam/content/rest/v7/
The
datastore
endpoint has been changed todatabase
. The Content Server REST service now handles the path of the requested or given file through query parameters, not path variables. The parameter for import, permissions, etc. must now be
true
and doesn't allow void
. ActiveMonitor's monitor site separator is now an underscore, not a slash.
ActiveMonitor
IParameterAwareActivePivotManagerDescriptionConfig
was renamedIParameterAwareDatastoreSchemaDescriptionConfig
. You will need to add a Spring configuration implementingIActivePivotManagerDescription
in your project.
Properties
- The properties used for the external configuration of ActivePivot and ActiveMonitor applications in
ActiveViamProperty
andJwtConfig
have been changed to standardize their naming conventions. Properties are now either: - Prefixed by "activeviam"
- Prefixed by an ActiveViam technical product name such as "activepivot" or "activemonitor"
List of the affected properties with their new names:
- In
JwtConfig
:
Before | Now |
---|---|
qfs.jwt.expiration | activeviam.jwt.expiration |
qfs.jwt.key.private | activeviam.jwt.key.private |
qfs.jwt.key.public | activeviam.jwt.key.public |
qfs.jwt.claim_key.authorities | activeviam.jwt.claim_key.authorities |
qfs.jwt.claim_key.principal | activeviam.jwt.claim_key.principal |
qfs.jwt.generate | activeviam.jwt.generate |
qfs.jwt.check.user_details | activeviam.jwt.check.user_details |
- In
ActiveViamProperty
:
Before | Now |
---|---|
contentServer.remote.import.timeout | activeviam.contentServer.remote.import.timeout |
qfs.handler.stored.maxSize | activeviam.handler.stored.maxSize |
qfs.mmap.threshold | activeviam.mmap.threshold |
qfs.mmap.tracking | activeviam.mmap.tracking |
qfs.compression.chunkset.disable | activeviam.compression.chunkset.disable |
qfs.stream.configuration.gc.delay | activeviam.stream.configuration.gc.delay |
qfs.mdx.formula.callStackDepthLimit | activeviam.mdx.formula.callStackDepthLimit |
completer.log.suppressed | activeviam.completer.log.suppressed |
qfs.contentservice.root | activeviam.contentservice.root |
nativeMemoryCacheRatio | activeviam.nativeMemoryCacheRatio |
qfs.slab.memory.allocation.strategy | activeviam.slab.memory.allocation.strategy |
defaultChunkSize | activeviam.defaultChunkSize |
minimalChunkSize | activeviam.minimalChunkSize |
chunkGarbageCollectionFactor | activeviam.chunkGarbageCollectionFactor |
chunkAllocatorClass | activeviam.chunkAllocatorClass |
qfs.vectors.defaultBlockSize | activeviam.vectors.defaultBlockSize |
qfs.vectors.garbageCollectionFactor | activeviam.vectors.garbageCollectionFactor |
qfs.vectors.allocator.pool.size | activeviam.vectors.allocator.pool.size |
qfs.vectors.swap.directory.numberFiles | activeviam.vectors.swap.directory.numberFiles |
vectorDelimiter | activeviam.vectorDelimiter |
queries.continuous.primitive.fullFork | activeviam.queries.continuous.primitive.fullFork |
postprocessor.partitionOnRangeLevelsByDefault | activeviam.postprocessor.partitionOnRangeLevelsByDefault |
qfs.distribution.gossip.router.port | activeviam.distribution.gossip.router.port |
qfs.distribution.gossip.router.enable | activeviam.distribution.gossip.router.enable |
logicalAddress | activeviam.logicalAddress |
protocolPath | activeviam.protocolPath |
qfs.distribution.netty.externalAddress | activeviam.distribution.netty.externalAddress |
qfs.distribution.netty.portRange | activeviam.distribution.netty.portRange |
nettyMessageMaxSize | activeviam.distribution.nettyMessageMaxSize |
qfs.distribution.netty.bindAddress | activeviam.distribution.netty.bindAddress |
qfs.streaming.MDX.attemptLimit | activeviam.streaming.MDX.attemptLimit |
qfs.streaming.GET_AGGREGATES.attemptLimit | activeviam.streaming.GET_AGGREGATES.attemptLimit |
xmlaDiscoveryApproxCardinality | activeviam.xmlaDiscoveryApproxCardinality |
qfs.pool.policy | activeviam.pool.policy |
qfs.pool.size | activeviam.pool.size |
qfs.pool.scheduler.size | activeviam.pool.scheduler.size |
quartet.property.separator | activeviam.property.separator |
qfs.collectible.query.type | activeviam.collectible.query.type |
qfs.mmap.sampling.depth | activeviam.mmap.sampling.depth |
qfs.mmap.sampling.start | activeviam.mmap.sampling.start |
qfs.mmap.sampling.percent | activeviam.mmap.sampling.percent |
qfs.branch.master | activeviam.branch.master |
qfs.pool.nodes | activeviam.pool.nodes |
qfs.pool.procs | activeviam.pool.procs |
qfs.activepivot.expiredqueries.polling | activeviam.activepivot.expiredqueries.polling |
qfs.streaming.initialPublicationMode | activeviam.streaming.initialPublicationMode |
qfs.exportservice.rootpath | activeviam.exportservice.rootpath |
duringThreadNumber | activeviam.duringThreadNumber |
maxDuringThreadNumber | activeviam.maxDuringThreadNumber |
qfs.distribution.remote.pool.size | activeviam.distribution.remote.pool.size |
qfs.distribution.log.size.threshold | activeviam.distribution.log.size.threshold |
qfs.distribution.maxPendingDiscoveries | activeviam.distribution.maxPendingDiscoveries |
qfs.conflation.maxQueueSize | activeviam.conflation.maxQueueSize |
qfs.conflation.maxDelayTime | activeviam.conflation.maxDelayTime |
repository.cache.isolated_transactions | activeviam.repository.cache.isolated_transactions |
repository.poll.period | activeviam.repository.poll.period |
repository.poll.period.max | activeviam.repository.poll.period.max |
repository.poll.log.threshold | activeviam.repository.poll.log.threshold |
repository.daemon.waitStableDistribution | activeviam.repository.daemon.waitStableDistribution |
qfs.server.namespace.parent | NO LONGER USED (see Rest Services section) |
continuousQueryMonitoring | activeviam.continuousQueryMonitoring |
qfs.selection.listener.catchUpMaxTime | activeviam.selection.listener.catchUpMaxTime |
com.activeviam.json.strongPrimitiveParsing | activeviam.json.strongPrimitiveParsing |
com.activeviam.directquery.enableAutoVectorizer | activeviam.directquery.enableAutoVectorizer |
com.activeviam.directquery.cubeFeedingTimeoutInSeconds | activeviam.directquery.cubeFeedingTimeoutInSeconds |
com.activeviam.directquery.snowflake.maxresultsetsize | activeviam.directquery.snowflake.maxresultsetsize |
mdx.negativecellset.limit | activeviam.mdx.negativecellset.limit |
qfs.activecollector.renew.frequency | activeviam.activecollector.renew.frequency |
activepivot.snl.url | activemonitor.activepivot.url |
live.snl.url | activemonitor.ui.url |
sentinel.poll.period | activemonitor.poll.period |
sentinel.poll.period.max | activemonitor.poll.period.max |
sentinel.poll.log.threshold | activemonitor.poll.log.threshold |
sentinel.daemon.waitStableDistribution | activemonitor.daemon.waitStableDistribution |
sentinel.periodicExecutors.poolSize | activemonitor.periodicExecutors.poolSize |
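Mechanically, most of the renames in the table above follow a small set of rules (swap the `qfs.` prefix for `activeviam.`, move ActiveMonitor-related properties under `activemonitor.`, handle a few irregular keys individually). The sketch below is an illustrative migration helper built from a subset of the table, not an official tool; always verify against the full table.

```java
import java.util.Map;

public class PropertyKeyMigrator {

    // Irregular renames, copied from a few rows of the table above.
    private static final Map<String, String> EXACT_RENAMES = Map.of(
            "sentinel.poll.period", "activemonitor.poll.period",
            "quartet.property.separator", "activeviam.property.separator",
            "defaultChunkSize", "activeviam.defaultChunkSize",
            "nettyMessageMaxSize", "activeviam.distribution.nettyMessageMaxSize");

    /** Returns the 6.x name for an old property key. */
    public static String migrate(String oldKey) {
        String exact = EXACT_RENAMES.get(oldKey);
        if (exact != null) {
            return exact;
        }
        // Most "qfs." properties simply swap that prefix for "activeviam.".
        if (oldKey.startsWith("qfs.")) {
            return "activeviam." + oldKey.substring("qfs.".length());
        }
        // Unknown key: leave untouched and check the table manually.
        return oldKey;
    }
}
```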
Migrate to 5.11
Spring
Spring Boot was updated to version 2.6. As a result of the necessary configuration changes, the class
com.activeviam.cfg.SpringRestServiceInitializer
must be added to your Spring configuration,
unless you already import com.qfs.server.cfg.impl.ActiveViamRestServicesConfig
or
com.qfs.content.cfg.impl.StandaloneContentServerRestConfig
to configure the REST services.
Copper
The performance of the calculations defined in Copper has been improved. The transient memory generated
at query time is reduced by limiting boxing (the use, for instance, of java.lang.Integer
instead of int
).
As a consequence, multiple methods are now deprecated. We strongly advise users to migrate away from the deprecated methods; these changes were made purely to reduce transient memory usage.
Combining multiple
CopperElements
throughCopper.combine(...)
to create a new measure is no longer done withCopperMeasure map(SerializableFunction<IRecordReader, Object> mapper)
. The method is now calledmapToObject
to follow the changes brought to other methods. For instance,Copper.combine(Copper.sum(RISK__PNL), Copper.member(City))
.mapToInt(reader -> {
if (reader.read(1).equals("NYC")) {
return 0;
} else {
return reader.readDouble(0) > 0 ? 1 : -1;
}
})Combining multiple
CopperMeasure
throughCopper.combine(...)
to create a new measure is no longer done withCopperMeasure map(SerializableFunction<IRecordReader, Object> mapper)
. Instead, the new method's output type will dictate the method that should be used. For instance, if these measures are combined into a measure that outputsint
, one should useCopperMeasure mapToInt(SerializableToIntFunction<IArrayReader> mapper)
. For instance,Copper.combine(Copper.sum(RISK__PNL), Copper.constant(0d).withName(PNL_LIMIT))
.mapToInt(a -> a.readDouble(0) >= a.readDouble(1) ? 1 : -1) Modifying a measure follows the same rule:
Copper.sum(RISK__PNL).mapToDouble(reader -> reader.readDouble(0) * 2d)
The aforementioned changes make
CopperMeasure#withType()
relatively redundant: it is removed for measures that have already specified it through the choice of mapping method.Contrary to the specialized implementations like
mapToInt(SerializableToIntFunction<IArrayReader>)
, mapToDouble(SerializableToDoubleFunction<IArrayReader>)
, etc., the method mapToObject
takes as argument aSerializableBiConsumer<IArrayReader, IWritableCell>
. The provided cell must be used to write the result:Copper.sum(RISK__PNL).mapToObject((reader, cell) -> {
if (reader.readDouble(0) <= 1) {
cell.writeDouble(reader.readDouble(0));
}
}); From now on, only two types of measures support setting a specific output type: when using the various
mapToDouble
/mapToLong
methods and the newmap
, or right after aplus
/minus
/etc. usingwithType
.
Post Processors
To also reduce the memory consumption of IPostProcessor
s and IEvaluator
s, we added new implementations
of these components.
AAdvancedPostProcessor<outputType>
becomesAAdvancedPostProcessorV2
and does not declare a generic output type. All Post Processors must now receive the propertyoutputType
in their initialization properties.
IEvaluatorV2
's evaluation method isvoid evaluate(ILocation location, IRecordReader aggregatedMeasures, IWritableCell resultCell)
. Instead of reading underlying values from anObject[]
, one now needs to read them from the givenIRecordReader
, using the appropriate specialized method, and write the result of the evaluation into theresultCell
. For instance,@Override
public Double evaluate(ILocation location, final Object[] measures) {
if (measures[0] == null) {
return null;
} else {
return 10D + ((Number) measures[0]).doubleValue();
}
}
becomes
@Override
public void evaluate(ILocation location, IRecordReader aggregatedMeasures, final IWritableCell resultCell) {
if (!aggregatedMeasures.isNull(0)) {
resultCell.writeDouble(10D + aggregatedMeasures.readDouble(0));
}
}
In a similar fashion, all abstract implementations of
ITransformProcedure
are now deprecated. It is now necessary to directly implement the interface. As before, one can now either implement the methodtransform
or the methoddoTransform
, depending on the expected behavior.Similarly, the method
evaluateLeaf
of theADynamicAggregationPostProcessor
was reworked to provide a cell in ADynamicAggregationPostProcessorV2.
The method
IPostProcessor#init(Properties)
has been completely reworked, with clear methods to implement or override for each of the most important parts of the post processor.
Distribution
Cubes within a distributed cluster now have a unique identifier that is used to expose their remote addresses through
the query cube they are related to.
This introduces a breaking change in the distributed messenger constructors: ADistributedMessenger
now includes CubeEndPointInfo
in its constructor, a data structure holding information regarding cube addresses and identifiers. These data are resolved solely by core ActivePivot,
so one simply has to pass the new field along in any subsequent messenger.
Analysis Hierarchies
The implementations of Analysis Hierarchies have been cleaned as part of 5.11. Users are expected to extend and override AAnalysisHierarchyV2
.
Most users should not see a difference because Analysis Hierarchies were mostly populated with static members.
The contract between the two classes has been better described. There is now a clear separation between static members and members generated from the ActivePivot Schema.
Static members are still created by AAnalysisHierarchyV2#buildDiscriminatorPaths
.
However, to process introspected members, the new method is #processIntrospectedMember
, replacing a judicious override of #contributeMember
.
Instead of consuming a member and having to call the super
method to do the contribution, users receive the introspected members and their counts, as well as a consumer function.
It is up to the function to generate as many members as it wants and to pass each one of them to the consumer to publish them.
As before, AAnalysisHierarchyV2
sources are available for inspection and inspiration.
Datastore
- To create datastore queries one should now give a
SelectionField
instead of aString
which was the path to the field. ReferencedField
was renamedReachableField
.ISelectionField
was removed. It is replaced bySelectionField
.SelectionField.getName()
was renamedgetAlias()
.
getMostRecentVersion()
should never be used by a project. It is an internal method. Replace all calls togetMostRecentVersion()
by a call togetHead()
.
Vectors
All methods in IVectorBinding
were changed from:
void methodName(IVector source, IVector destination)
to:
void methodName(IVector left, IVector right)
This change is performed to align all the vector APIs together.
For instance, doing vectorA.minus(vectorB)
is now, under the hood, calling binding.minus(vectorA, vectorB)
, keeping the arguments in their initial order.
This should better prevent mistakes in the implementations of methods such as IVectorBinding.applyAsDouble(left, right, DoubleBinaryOperator)
.
Users writing their own vector implementations and their own bindings for these vectors will likely need to update those bindings accordingly.
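The convention can be illustrated with a self-contained toy, deliberately unrelated to the real ActiveViam types, showing the binding's argument order matching the fluent call:

```java
public class VectorOrderDemo {

    /** Toy stand-in for a vector; the real ActiveViam types differ. */
    static final class Vec {
        final double[] data;
        Vec(double... data) { this.data = data; }

        // vectorA.minus(vectorB) delegates to the binding as (this, other),
        // mirroring binding.minus(vectorA, vectorB) in 5.11: the arguments
        // keep their initial order.
        Vec minus(Vec other) { return Binding.minus(this, other); }
    }

    /** Toy binding: (left, right) matches the caller's order. */
    static final class Binding {
        static Vec minus(Vec left, Vec right) {
            double[] out = new double[left.data.length];
            for (int i = 0; i < out.length; i++) {
                out[i] = left.data[i] - right.data[i];
            }
            return new Vec(out);
        }
    }

    public static double[] demo() {
        Vec a = new Vec(5, 7);
        Vec b = new Vec(2, 3);
        return a.minus(b).data;
    }
}
```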
Content Service UI
ContentServerResourceServerConfig
is replaced by AdminUIResourceServerConfig
and content-server-ui.jar
is replaced by admin-ui.jar
.
Content Server and ActiveMonitor Database
Migrating H2 to v2
The dependency version of h2 for ActivePivot and ActiveMonitor was upgraded from 1.4.200 to 2.1.214. This version change
makes h2-generated .db
files incompatible between 6.0 and the prior versions.
Unwanted .db
files can simply be deleted; alternatively, you can try to restore the entries of your current database by following the h2 migration notes.
SOAP
We no longer return a full ActivePivotManagerDescription
in our webservices with retrieveManagerDescription
but instead return a lighter description
called SoapActivePivotDiscovery
in retrieveActivePivotDiscovery
. This discovery will contain all the information
you previously wanted from an ActivePivotManagerDescription
including a list of Catalogs and a list of Cubes
(And in each cube, its measures, dimensions, hierarchies and levels).
retrieveActivePivotDescription
also no longer returns an IActivePivotDescription
but a SoapCubeDiscovery
which is a lighter
description of your ActivePivot.
retrieveSchemaDescription
no longer exists in our webservices.
XML
Building ActivePivot from an XML description file is no longer available. Users are invited to use our fluent builder API using StartBuilding
.
ActiveviamProperties
The options for the ActiveViam Property -Dactiveviam.chunkAllocatorClass
were moved to specialized packages. See the Javadoc of the property CHUNK_ALLOCATOR_CLASS_PROPERTY
.
Migrate to 5.10
Datastore
Primary
indexes are now calledUnique
indexes (All the rows in an index of this type must be unique).TransactionManagerUtil.resetBranch
was moved to ITransactionManager
.- Rename
ARemoveUnknowKeyListener
toARemoveUnknownKeyListener
to fix typo. - Move
NumaSelectorFromStoreDescription
fromcom.qfs.desc
tocom.qfs.desc.impl
package. - Move
ADefaultEpochPolicy
fromcom.qfs.multiversion
tocom.qfs.multiversion.impl
package. - Rename
ImmutableRecordList
toImmutableRecordBlock
to match the naming pattern used by sibling classes.
ActivePivot
- The hierarchy builder changed: `hierarchyBuilder.factless().withStoreName(...)` was replaced by `hierarchyBuilder.fromStore(...)`, which cannot be called after `factless()` nor `notFactless()`.
- `MultiVersionAxisHierarchy.FIELD_NAME_PROPERTY` was renamed `MultiVersionAxisHierarchy.FIELD_EXPRESSION_PROPERTY`.
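For illustration, a hierarchy previously built from a named store (the store name "trades" is hypothetical) migrates as follows:

```diff
- hierarchyBuilder.factless().withStoreName("trades")
+ hierarchyBuilder.fromStore("trades")
```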
Copper
- All published Copper measures must now be named. Expressions like `Copper.sum("pnl").publish(context)` have to be replaced with `Copper.sum("pnl").withName("pnl.SUM").publish(context)`.
- `Copper.newSingleLevelHierarchy(...).from(CopperStoreField)` was replaced by `Copper.newHierarchy(...).fromField(CopperStoreField)`.
- `Copper.newSingleLevelHierarchy(...).from(CopperLevelValues)` was replaced by `Copper.newHierarchy(...).fromValues(CopperLevelValues)`.
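Concretely, migrating existing Copper code (the "pnl" measure, "Currency" hierarchy, and `currencyField` variable are illustrative) looks like:

```diff
- Copper.sum("pnl").publish(context);
+ Copper.sum("pnl").withName("pnl.SUM").publish(context);

- Copper.newSingleLevelHierarchy("Currency").from(currencyField);
+ Copper.newHierarchy("Currency").fromField(currencyField);
```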
Distribution
- The `JGroupsMessenger` has been removed. ActivePivot now relies on two messenger types, the `LocalMessenger` and the `NettyMessenger`. Note that the `NettyMessenger` still uses JGroups for member discovery and group membership. More details on this topic in Distributed Messenger.
- The property `NotificationMember#AWAIT_NOTIFICATIONS` has been renamed to `IDataClusterDefinition#AWAIT_NOTIFICATIONS_TIME`.
Functional Interfaces
The functional interfaces in package com.qfs.func
have been moved and renamed.
Old location | New location |
---|---|
com.qfs.func.IBiIntPredicate | com.activeviam.tech.base.internal.function.BiIntPredicate |
com.qfs.func.IFloatUnaryOperator | com.activeviam.tech.base.internal.function.FloatUnaryOperator |
com.qfs.func.IntBiFunction | com.activeviam.tech.base.internal.function.IntBiFunction |
com.qfs.func.ITriFunction | com.activeviam.tech.base.internal.function.TriFunction |
com.quartetfs.fwk.util.FloatBinaryOperator | com.activeviam.tech.base.internal.function.FloatBinaryOperator |
com.qfs.func.IEither | com.activeviam.tech.base.internal.function.IEither |
com.qfs.func.IOptionalEither | com.activeviam.util.function.IOptionalEither |
com.qfs.func.impl.Either | com.activeviam.util.function.impl.Either |
com.qfs.func.impl.OptionalEither | com.activeviam.util.function.impl.OptionalEither |
Query Results Context Value
The default behavior for the `QueriesResultLimit` context value has changed from `QueriesResultLimit#withoutLimit()` to `QueriesResultLimit#defaultLimit()` (the default transient and intermediate result limits amount to 100,000 and 1,000,000 point locations respectively). The default limits may appear low for certain projects; in that case, the configuration can easily be overridden, as explained in the respective documentation section.
ActiveViam Properties
- `activeviam.datastore.query.maxLookupsOnPrimary` was renamed `activeviam.datastore.query.maxLookupsOnUnique`.
- `ActiveViamProperty.MAX_LOOKUPS_ON_PRIMARY_INDEX` was renamed `ActiveViamProperty.MAX_LOOKUPS_ON_UNIQUE_INDEX`.
Spring Configuration Changes
The ActivePivot Configuration does not automatically contain the ContextValueFilter
bean anymore,
thus preventing conflicts with Spring Boot.
Azure Cloud Source migration
The Azure Cloud Source has migrated its dependency to the Azure Blob Storage SDK from v8 to v12.
A migration guide from v8 to v12 can be found on the Azure SDK repository.
Renamed Classes
All classes of the Azure Cloud Source have been changed to include the word Azure
as a prefix. As a result, the following classes had their name changed:
5.9 | 5.10 |
---|---|
CloudBlobPath | AzureBlobPath |
BlockBlobPath | AzureBlockBlobPath |
AppendBlobPath | AzureAppendBlobPath |
PageBlobPath | AzurePageBlobPath |
Changed Classes
The handling of client-side encryption has undergone some major changes to better match the new architecture of the Azure Blob Storage SDK. Changes about this are detailed in the below section.
`IAzureCloudDirectory`:
- Now extends `ICloudDirectory<BlobClientBase>` instead of `ICloudDirectory<CloudBlob>`.
- `getTBlob(String)` return types were updated with the new types from the Azure SDK.
Removed `IAzureEntityPath.upload(InputStream, long, BlobRequestOptions)`:
- Prefer using `getUnderlying()` to get a reference to the underlying blob client and use blob-specific configuration for uploads.
- Uploading with an unknown length is no longer supported for page blobs, and will throw an `UnsupportedOperationException` if attempted.

`Records#getGlobalDefaultValue` directly uses the content type value passed as an `int` instead of `ILiteralType`.
Client-side Encryption
In the Azure Blob Storage SDK, client-side encryption is now implemented in the package com.azure:azure-storage-blob-cryptography.
With the SDK upgrade, client-side encryption cannot be configured at the Azure service client level anymore. It needs to be specified on a per-blob basis, before uploading or downloading content.
As a result, two new classes have been introduced to handle client-side encryption:
- `AzureEncryptedCloudDirectory`: a variant of `AzureCloudDirectory` that additionally holds encryption keys for client-side encryption, and that can produce `AzureEncryptedBlobPath`s to existing and non-existing encrypted blobs.
- `AzureEncryptedBlobPath`: a path to an encrypted blob; essentially a wrapper around an `EncryptedBlobClient`.
The other classes `AzureBlobPath`, `AzureBlockBlobPath`, `AzureAppendBlobPath`, `AzurePageBlobPath` and `AzureCloudDirectory` are all unaware of client-side encryption: they will not encrypt uploaded data and will not decrypt downloaded data.
The two encrypted classes accept three additional arguments in their constructors compared to their non-encrypted counterparts:
- a key-wrapping algorithm (`String`)
- a key encryption key (`com.azure.core.cryptography.AsyncKeyEncryptionKey`)
- a key encryption key resolver (`com.azure.core.cryptography.AsyncKeyEncryptionKeyResolver`)
If the directory or path only needs to perform downloads, or only uploads, some arguments are not required and can be set to `null`:
| operations | required |
|---------------------|----------------------------------------------|
| download | key encryption key resolver |
| upload | key-wrapping algorithm, key encryption key |
| download and upload | all arguments |
The possible key wrapping algorithms are specified in the class
KeyWrapAlgorithm
from com.azure:azure-security-keyvault-keys
(the dependency is not included in the Azure Cloud Source).
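As a sketch only (the exact constructor signature of `AzureEncryptedCloudDirectory`, and the `serviceClient` and `keyResolver` variables, are assumptions, not taken from the 5.10 API), a download-only encrypted directory can leave the upload-side arguments as `null`:

```java
// Sketch: argument order is assumed, not verified against the 5.10 API.
// For download-only use, the key-wrapping algorithm and key encryption key
// are not required (see the table above); only the resolver is needed.
AzureEncryptedCloudDirectory directory = new AzureEncryptedCloudDirectory(
        serviceClient,   // Azure service client (hypothetical variable)
        "container",     // container name (hypothetical)
        null,            // key-wrapping algorithm: not needed for download
        null,            // key encryption key: not needed for download
        keyResolver);    // AsyncKeyEncryptionKeyResolver: required for download
```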
Some snippets of AsyncKeyEncryptionKey
creation from a local key or from an
Azure Key Vault can be found in the
readme of com.azure:azure-storage-blob-cryptography.
Important: the new implementation of client-side encryption in Azure Blob Storage SDK v12 only permits uploads to client-side-encrypted block blobs. The creation of client-side-encrypted blobs of other types (append and page) is not supported anymore.
Downloads of client-side-encrypted page and append blobs that were already created through other means are still possible through the `AzureEncryptedBlobPath`.
Migrate to 5.9
Java 8 is no longer supported. ActivePivot 5.9 is compatible with Java 11.
ActivePivot
New Copper API
Shipped as an experimental feature in ActivePivot 5.8.3, the new Copper API is officially replacing the one released with ActivePivot 5.7.0, which is now abandoned.
Shift measure creation
The Copper shift measure creation methods have changed: the `CopperMeasure#at([...])` methods have been replaced by a unified `CopperMeasure#shift(CopperLevelsAt...)` method.
Find more information in our user guide for updated examples.
Parent Value
The method Copper.parentValue(measure, hierarchy)
has been changed to measure.parentValueOn(hierarchy)
.
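In existing code, the change reads:

```diff
- Copper.parentValue(measure, hierarchy)
+ measure.parentValueOn(hierarchy)
```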
Setting the designated type of measure
To improve clarity, the method CopperMeasure#cast([...])
has been renamed to CopperMeasure#withType([...])
.
Level and hierarchy representation
LevelCoordinate
has been renamed and is now LevelIdentifier
.
HierarchyCoordinate
has been renamed and is now HierarchyIdentifier
.
Test jar
The Copper test utils that were previously packaged in a test-jar
have been moved to their own module:
<groupId>com.activeviam.activepivot</groupId>
<artifactId>activepivot-copper-test</artifactId>
Spring Configuration Changes
Schema configuration
ActivePivotConfig
now requires an IActivePivotManagerDescriptionConfig
.
IDatastoreDescriptionConfig
was merged into IActivePivotManagerDescriptionConfig
and
IParameterAwareDatastoreDescriptionConfig
was renamed IParameterAwareActivePivotManagerDescriptionConfig
.
In `IActivePivotManagerDescriptionConfig`, you must now implement `userManagerDescription()` (respectively `userSchemaDescription()`) instead of `managerDescription()` (respectively `schemaDescription()`).
The way the description post-processors are applied has been changed. You are impacted if you explicitly build the Datastore or
the ActivePivotManager in your project instead of using DatastoreConfig
and ActivePivotConfig
.
In this case, define an IActivePivotManagerDescriptionConfig
and give the results of the managerDescription()
and schemaDescription()
default methods to the builders.
IActivePivotConfig.activePivotManagerDescription()
was removed and replaced by IActivePivotManagerDescriptionConfig.managerDescription()
.
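A minimal sketch of the merged configuration contract (the return types and the description-building helpers invoked inside are assumptions, not the verified 5.9 signatures):

```java
// Sketch: userSchemaDescription()/userManagerDescription() come from the
// migration notes; the helper methods called inside are hypothetical.
@Configuration
public class MyManagerDescriptionConfig implements IActivePivotManagerDescriptionConfig {

    @Override
    public IDatastoreSchemaDescription userSchemaDescription() {
        return buildMySchemaDescription(); // your datastore stores and references
    }

    @Override
    public IActivePivotManagerDescription userManagerDescription() {
        return buildMyManagerDescription(); // your catalogs and cubes
    }
}
```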
CORS configuration
The CORS configuration now relies on Spring standards. Instead of our own `ICorsFilterConfig` that created a `SpringCorsFilter`, we moved to Spring's `CorsConfigurationSource`. As internal components require knowledge of the CORS configuration before the actual `CorsConfigurationSource` can be defined, a new interface `ICorsConfig` has been introduced, detailing the information of a standard CORS configuration. It is used in internal services to create REST and WS services compatible with the configuration, and in the sandbox project to create the final CORS configuration.
`ICorsConfig` only requires you to define the list of allowed origins. The other methods provide the accepted and exposed headers and the authorized methods; these have default implementations compatible with the ActivePivot stack.
SpringCorsFilter
has been deleted. Thanks to CorsConfigurationSource
, Spring automatically creates the filter.
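As an illustration, a Spring-standard bean can replace the removed filter; the origin, header and method values below are placeholders that would normally be derived from your `ICorsConfig` implementation:

```java
// Standard Spring CORS configuration; Spring creates the filter automatically.
@Bean
public CorsConfigurationSource corsConfigurationSource() {
    CorsConfiguration config = new CorsConfiguration();
    config.setAllowedOrigins(List.of("https://my-frontend.example.com")); // placeholder origin
    config.setAllowedMethods(List.of("GET", "POST", "PUT", "DELETE", "OPTIONS"));
    config.setAllowedHeaders(List.of("*"));
    config.setAllowCredentials(true);
    UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
    source.registerCorsConfiguration("/**", config);
    return source;
}
```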
Miscellaneous
`ActivePivotServicesConfig` no longer declares an internal attribute that created an unused dependency on `IDatastoreConfig`.
If needed, you can restore this dependency by extending the configuration class and restoring the attribute.
I18nConfig
was renamed AI18nConfig
. Many of the static methods have changed to instance methods to allow
easier extensions and refinements.
Aggregation Procedures
Interface
The interface was slightly changed to support more features in Copper (getDatastorePrefetch
and createContext
).
This change should be transparent for procedures extending AAnalysisAggregationProcedure
.
Additional validation checks
To offer support for Aggregation Procedures depending on other Procedures, ActivePivot uses the property
IAnalysisAggregationProcedureDescription#getUnderlyingLevels()
to identify the dependencies between Procedures.
This drives a change to the API, preventing Procedures from defining their own handled levels in the list of underlying levels.
This constraint is enforced by the validation procedure of the description. Configurations may need to be updated. Particularly,
in 5.8, you always had to define a handled hierarchy from builders. You can now avoid this with the method
withoutHierarchies()
.
Option to disable the Epoch level
The epoch level of the epoch dimension can now be enabled or disabled.
The epoch dimension is still enabled by default. The epoch level is now disabled by default. To enable it,
use IEpochDimensionDescription.setEpochLevelEnabled(boolean)
or IEpochDimensionBuilder.withEpochLevel
.
The IEpochDimensionBuilder.withBranchesLevel
method was renamed IEpochDimensionBuilder.withBranchLevel
.
The methods in IEpochDimensionDescription
were also renamed from *epochsLevel*
and *branchesLevel*
(e.g. getEpochsLevelComparator
or getBranchesLevelFormatter
) to *epochLevel*
and *branchLevel*
(e.g. getEpochLevelComparator
and getBranchLevelFormatter
).
Distributed Cube no longer supports the epoch level
New default aggregation functions
Aggregation functions that maintain a history state, like AVG, MIN, or MAX are memory intensive. Thus, for the MIN and MAX
aggregation functions, the new default behavior is append-only. This means that disaggregation is not supported for these
two functions. For disaggregation support, please use MIN_HISTORY
and MAX_HISTORY
aggregation functions
(e.g. value.MIN_HISTORY
or value.MAX_HISTORY
).
Default Distributed Message Size Limits
Previous ActivePivot versions had a maximum default distributed message size of 1GB for every message type. This limit has been reduced. The default maximum size is:
- 64MB for `IInitialDiscoveryMessage` messages
- 4MB for `ITransactionCommittedMessage` messages
- 4MB for `ScopedAggregatesRetrievalResultTransporter` answers wrapped in `BroadcastResult` messages
- 32MB for `DrillthroughMessageWithHeadersAnswers` answers wrapped in `BroadcastResult` messages
- 2MB for any other message types
You can still override these values using the property NETTY_MAX_SIZE_PROPERTY
of the ActiveViamProperty
.
In addition to the older configuration method using class names, it is possible to configure the default message size limits using the constants (enum) defined in `NettyStreamUtils#MessageType`, as follows:
- `INITIAL_DISCOVERY_MESSAGE` for initial discovery messages
- `ITRANSACTION_COMMITTED_MESSAGE` for transaction commit messages
- `SCOPEDAGGREGATES_RETRIEVAL_TRANSPORTER` for single GAQ query result messages
- `DRILLTHROUGH_HEADER` for drillthrough header query result messages
- `DRILLTHROUGH_MESSAGE_WITH_HEADERS_ANSWER` for drillthrough query result messages
- `GLOBAL_MESSAGE` for global configuration
For instance, the following statement used in prior versions:
- -Dqfs.distribution.netty.message.maxSize=com.quartetfs.biz.pivot.cellset.impl.ScopedAggregatesRetrievalResultTransporter=42m,com.qfs.messenger.message.IInitialDiscoveryMessage=200m
becomes:
+ -Dactiveviam.distribution.nettyMessageMaxSize=com.quartetfs.biz.pivot.cellset.impl.ScopedAggregatesRetrievalResultTransporter=42m,com.activeviam.messenger.message.IInitialDiscoveryMessage=200m
or
+ -Dactiveviam.distribution.nettyMessageMaxSize=singleGaqResult=42m,initialDiscovery=200m
Miscellaneous
`ALocationShiftPostProcessor#handleNoUnderlyinMeasure` is renamed to `ALocationShiftPostProcessor#handleNoUnderlyingMeasure` to fix the typo (missing "g").
REST and WS APIs
With the introduction of a new REST call to export query plans and another REST endpoint to forward client traces to a tracing server, the version of ActivePivot REST API has changed from v4 to v5. With the addition of the metadata to the WS updates of the Content Service entries, the WS API has changed from v4 to v5.
As components of the REST and WS API versions remain synchronized, these changes result in the following URL changes:
Before | After |
---|---|
pivot/rest/v4/... | pivot/rest/v5/... |
pivot/ws/v4/... | pivot/ws/v5/... |
content/rest/v4/... | content/rest/v5/... |
content/ws/v4/... | content/ws/v5/... |
Impacted REST services:
- ActivePivot services for queries, context configurations, etc
- Datastore service for queries and updates
- Content Service
- Tracing Service
CSV Source
The CSV Source can now accept incomplete lines (where the number of columns is smaller than the
expected number of attributes of a record). You can set this in `CSVParserConfiguration`'s full constructor, through
its `setAcceptIncompleteLines(boolean)` setter, or through the `ParserContext#setAcceptIncompleteLines(boolean)` method, which sets the property on the underlying `CSVParserConfiguration`.
You can also sample the input files to work on a small portion of them. The sampling policy
is passed as an argument of type `ICsvParserConfigPolicy` to the constructor of `CSVParserConfiguration`, or using
the method `CSVParserConfiguration#setParserPolicy(ICsvParserConfigPolicy)`.
The Core Product ships basic policies to load the first lines of one or more files, or to load a limited number of files.
Those are available as `ICsvParserConfigPolicy` static methods.
Parquet Source
The Parquet Source benefits from the same sampling policies as the CSV Source. They implement `IParquetReaderPolicy`
and are passed to the constructor of `ParquetParser`. Like the CSV Source, basic policies to load a limited
number of lines or files are available in `IParquetReaderPolicy`.
Other changes
- `LogWriteException` is now a runtime exception. This exception is still thrown when an error occurs during the writing of the Transaction Log.
- One of the signatures of `ITransactionManager.performInTransaction` was removed.
Migrate to 5.8
Announcement
JDK 11 Support
ActivePivot 5.8 supports both JDK 8 and JDK 11. The Java version used by
Maven when compiling ActivePivot depends on the JAVA_HOME
environment
variable, or the current java version if the JAVA_HOME
environment variable
is not set. For more information, see
the dedicated page.
ActivePivot
HierarchyUtil
The following methods no longer return `null` but instead throw an `UnknownOlapElementRuntimeException`
when the element is not found:
public static IHierarchy getHierarchy(IActivePivot pivot, String hierarchyDescription) {...}
public static IHierarchy getHierarchy(IActivePivot pivot, String dimensionName, String hierarchyName) {...}
public static ILevel getLevel(IActivePivot pivot, String levelDescription) {...}
public static ILevel getLevel(final IActivePivot pivot, final String dimensionName, final String hierarchyName, final String levelName) {...}
Filters on analysis hierarchies, PostProcessors and V2
In 5.7 the behavior of aggregate retrievers has been changed: the retrievals with filters on an analysis hierarchy now return null. In 5.7.3 we introduced new versions of post processors (ABasicPostProcessorV2
, ABaseDynamicAggregationPostProcessorV2
, ADynamicAggregationPostProcessorV2
and AFilteringPostProcessorV2
) that automatically remove the filters on analysis hierarchies in the prefetcher of the post processor, and add the filter back while expanding the result on the analysis hierarchies. It was introduced as V2 to avoid changing the behavior of the post processor in a bugfix release.
In 5.8 the post processors V2 have been removed and the core post
processors ABasicPostProcessor
, ABaseDynamicAggregationPostProcessor
, ADynamicAggregationPostProcessor
and AFilteringPostProcessor
now perform this filter removal if the "analysisLevels" property is
used.
Keep in mind that AAdvancedPostProcessor
does not handle those filters. If you
implement AAdvancedPostProcessor
directly,
this confluence page
explains how to handle them.
In order to solve this problem, ABasicPostProcessor
(previously ABaseDynamicPostProcessors
) now
uses named prefetchers. They will use the
constant ABasicPostProcessor.BASIC_POST_PROCESSOR_PREFETCHER
(
previously ABaseDynamicAggregationPostProcessor.DYNAMIC_AGGREGATION_POST_PROCESSOR_PREFETCHER
) as
the name of the prefetcher to retrieve. If you define your own prefetcher you need to name it
correctly:
IPrefetcher<?> namedPrefetcher = IPrefetcher.name(ABasicPostProcessor.BASIC_POST_PROCESSOR_PREFETCHER, prefetcher);
CubeFilterBuilder
ICubeFilterBuilder.includeMembers([...])
and excludeMembers([...])
now only accept List<String>
as members path name argument.
The previous signature that accepted List<?>
as members has been moved to ICubeFilterBuilder.includeMembersWithConditions([...])
and excludeMembersWithConditions([...])
.
The includeMembersWithConditions([...])
and excludeMembersWithConditions([...])
methods should only be used when building the CubeFilter with IConditions
.
Range Sharing configuration
Range sharing can no longer be configured with a boolean. You must use an integer with the `withRangeSharing()` method on the cube builder; that is, `.withProperty("rangeSharing", "false")` has been replaced by `.withoutRangeSharing()`.
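Expressed as a diff on the cube builder:

```diff
- .withProperty("rangeSharing", "false")
+ .withoutRangeSharing()
```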
AStoreStream
Listeners on a continuous query no longer receive an initial view when registering to this query.
This was previously not the case with `com.quartetfs.biz.pivot.postprocessing.streams.impl.AStoreStream`, which has now been fixed.
AStoreStream is a selection listener that registers on a datastore selection. This operation can occur in the middle of a transaction.
The javadoc of com.quartetfs.biz.pivot.postprocessing.streams.impl.AStoreStream#registerSelection
gives some pointers for concrete implementations.
Datastore
Duplicate key handling
Replacement: IDuplicateKeyWithinTransactionListener
→ IDuplicateKeyHandler
The IDuplicateKeyWithinTransactionListener
interface has been replaced by the IDuplicateKeyHandler
interface, as
described in the Changelog.
With IDuplicateKeyWithinTransactionListener
you could only define a custom behavior when two records had the same key
fields within the transaction but no record with these key fields existed in the datastore yet. You can still
define such a behavior using the IDuplicateKeyHandler.selectDuplicateKeyWithinTransaction
function.
However, it is now also possible to define a custom behavior when a record in the transaction has the same key fields
as a record already in the datastore, by defining the IDuplicateKeyHandler.selectDuplicateKeyInDatastore
function.
The default behavior, which is to always update with the latest record, has not changed.
The store description builder has been modified: you should replace
.onDuplicateKeyWithinTransaction().logException()
with .withDuplicateKeyHandler(DuplicateKeyHandlers.LOG_WITHIN_TRANSACTION)
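In the store description builder, the replacement reads:

```diff
- .onDuplicateKeyWithinTransaction().logException()
+ .withDuplicateKeyHandler(DuplicateKeyHandlers.LOG_WITHIN_TRANSACTION)
```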
Selection Builder
When using the datastore selection builder, you can add all fields reachable by references from a given store using
`.withAllreachableFields()`. Previously, the only policy was that all fields with the same name had to be reachable from one
another, and that the furthest one would be used.
The selection builder now walks the field graph using references between stores, and keeps track of the paths leading
to the same field. If there are no collisions, nothing happens. If there are, three options are available:
- resolve conflicts automatically by choosing the furthest field and throw an exception if two fields with the same
name are not reachable from one another (example:
ref1/field
andref2/field
). This is how it used to be in 5.7 and is the default behavior. - resolve conflicts automatically by choosing the closest field and throw an exception if two fields with the same
name are not reachable from one another (example:
ref1/field
andref2/field
). - resolve conflicts manually. You can use a
FieldsCollisionHandler
to transform a map of {field name, [possible paths]} into a map of {name, full expression} (the expression is of the formref1/ref2/fieldName
). This allows you to keep one, the other, both (with custom field name), or none.
Test jar
The datastore test utils that were previously packaged in a test-jar
have been moved to their own module:
<groupId>com.activeviam.tech</groupId>
<artifactId>datastore-test</artifactId>
Removal of LINE storage
The in-line storage type provided by ActivePivot 5.0 as a safe alternative to the new columnar
storage has been removed. Only one value remains possible: StorageType.COLUMN
.
Changes are required only if your Datastore stores were explicitly configured to use the LINE storage. If so, either replace it with COLUMN as below, or remove the call to configure the storage to use the default version.
new StoreDescriptionBuilder().withStoreName("<store>")
.withField(...).asKeyField()
... // Other configuration calls
- .withStorage(IStoreDescription.StorageType.LINE)
+ .withStorage(IStoreDescription.StorageType.COLUMN)
...
.build();
Format of partitions sizes
StoreUtils.getPartitionSizes(storeVersion)
now returns values of -1 in the array for non-existing partitions (either dropped or never initialized) at the given storeVersion
.
JDBC Source
The constructors for IJDBCSource
implementations without an appendBatchSize
have been removed.
The constructors for IJDBCSource
implementations no longer require a driverClassName
in the arguments when an IConnectionSupplier
is provided.
The SimpleConnectionSupplier
implementation of IConnectionSupplier
now requires a driverClassName
argument.
The appendQueueSize
attribute has been removed from all IJDBCSource
implementation constructors, and is now a field of the JDBCTopic
class.
JSON Web Tokens (JWT)
The RSA key pair defined by the qfs.jwt.key.[public|private]
variables in jwt.properties
should now be encoded with at least 2048 bits instead of the previous 1024. A new pair can be generated using JwtUtil
.
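`JwtUtil` is the documented way to generate the pair; as a plain-JDK illustration (the class name below is ours, not part of ActivePivot), a 2048-bit RSA key pair can be produced with `java.security.KeyPairGenerator`:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPublicKey;
import java.util.Base64;

public class JwtKeyPairGenerator {

    /** Generates an RSA key pair of the requested strength (>= 2048 bits for 5.8). */
    public static KeyPair generate(int keySize) throws Exception {
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(keySize);
        return generator.generateKeyPair();
    }

    public static void main(String[] args) throws Exception {
        KeyPair pair = generate(2048);
        // The Base64-encoded key material can then be copied into jwt.properties.
        System.out.println(Base64.getEncoder().encodeToString(pair.getPublic().getEncoded()));
        System.out.println(((RSAPublicKey) pair.getPublic()).getModulus().bitLength()); // 2048
    }
}
```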
StreamingService
The signature of IStreamingService.updateStreamRanges
changed. It now takes an IAxisRanges
instead of an array of IRangeProperties
.
Spring configuration
With the introduction of the centralized property accessor ActiveViamProperty
, the environment properties will be available only if you import the ActiveViamPropertyFromSpringConfig
configuration class. For more information, see Properties in ActivePivot.
Memory analysis
With 5.8.0, Memory Analysis services have been improved. Some elements, such as `DatastoreFeederVisitor`, have been migrated to the dedicated Memory Analysis Cube application; other classes have been renamed to better emphasize that the services are designed for memory analysis only, not for monitoring.
Renamed classes:
- `IMemoryMonitoringService` -> `IMemoryAnalysisService`
- `MemoryMonitoringService` -> `MemoryAnalysisService`
- `MonitoringStatisticSerializerUtil` -> `MemoryStatisticSerializerUtil`
Removed classes:
For the sake of readability, only major classes are mentioned.
DatastoreFeederVisitor
MemoryMonitoringDatastoreDescription
Previous migration notes
You can find the previous Migration Notes in our old Confluence documentation: