atoti.QueryCube.unload_members_from_data_cube()

QueryCube.unload_members_from_data_cube(members, *, data_cube_id, level, scenario_name='Base')

Unload the given members of a level from a data cube.

This is mostly used for data rollover.
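A typical rollover job loads the newest slice of facts into the data cube and then evicts the expired slice from the query cube. A minimal sketch, assuming a hypothetical "Date" distributing level and an expired_date computed by the job (none of these names come from the example below):

>>> def roll_over(query_cube, *, data_cube_id, date_level, expired_date):
...     # New facts are assumed to have been loaded into the data cube by a
...     # separate step; this only evicts the members of the expired date.
...     query_cube.unload_members_from_data_cube(
...         {expired_date},
...         data_cube_id=data_cube_id,
...         level=date_level,
...     )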

Note

This requires the query cube to have been created with allow_data_duplication set to True and with a non-empty distributing_levels.
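For reference, such a definition looks like this (a hedged sketch: the cluster, application name, and level tuple are placeholders; the example below sets everything up end to end):

>>> definition = tt.QueryCubeDefinition(
...     cluster,  # placeholder: a cluster registered on the query session
...     application_names={"App"},
...     allow_data_duplication=True,  # required to unload members later
...     distributing_levels={("Dimension", "Hierarchy", "Level")},  # must be non-empty
... )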

Parameters:
  • members (Set[bool | int | float | date | datetime | time | str]) – The members to unload.

  • data_cube_id (str) – The ID of the data cube from which to unload the members. This must be equal to the id_in_cluster argument passed to create_cube().

  • level (HasIdentifier[LevelIdentifier] | LevelIdentifier) – The level containing the members to unload.

  • scenario_name (str) – The name of the scenario from which the facts must be unloaded.

Return type:

None

Example

Setting up the cubes:

>>> from secrets import token_urlsafe
>>> from tempfile import mkdtemp
>>> from time import sleep
>>> import pandas as pd
>>> import atoti as tt
>>> query_session = tt.QuerySession.start()
>>> data_session = tt.Session.start()
>>> def query_by_city():
...     cube = query_session.session.cubes["Query cube"]
...     l, m = cube.levels, cube.measures
...     return cube.query(m["Number.SUM"], levels=[l["City"]])
>>> def wait_for_data(*, expected_city_count: int):
...     max_attempts = 30
...     for _ in range(max_attempts):
...         try:
...             if len(query_by_city().index) == expected_city_count:
...                 return
...         except Exception:
...             pass
...         sleep(1)
...     raise RuntimeError(f"Failed {max_attempts} attempts.")
>>> data_session.clusters["Cluster"] = query_session.session.clusters[
...     "Cluster"
... ] = tt.ClusterDefinition(
...     application_names={"Cities"},
...     discovery_protocol=JdbcPingDiscoveryProtocol(
...         f"jdbc:h2:{mkdtemp('atoti-cluster')}/db",
...         username="sa",
...         password="",
...     ),
...     authentication_token=token_urlsafe(),
... )
>>> query_session.query_cubes["Query cube"] = tt.QueryCubeDefinition(
...     query_session.session.clusters["Query cube"],
...     application_names={"Cities"},
...     allow_data_duplication=True,
...     distributing_levels={("Cities", "City", "City")},
... )
>>> data = pd.DataFrame(
...     columns=["City", "Number"],
...     data=[
...         ("Paris", 20.0),
...         ("London", 5.0),
...         ("NYC", 7.0),
...     ],
... )
>>> table = data_session.read_pandas(
...     data, keys={"City"}, table_name="Cities"
... )
>>> data_cube = data_session.create_cube(table, id_in_cluster="Europe")
>>> wait_for_data(expected_city_count=3)
>>> query_by_city()
       Number.SUM
City
London       5.00
NYC          7.00
Paris       20.00

Unloading the facts associated with the London and NYC members:

>>> query_cube = query_session.query_cubes["Query cube"]
>>> query_cube.unload_members_from_data_cube(
...     {"London", "NYC"},
...     data_cube_id="Europe",
...     level=data_cube.levels["City"],
... )
>>> wait_for_data(expected_city_count=1)
>>> query_by_city()
      Number.SUM
City
Paris      20.00
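
If the data cube had loaded facts into a named scenario, the same call could target it through scenario_name. A hedged sketch reusing the objects above (the "Stress test" scenario is an assumption and does not exist in this example):

>>> query_cube.unload_members_from_data_cube(
...     {"Paris"},
...     data_cube_id="Europe",
...     level=data_cube.levels["City"],
...     scenario_name="Stress test",  # assumed scenario name
... )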