Vectors in DirectQuery
The Atoti+ Datastore technology allows users to store, retrieve, and aggregate vectors like any other value. Atoti+ DirectQuery offers the same features, whatever the underlying database is.
Some databases, like ClickHouse, offer native support for vectors through an Array type, with pre-built functions covering the Atoti+ use cases. Other databases do offer vector types, like BigQuery Array or PostgreSQL Array, but lack the appropriate methods, such as aggregation on these vectors. DirectQuery fills this gap with techniques emulating vectors. When a database natively supports vectors, the emulation techniques are not implemented, as they would only impair performance without providing any benefit.
The table below summarizes the available techniques per database; in the sections that follow, the databases listed under each technique are the ones that support it.
Database | Native support | Emulation via multi-row values | Emulation via multi-column values |
---|---|---|---|
Amazon Redshift | | ✓ | ✓ |
ClickHouse | ✓ | | |
Google BigQuery | Partial | ✓ | ✓ |
Microsoft Azure Synapse | | ✓ | ✓ |
Snowflake | Partial | ✓ | ✓ |
Native support
Fields of type Array are automatically represented as vectors. This is the most efficient way of dealing with vectors: the database automatically provides the best storage layout, and join operations are simpler because each vector is stored in a single row.
However, aggregation on such values is not natively supported by most database engines. When available, DirectQuery makes use of the existing functions, as illustrated in the sketch at the end of this section. Otherwise, the data type is still accepted at query time, but only list queries can be run on it.
Fully supported:
- ClickHouse
Partially supported (can only run list queries, no aggregation queries):
- Snowflake
- BigQuery
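As an illustration of what native support provides, here is a minimal sketch using plain JDBC against ClickHouse; it is not the DirectQuery API itself. The `trades` table, its `category` and `prices` (Array(Float64)) columns, and the connection URL are assumptions made for this example. ClickHouse's `-ForEach` aggregate combinator (here `sumForEach`) performs the kind of element-wise aggregation that a vector-aware engine can delegate to.

```java
import java.lang.reflect.Array;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class NativeVectorExample {

    public static void main(String[] args) throws Exception {
        // Assumed connection URL and schema: a ClickHouse table `trades`
        // with a `category` String column and a `prices` Array(Float64) column.
        String url = "jdbc:clickhouse://localhost:8123/default";
        try (Connection connection = DriverManager.getConnection(url);
                Statement statement = connection.createStatement();
                // sumForEach aggregates arrays element by element, returning one
                // vector per category: the whole vector stays a single value.
                ResultSet rs = statement.executeQuery(
                        "SELECT category, sumForEach(prices) AS sum_prices "
                                + "FROM trades GROUP BY category")) {
            while (rs.next()) {
                String category = rs.getString("category");
                // The driver returns the aggregated vector as a single array value.
                Object rawArray = rs.getArray("sum_prices").getArray();
                double[] vector = new double[Array.getLength(rawArray)];
                for (int i = 0; i < vector.length; i++) {
                    vector[i] = ((Number) Array.get(rawArray, i)).doubleValue();
                }
                System.out.println(category + " -> " + java.util.Arrays.toString(vector));
            }
        }
    }
}
```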
Emulation with multiple columns
Vectors of a fixed size can be represented as multiple columns in a table. One can then instruct DirectQuery to regroup all those columns into a single vector.
Id | Category | Price1 | Price2 | .. | PriceN |
---|---|---|---|---|---|
1 | category_1 | 10 | 20 | .. | 307 |
2 | category_2 | 12 | 420 | .. | 748 |
The table above contains two standard fields, `Id` and `Category`. Columns `Price1` to `PriceN` represent the vector exploded across N columns. Behind the scenes, DirectQuery applies the same operation to each column and returns a final vector value to the calling code.
The only condition for this emulation is that all vectors share the same size throughout the table. This size matches the number of columns grouped to form the vector.
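As a rough illustration, the following sketch mimics with plain JDBC what this emulation amounts to for an element-wise SUM: the same aggregation is issued on every `PriceI` column and the per-column scalars are regrouped into one vector. The `facts` table name, the externally supplied `vectorSize`, and the connection handling are assumptions for the sketch; DirectQuery generates its own queries for this.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class MultiColumnVectorExample {

    /**
     * Element-wise SUM of the vector exploded across columns Price1..PriceN.
     * Each column is aggregated independently, then the N scalar results are
     * regrouped into one vector, which is what the multi-column emulation does.
     */
    static double[] sumPriceVector(Connection connection, int vectorSize) throws Exception {
        // Build "SELECT SUM(Price1), SUM(Price2), ..., SUM(PriceN) FROM facts".
        StringBuilder sql = new StringBuilder("SELECT ");
        for (int i = 1; i <= vectorSize; i++) {
            if (i > 1) {
                sql.append(", ");
            }
            sql.append("SUM(Price").append(i).append(")");
        }
        sql.append(" FROM facts");

        try (Statement statement = connection.createStatement();
                ResultSet rs = statement.executeQuery(sql.toString())) {
            rs.next();
            double[] vector = new double[vectorSize];
            for (int i = 0; i < vectorSize; i++) {
                vector[i] = rs.getDouble(i + 1); // JDBC result columns are 1-based
            }
            return vector;
        }
    }
}
```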
It is possible to put multiple vectors in the same table. Each vector is identified by its prefix name. In that case, each vector can have its own size.
Id | Category | Price1 | .. | PriceN | Quantity1 | .. | QuantityM |
---|---|---|---|---|---|---|---|
1 | cat1 | 1.99 | .. | 0.99 | 73 | .. | 17 |
where N and M can be different.
Full support:
- Snowflake
- BigQuery
- Azure Synapse
- Redshift
Emulation with multiple rows
Vectors of any size can be represented as multiple rows in a table. One can then instruct DirectQuery to regroup all those rows into a vector, using a dedicated field to index the vector elements.
Id | Category | Index | Price |
---|---|---|---|
1 | category_1 | 0 | 10 |
1 | category_1 | 1 | 20 |
1 | category_1 | .. | .. |
1 | category_1 | N | 307 |
2 | category_2 | 0 | 12 |
2 | category_2 | 1 | 420 |
2 | category_2 | .. | .. |
2 | category_2 | M | 748 |
The table above contains one vector field, `Price`, indexed by `Index`. Being a technical field, `Index` is automatically hidden from the "vectorized" table. In addition, DirectQuery automatically configures the remaining columns `Id` and `Category` as primary keys of the table.
There is no constraint on the sizes of distinct vectors: here, we can have N < M, N = M, or N > M.
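To make the mechanics concrete, here is a hedged sketch of how an element-wise SUM can be emulated over such a layout with plain JDBC: grouping by the index column aligns the i-th element of every vector, and reading the result in index order rebuilds the aggregated vector. The `facts` table name and the connection handling are assumptions; this is not the SQL DirectQuery actually generates.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class MultiRowVectorExample {

    /**
     * Element-wise SUM of all Price vectors stored one element per row.
     * Grouping by the index column aligns the i-th element of every vector,
     * and ordering by it puts the aggregated elements back in vector order.
     */
    static double[] sumPriceVector(Connection connection) throws Exception {
        // "Index" is quoted because it is a reserved word in several databases;
        // the exact quoting syntax depends on the target database.
        String sql = "SELECT \"Index\", SUM(Price) AS total "
                + "FROM facts GROUP BY \"Index\" ORDER BY \"Index\"";
        List<Double> elements = new ArrayList<>();
        try (Statement statement = connection.createStatement();
                ResultSet rs = statement.executeQuery(sql)) {
            while (rs.next()) {
                elements.add(rs.getDouble("total"));
            }
        }
        double[] vector = new double[elements.size()];
        for (int i = 0; i < vector.length; i++) {
            vector[i] = elements.get(i);
        }
        return vector;
    }
}
```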
It is possible for such a table to define multiple vectors. The following table now contains two vectors, `Price` and `Quantity`; `Index` remains the indexing column.
Id | Category | Index | Price | Quantity |
---|---|---|---|---|
1 | category_1 | 0 | 10.99 | 7 |
1 | category_1 | .. | .. | .. |
1 | category_1 | N | 307.01 | 4 |
In this case, DirectQuery requires that all vectors have the same size for a given key. Here, for `(Id, Category) = (1, category_1)`, `Price` and `Quantity` both have N elements.
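The sketch below illustrates why this constraint is natural with the multi-row layout: both vectors are read from the same rows, so for a given key every index value contributes exactly one `Price` and one `Quantity` element. The `facts` table name and the helper class are assumptions made for this example, not DirectQuery types.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class CoIndexedVectorsExample {

    /**
     * Reads the Price and Quantity vectors of one (Id, Category) key.
     * Both vectors are built from the same rows, ordered by the index column,
     * so they necessarily end up with the same number of elements.
     */
    static double[][] readVectors(Connection connection, long id, String category) throws Exception {
        // "Index" is quoted because it is a reserved word in several databases.
        String sql = "SELECT Price, Quantity FROM facts "
                + "WHERE Id = ? AND Category = ? ORDER BY \"Index\"";
        List<Double> prices = new ArrayList<>();
        List<Double> quantities = new ArrayList<>();
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setLong(1, id);
            statement.setString(2, category);
            try (ResultSet rs = statement.executeQuery()) {
                while (rs.next()) {
                    prices.add(rs.getDouble("Price"));
                    quantities.add(rs.getDouble("Quantity"));
                }
            }
        }
        double[][] vectors = new double[2][prices.size()];
        for (int i = 0; i < prices.size(); i++) {
            vectors[0][i] = prices.get(i);
            vectors[1][i] = quantities.get(i);
        }
        return vectors;
    }
}
```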
Full support:
- Snowflake
- BigQuery
- Azure Synapse
- Redshift