
Vectors in DirectQuery

Atoti Datastore technology allows users to store, retrieve, and aggregate vectors like any other value. Atoti DirectQuery offers the same features, whatever the underlying database is.

Some databases, like ClickHouse, offer native support for vectors through the Array type, with pre-built functions covering Atoti use cases. Other databases offer vector types, like BigQuery Array or PostgreSQL Array, but lack the appropriate methods, such as aggregation on these vectors. DirectQuery fills this gap with techniques emulating vectors. Generally, when a database natively supports vectors, emulation techniques are not implemented, as they would only impair performance without providing any benefit.

The table below summarizes the vector support available for each database; the sections that follow describe each technique and list the databases that support it.

| Database | Native support | Emulation via multi-row values | Emulation via multi-column values |
| --- | --- | --- | --- |
| Amazon Redshift | ✓ | ✓ | ✓ |
| Google BigQuery | ✓ | ✓ | ✓ |
| Microsoft Azure Synapse | ✓ | ✓ | ✓ |
| Microsoft SQL Server | ✓ | ✓ | ✓ |

Native support

Table fields of the Array type are automatically represented as vectors. This is the most efficient way of dealing with vectors: the database provides the best storage representation, and join operations are simpler because each vector fits in a single row in the database.
However, aggregation on such values is not natively supported by most database engines. When available, DirectQuery makes use of the existing functions; otherwise, the data type is still accepted, but only list queries are possible and aggregation queries are rejected at query time.
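To illustrate what "aggregation on vectors" means here, the sketch below sums a vector column element-wise across rows, which is the behavior DirectQuery expects from the database. This is plain Python with made-up sample data, not a real engine:

```python
# Element-wise aggregation of an Array-typed column: summing a vector
# field across rows yields one vector, not one scalar.
rows = [
    ("A", [1.0, 2.0, 3.0]),
    ("A", [10.0, 20.0, 30.0]),
]
# zip(*...) pairs up the i-th components of every vector, so each
# sum() produces one component of the aggregated vector.
total = [sum(component) for component in zip(*(vec for _, vec in rows))]
print(total)  # [11.0, 22.0, 33.0]
```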

Full support, out of the box:

  • ClickHouse

Full support, using UDAFs (User-Defined Aggregation Functions):

  • Databricks (only on Databricks Clusters and not Databricks SQL Warehouses)

Partially supported (list queries only, no aggregation queries):

    • BigQuery
    • Databricks SQL Warehouse
    • Snowflake

Emulation with multiple columns

Vectors of fixed size can be represented as multiple columns in a table. Then, one can instruct DirectQuery to regroup all those columns into a vector.


The table above contains two standard fields, Id and Category. Columns Price1 to PriceN represent the vector exploded across N columns. Behind the scenes, DirectQuery applies the same operation to each column and returns a single vector value to the calling code.

The only condition for this emulation is that all vectors share the same size throughout the table. This size matches the number of columns grouped to form the vector.

It is possible to put multiple vectors in the same table, each identified by its prefix name. In that case, the vectors can have different sizes.


where N and M can be different.
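The regrouping described above can be sketched as follows. SQLite stands in for the real database engine, and the table and column names (trades, Price1..Price3) are illustrative, not part of the DirectQuery API:

```python
import sqlite3

# Hypothetical multi-column layout: the vector "Price" is exploded
# across three columns Price1..Price3.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE trades (
        Id INTEGER, Category TEXT,
        Price1 REAL, Price2 REAL, Price3 REAL
    )
""")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?, ?, ?)",
    [(1, "A", 1.0, 2.0, 3.0), (2, "A", 10.0, 20.0, 30.0)],
)

# Emulated vector aggregation: the same SUM is applied to every
# column, and the scalar results are regrouped into one vector.
category, *price_vector = conn.execute("""
    SELECT Category, SUM(Price1), SUM(Price2), SUM(Price3)
    FROM trades GROUP BY Category
""").fetchone()
print(category, price_vector)  # A [11.0, 22.0, 33.0]
```

Repeating the aggregation function once per column is also why the generated SQL grows with the vector size, as noted in the limitations below.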

Full support:

  • BigQuery
  • Databricks
  • Microsoft Azure Synapse
  • Microsoft SQL Server
  • Redshift
  • Snowflake


Multi-column vectors have a few limitations:

  • All vectors sharing the same prefix must have the same size.
  • Some databases limit the number of columns in a single table, generally to a few thousand, so this technique cannot store huge vectors.
  • The generated SQL string to query multi-column vector tables is large. On some databases, it can take up to a few seconds to compile this query before running it.

Emulation with multiple rows

Vectors of any size can be represented as multiple rows in a table. Then, one can instruct DirectQuery to regroup all those rows into a vector, using a field as the pivot value indexing the vector elements.


The table above contains one vector field, Price, indexed by Index. Being a technical field, Index is automatically hidden from the "vectorized" table. In addition, DirectQuery automatically configures the remaining columns, Id and Category, as the primary key of the table.
There is no constraint on the sizes of distinct vectors: here, we can have N < M, N = M, or N > M.

It is possible for such a table to define multiple vectors. The following table contains two vectors, Price and Quantity; Index remains the indexing column.


In this case, DirectQuery requires that all vectors have the same size for a given key. Here, for (Id, Category) = (1, category_1), Price and Quantity both have N elements.
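The multi-row regrouping can be sketched as follows. SQLite again stands in for the real engine, and the names (prices, Idx for the Index field, which is a SQL keyword) are illustrative:

```python
import sqlite3

# Hypothetical multi-row layout: each (Id, Category) key owns several
# rows, and the technical column Idx orders the vector elements.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE prices (Id INTEGER, Category TEXT, Idx INTEGER, Price REAL)"
)
conn.executemany(
    "INSERT INTO prices VALUES (?, ?, ?, ?)",
    [
        (1, "category_1", 0, 1.0), (1, "category_1", 1, 2.0),
        (2, "category_2", 0, 5.0), (2, "category_2", 1, 6.0),
        (2, "category_2", 2, 7.0),  # vector sizes may differ per key
    ],
)

# Regroup the rows of each key into one vector, ordered by the index.
vectors = {}
for id_, cat, price in conn.execute(
    "SELECT Id, Category, Price FROM prices ORDER BY Id, Category, Idx"
):
    vectors.setdefault((id_, cat), []).append(price)
print(vectors)
# {(1, 'category_1'): [1.0, 2.0], (2, 'category_2'): [5.0, 6.0, 7.0]}
```

Because the key columns identify each vector, they naturally act as the primary key of the "vectorized" table, which is why DirectQuery configures them that way automatically.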

Full support:

  • BigQuery
  • Databricks
  • Microsoft Azure Synapse
  • Microsoft SQL Server
  • Redshift
  • Snowflake