Tuning Consolidation

The best scenario for maximizing read performance is to have a single fragment. There are only three ways to end up with a single fragment: (i) performing a single write (which may not be possible in applications where the data are much larger than RAM), (ii) writing in global order, i.e., appending data to your fragments (which may not be possible in applications where the data do not arrive in global order), and (iii) frequently consolidating your fragments, which is the most practical choice for most applications. However, properly tuning consolidation for an application can be challenging.

We provide a few tips for maximizing consolidation performance.

  • Perform dense writes in subarrays that align with the space tile boundaries. This prevents TileDB from padding partial tiles with empty (fill) cell values, which may make it harder to find fragments that are good candidates for consolidation.

  • When updating a dense array, try to rewrite the same dense subarrays. This helps the pre-processing clean-up process, which can rapidly delete older, fully overwritten fragments without performing any consolidation.

  • For sparse arrays (or sparse writes in dense arrays), perform writes of approximately equal sizes. This will lead to balanced consolidation.

  • It may be a good idea to invoke consolidation after every write, tuning sm.consolidation.step_min_frags, sm.consolidation.step_max_frags and sm.consolidation.steps to emulate the way LSM-trees work. Specifically, choose a small value for sm.consolidation.step_min_frags and sm.consolidation.step_max_frags, e.g., 2-20. This ensures that only a small number of fragments is consolidated per step. Then set the number of steps (sm.consolidation.steps) to a large value, so that consolidation proceeds recursively until a single fragment (or very few fragments) remains. If consolidation is invoked after each write, its cost is amortized over all ingestion processes in the lifetime of your system. Note that in that case the consolidation times will be quite variable: sometimes no consolidation will be needed at all, sometimes a few fast consolidation steps will be performed (involving a few small fragments), and sometimes (although much less frequently) consolidation will take much longer because it is merging very large fragments. Nevertheless, this approach yields an excellent amortized overall ingestion time, very few fragments and, hence, fast reads.

  • Increase the buffer size used internally during consolidation. This is controlled by the config parameter sm.consolidation.buffer_size, which determines the per-attribute buffer size used when reading from the old fragments and writing to the new consolidated fragment. A larger buffer size generally improves overall performance.

  • Very large fragments do not perform optimally when reading arrays. To avoid them, set the sm.consolidation.max_fragment_size parameter so that the result of consolidation is split into multiple fragments, each smaller in size than the set value in bytes.
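To make the first tip above concrete, here is a small helper (hypothetical, not part of the TileDB API) that expands a requested write range outward to the enclosing space-tile boundaries along one dimension, given the dimension's domain start and tile extent:

```python
def tile_aligned_range(lo, hi, dim_start, tile_extent):
    """Expand the inclusive range [lo, hi] outward to space-tile boundaries.

    A write issued on the returned range covers whole tiles only, so TileDB
    does not need to pad partial tiles with fill values.
    """
    aligned_lo = dim_start + ((lo - dim_start) // tile_extent) * tile_extent
    aligned_hi = dim_start + ((hi - dim_start) // tile_extent + 1) * tile_extent - 1
    return aligned_lo, aligned_hi

# For a dimension with domain starting at 1 and tile extent 10, a write to
# [13, 27] should be expanded to the tile-aligned range [11, 30]:
print(tile_aligned_range(13, 27, 1, 10))  # (11, 30)
```

In practice you would compute such aligned ranges per dimension and issue your dense writes on the aligned subarrays.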
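Assuming the TileDB-Py bindings, the configuration parameters discussed above could be set as in the following sketch (the array URI and all parameter values are placeholders to be tuned for your workload):

```python
import tiledb

config = tiledb.Config({
    # Emulate LSM-tree behavior: small consolidation steps, many of them.
    "sm.consolidation.step_min_frags": "2",
    "sm.consolidation.step_max_frags": "10",
    "sm.consolidation.steps": "100",
    # Larger per-attribute buffers generally improve consolidation throughput.
    "sm.consolidation.buffer_size": str(100 * 1024 * 1024),  # 100 MB
    # Split the consolidated result into fragments of at most ~1 GB each.
    "sm.consolidation.max_fragment_size": str(1024 ** 3),
})

# Consolidate the fragments, then vacuum the now-redundant old ones.
tiledb.consolidate("my_array_uri", config=config)
tiledb.vacuum("my_array_uri")
```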

See Configuration for information on how to set the above-mentioned configuration parameters.