PDAL
Installation
You can run PDAL and TileDB code in Python using the python-pdal conda package. If Python packages are not required, the pdal conda package can be used instead. Both packages provide up-to-date PDAL and TileDB libraries for your conda environment.
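As a minimal sketch, assuming the conda-forge channel (the environment name pdal-tiledb is arbitrary):

```
conda create -n pdal-tiledb -c conda-forge python-pdal
conda activate pdal-tiledb
```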
Ingesting LAS Files
First, create a TileDB config file tiledb.config, where you can set any TileDB configuration parameter (e.g., AWS keys if you would like to write to a TileDB array on S3). Make sure you also add the following, as currently TileDB does not handle duplicate points (this will change in a future version).
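The exact snippet is not reproduced here; one plausible version, assuming TileDB's sm.check_coord_dups and sm.dedup_coords configuration parameters, disables the duplicate-coordinate checks so that writes containing repeated points do not error:

```
sm.check_coord_dups false
sm.dedup_coords false
```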
Then create a PDAL pipeline to translate some LAS data to a TileDB array by storing the following in a file called pipeline.json:
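A minimal sketch of such a pipeline, assuming an input file named sample.las (array_name and config_file are documented options of writers.tiledb):

```json
{
  "pipeline": [
    {
      "type": "readers.las",
      "filename": "sample.las"
    },
    {
      "type": "writers.tiledb",
      "array_name": "sample_array",
      "config_file": "tiledb.config"
    }
  ]
}
```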
See the PDAL documentation for information on the available options of the TileDB PDAL writer. You can then execute the pipeline with PDAL, which carries out the ingestion, as follows:
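For example, using the pdal pipeline command on the file above:

```
pdal pipeline pipeline.json
```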
We now have points and attributes stored in an array called sample_array. This write uses the streaming mode of PDAL.
You can view this sample_array directly from TileDB as follows (we demonstrate using TileDB's Python API, but any other API would work as well):
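A short sketch using tiledb-py; the Intensity attribute name is an assumption based on typical LAS inputs:

```python
import tiledb

# Open the sparse array produced by the PDAL writer
with tiledb.open("sample_array") as array:
    # The schema shows the X/Y/Z dimensions and the LAS attributes
    print(array.schema)

    # Read everything back as a dict of NumPy arrays
    data = array[:]
    print(data["X"][:10], data["Intensity"][:10])  # Intensity assumed present
```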
Parallel Writes
PDAL is single-threaded, but, coupled with TileDB's parallel write support, it becomes a powerful tool for ingesting enormous amounts of point cloud data into TileDB. The PDAL driver supports appending to an existing dataset, and we use this with Dask to create a parallel update.
We demonstrate parallel ingestion with the code below. Make sure to remove or move the sample_array created in the previous example.
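A sketch of one way to do this with python-pdal and Dask delayed tasks; the input glob is hypothetical, and the append option (a documented writers.tiledb option) drives the parallel updates after a first, non-appending pipeline creates the array:

```python
import glob
import json

import pdal
from dask import compute, delayed

ARRAY_NAME = "sample_array"

def ingest(filename, append):
    """Ingest one LAS file into the shared TileDB array."""
    spec = {
        "pipeline": [
            {"type": "readers.las", "filename": filename},
            {
                "type": "writers.tiledb",
                "array_name": ARRAY_NAME,
                "config_file": "tiledb.config",
                "append": append,
            },
        ]
    }
    return pdal.Pipeline(json.dumps(spec)).execute()

files = sorted(glob.glob("data/*.las"))  # hypothetical input location

# The first write creates the array schema; subsequent writes append in parallel.
ingest(files[0], append=False)
compute(*[delayed(ingest)(f, True) for f in files[1:]])
```

Each Dask task runs its own PDAL pipeline, and TileDB safely handles the concurrent writes to the array.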
Parallel Reads
Although the TileDB driver is parallel (i.e., it uses multiple threads for decompression and IO), PDAL itself is single-threaded, so some tasks may benefit from a further speed-up. Take, for instance, the following PDAL command, which counts the number of points in the dataset using the TileDB driver.
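One plausible form of that command, assuming pdal info's --driver override to force the TileDB reader on the array URI:

```
time pdal info sample_array --driver readers.tiledb --summary
```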
We can write a simple script in Python with Dask and direct access to TileDB to perform the same operation completely in parallel:
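A sketch of such a script; the slab partitioning over the X dimension and the chunk count of 16 are assumptions for illustration, not the original code:

```python
import numpy as np
import tiledb
from dask import compute, delayed

ARRAY = "sample_array"

def count_slab(lo, hi):
    """Count the points whose X coordinate lies in [lo, hi]."""
    with tiledb.open(ARRAY) as arr:
        # Fetch only the X coordinates; attrs=[] skips all attribute data
        res = arr.query(attrs=[], dims=["X"]).multi_index[lo:hi]
        return len(res["X"])

with tiledb.open(ARRAY) as arr:
    (x_min, x_max), _, _ = arr.nonempty_domain()

# Partition the X extent into 16 slabs. multi_index ranges are inclusive on
# both ends, so shrink interior upper bounds to avoid double counting.
edges = np.linspace(x_min, x_max, 17)
uppers = [np.nextafter(e, -np.inf) for e in edges[1:-1]] + [edges[-1]]
tasks = [delayed(count_slab)(lo, hi) for lo, hi in zip(edges[:-1], uppers)]
print(sum(compute(*tasks)))
```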
In both cases we get the answer of 31,530,863 points (for a 750MB compressed array). On an m5a.2xlarge machine on AWS, timing both approaches with the time command shows that the Python script using Dask is significantly faster than single-threaded PDAL.