Drone Mapping: How to Select a LiDAR System


January 5, 2024

Ahmed El-Rabbany, Ph.D., P.Eng.
Toronto Metropolitan University

Selecting a survey-grade LiDAR system for drone-based topographic mapping can be challenging, especially when the system’s data sheet is difficult to read and/or lacks critical information. To select a commercial drone LiDAR system appropriately, users should consider the characteristics of four different sensors, namely the LiDAR sensor itself, the GNSS system, the inertial measurement unit (IMU), and the RGB camera. In addition, the accompanying software and its capabilities, including strip adjustment, are critical elements to consider. Ideally, mission planning software, perhaps from the same manufacturer, should also be considered.

Survey-grade LiDAR systems will typically integrate GNSS and IMU data, as the two complement each other. While GNSS provides the initialization and corrections to the inertial system, the latter provides high-rate data and bridges GNSS gaps when the satellite signal is blocked or temporarily lost for a short period (a few tens of seconds). Typically, the GNSS receiver will be a high-end multi-constellation, multi-frequency system. This, however, is not necessarily the case with the IMU. A high-quality GNSS/IMU combination results in a precise trajectory and orientation (heading, pitch, roll) of the drone (and consequently, of the LiDAR system). As the drone orientation (AKA attitude) plays a critical role in the accuracy of the resulting LiDAR point cloud, and since the orientation is obtained essentially from the IMU, the quality of the IMU is critical. In fact, a poor-quality IMU not only affects the accuracy of the point cloud coordinates, but also requires frequent on-the-job calibration, which decreases productivity. The user should consult the data sheet (specifications) for information about the GNSS/IMU sensor quality as well as the positioning and attitude accuracy.

Two types of survey-grade drone LiDAR sensors are currently available on the market, namely mechanical spinning (single- and multi-beam) and solid state. High-end drone LiDAR sensors are typically single-beam mechanical spinning. Examples of these LiDAR sensors are the CHCNav AlphaAir 10 and the Riegl VUX-120. Multi-beam mechanical spinning LiDAR sensors tend to be noisier than their single-beam counterparts. Examples of multi-beam spinning LiDAR sensors include the Hesai XT32M2X and the Velodyne Ultra Puck. Solid-state LiDAR sensors, on the other hand, are typically low-cost sensors. Examples of LiDAR systems that use solid-state LiDAR sensors include the CHCNav AlphaAir 450 and the DJI Zenmuse L1 and L2.

When selecting a LiDAR sensor, a number of factors must be considered. These include the sensor characteristics, sensor weight and cost, and the application. Among the important sensor characteristics are the maximum measuring range, its accuracy/precision and the corresponding measuring conditions (e.g., target reflectivity, measuring altitude, incident angle, ambient weather conditions), the operating flight above-ground level (AGL), the laser beam footprint (or equivalently, beam divergence), the scan speed and rate, the number of returns (echoes), and the field of view (FOV). The maximum measuring range represents the maximum slant distance (not the altitude), which is typically given for a 20% target reflectivity [note: target reflectivity is a function of surface composition (materials); for example, snow and limestone are highly reflective surfaces (about 80%), followed by sand (about 60%), concrete (about 30-40%), and asphalt (about 10-20%)]. Range uncertainty varies with target geometry and size, distance to the target, scan incident angle, environmental conditions (e.g., fog, dust, bright sunlight), and target reflectivity. Typical specifications will provide the maximum range for flat targets larger than the laser footprint, excellent atmospheric visibility, and a perpendicular scan incident angle. It should be pointed out, however, that the maximum range will be reduced if the laser pulse hits more than one target, as the total laser transmitter power will be split. As well, the range uncertainty will increase as the distance to the target increases, under poor environmental conditions, and with lower target reflectivity. Furthermore, the range uncertainty will increase if the incident angle is not perpendicular: the larger the incident angle, the larger the range uncertainty.

When comparing different systems, users must make sure that the comparison is performed under the same conditions. For example, some manufacturers provide range precision and accuracy at a 30 m or 50 m range, while others provide them at 100 m or even 150 m. As the uncertainty increases with distance (altitude), a precision quoted as 1 cm at a 50 m altitude, for example, will be larger when estimated at a 100 m altitude and much larger when estimated at a 150 m altitude.

The maximum operating flight above-ground level (AGL) will always be smaller than the maximum measuring range of the LiDAR system. Typically, manufacturers will provide both the maximum AGL and the maximum measuring range of their LiDAR systems in the data sheet. As such, when comparing different systems, a user should consider both. Users should also take into consideration that the maximum drone flight altitude in Canada is limited to 400 feet, or 120 m (unless the pilot-in-command obtains a Special Flight Operation Certificate (SFOC) from Transport Canada).

An important element that must be considered when examining the laser beam footprint (or beam divergence) is how it is defined. The laser beam spreads out as it travels away from the LiDAR sensor; the angular measure of the diameter of the laser beam is called beam divergence. Additionally, the laser beam profile does not have sharp edges; that is, the radiant energy falls off gradually away from the centre of the beam. Most laser beam profiles can be approximated by the so-called Gaussian function (i.e., the energy falls off approximately exponentially). As such, three definitions are commonly used for the LiDAR beam footprint (beam divergence): (1) the footprint at 50% peak intensity, i.e., where the radiant energy falls off to 50% of the peak intensity (also known as full width at half maximum, FWHM); (2) the 1/e point (corresponding to 36.8% of peak intensity); and (3) the 1/e2 point (corresponding to 13.5% of peak intensity). To correctly compare LiDAR sensors, the same definition must be used. It should also be pointed out that the beam divergence (or footprint) given in the data sheet represents the one measured at a zero scan (incident) angle (i.e., in the vertical direction). As the scan incident angle increases, the footprint increases.

For example, consider flat terrain and a LiDAR system with the following specifications: max. range = 150 m, max. AGL = 90 m, FOV = 360°, footprint = 15*10 cm (at 50% peak intensity). For such a system, although the system’s FOV is 360°, the actual FOV at the maximum range and maximum AGL will be limited to 106°. In addition, the footprint at the edge of the actual FOV (i.e., at an incident angle of 53°) will be about 25*17 cm (at 50% peak intensity). If we consider the more realistic 1/e2 point, rather than the 50% point, the beam footprint will be about 42*29 cm. In fact, because of the large footprint (and uncertainty) of the outer beams, it is typically recommended to limit the FOV to a maximum of 80° or 90°.
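The swath geometry above can be reproduced with a short calculation. The following sketch (plain Python, using the example’s numbers; the FWHM-to-1/e2 conversion factor of about 1.70 assumes an ideal Gaussian beam profile) computes the effective FOV and the edge-of-swath footprint:

```python
import math

def effective_fov_deg(max_range_m, agl_m):
    # Half-angle at which the slant range reaches the sensor's maximum range
    return 2.0 * math.degrees(math.acos(agl_m / max_range_m))

def edge_footprint_cm(nadir_fp_cm, max_range_m, agl_m):
    # Footprint grows roughly linearly with slant range (constant divergence)
    return tuple(d * max_range_m / agl_m for d in nadir_fp_cm)

# For a Gaussian beam, the 1/e^2 diameter is about 1.70x the FWHM diameter
FWHM_TO_1E2 = 2.0 / math.sqrt(2.0 * math.log(2.0))

fov = effective_fov_deg(150, 90)               # effective FOV, in degrees
fwhm = edge_footprint_cm((15, 10), 150, 90)    # FWHM footprint at the swath edge
one_e2 = tuple(d * FWHM_TO_1E2 for d in fwhm)  # footprint at the 1/e^2 point
```

With these inputs it returns roughly 106° for the effective FOV, about 25*17 cm for the FWHM footprint at the swath edge, and about 42*28 cm at the 1/e2 point, in line with the figures quoted above (small differences are due to rounding).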

The importance of the laser footprint size is that the smaller the footprint, the higher the precision of the distance measurement and the finer the resolution of topographic details that can be obtained. In other words, a small laser beam footprint (i.e., a small beam divergence) results in a higher quality digital terrain model (DTM) of the project area. On the other hand, a LiDAR with a large beam footprint will typically result in a lower precision of distance measurement (the laser energy spreads over a larger area on the ground, which increases the noise) and a coarser resolution of topographic details. The number of returns, or echoes, and scan speed are also critical, especially for areas with high vegetation. Typically, for a survey-grade LiDAR sensor, an emitted pulse will have no returns, one return, or multiple returns. The no-return situation occurs, for example, when the distance between the sensor and the target exceeds the maximum range. The one-return case, on the other hand, occurs, for example, when the laser pulse hits the ground surface with no other targets in the way. In forests or areas with high vegetation, the laser pulse will hit different parts of the canopy (e.g., branches, leaves) until it reaches the bare ground, or perhaps loses all of its energy before it reaches the bare ground. Each target hit will reflect a signal (a return) to the LiDAR sensor with a different strength (intensity), which plays an important role in classifying the different objects in the project area. A high-end LiDAR will have multiple returns, and the last return will define the bare ground with a high degree of probability. The AlphaAir 10 LiDAR, for example, has a very small beam divergence of 0.33 mrad (corresponding footprint of 3.3*3.3 cm) at a 100 m altitude (50% peak intensity) and can provide up to 8 returns (sensevillegeo.com). This means that the likelihood of hitting the bare ground is very high, even for heavily vegetated areas, and the resulting DTM will be of high resolution.
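As a quick sanity check on such specifications, the footprint diameter at nadir is approximately the beam divergence (in radians) multiplied by the range (a small-angle approximation):

```python
def nadir_footprint_cm(divergence_mrad, range_m):
    # Small-angle approximation: diameter = divergence (rad) * range, in cm
    return divergence_mrad * 1e-3 * range_m * 100.0

# 0.33 mrad at a 100 m altitude gives a footprint of about 3.3 cm,
# matching the AlphaAir 10 figure quoted above
spot = nadir_footprint_cm(0.33, 100)
```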

Scan speed, pulse repetition rate (PRR), scan point density, and spatial point distribution (pattern) are critical elements that distinguish one LiDAR sensor from another. Scan speed represents the number of scan lines (AKA scans or swaths) per second, while PRR (AKA pulse repetition frequency) represents the number of laser pulses that the sensor emits per second. Scan line spacing on the ground is directly related to the scan speed and drone speed. For example, for the AlphaAir 10 LiDAR, the scan speed can be as high as 250 lines per second. For such a scan speed, the corresponding line spacing on the ground for a drone travelling at 10 m/sec would be 4 cm. PRR, on the other hand, is directly related to the number of measurements per second of the sensor; for a single return, the two are equal if no pulses are lost. Point density refers to the number of points (i.e., measurements) per square meter. Ideally, the point density should be uniform (i.e., the point-to-point distance is more or less constant) and as high as possible to ensure an accurate and detailed DTM of the project area. Typically, however, LiDAR sensors have different scan mechanisms and may not necessarily have the same scan patterns. While the spatial point distribution of some sensors is uniform, other sensors have different scan patterns, including sinusoidal, zig-zag, and elliptical. If the point density is very low, users should additionally consider the scan pattern when examining a LiDAR sensor. This is especially important when scanning an area with a substantial elevation difference (hilly terrain, an open pit mine) or dense vegetation.
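The 4 cm figure follows directly from the definition: line spacing is the distance the drone advances between consecutive scan lines. A minimal sketch:

```python
def line_spacing_cm(drone_speed_m_s, scan_lines_per_s):
    # Distance the drone advances between two consecutive scan lines
    return drone_speed_m_s / scan_lines_per_s * 100.0

# 10 m/s at 250 lines per second gives 4 cm line spacing on the ground
spacing = line_spacing_cm(10, 250)
```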

The PRR is typically used to define the proper drone flying height, drone speed, and point density during data acquisition. As the flying height increases, the energy of the arriving laser pulse becomes weaker, to the extent that some LiDAR sensors would not be able to provide range measurements to low-reflectivity targets (e.g., 10%). Lowering the PRR increases the per-pulse energy, which in turn might help increase the flying altitude. However, a higher PRR allows for a faster drone speed while maintaining an appropriate point density. On the other hand, a low PRR means that we have to fly the drone at a lower speed to maintain a similar point density. This translates to a longer data acquisition time and a more expensive project execution. The AlphaAir 10 LiDAR system, for example, has a high PRR of 500 kHz (500,000 measurements per second, single return) at an altitude of up to 120 m (the maximum in Canada) and a target reflectivity of 10%. This allows the drone to fly at a high altitude and a high speed while maintaining a high point density.
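The trade-off can be made concrete with a rough, idealized point-density estimate that assumes the pulses are spread uniformly over a flat-ground swath of width 2·AGL·tan(FOV/2); the 90° FOV and 10 m/s speed below are illustrative choices, not manufacturer figures:

```python
import math

def point_density_per_m2(prr_hz, speed_m_s, agl_m, fov_deg):
    # Swath width on flat ground for the given FOV, then pulses spread over
    # the area swept per second (speed * swath width)
    swath_m = 2.0 * agl_m * math.tan(math.radians(fov_deg / 2.0))
    return prr_hz / (speed_m_s * swath_m)

# 500 kHz PRR, 10 m/s, 120 m AGL, FOV limited to 90 degrees
density = point_density_per_m2(500_000, 10, 120, 90)
```

With these inputs the estimate comes to roughly 200 points per square meter; halving the PRR (or doubling the speed) halves the density, which is the trade-off described above.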

Some drone LiDAR systems use low-resolution RGB cameras for the purpose of colouring the LiDAR point cloud, while others employ high-resolution RGB cameras. The latter not only provide colouring for the LiDAR point cloud, but can also be used to generate a high-quality orthomosaic of the project area. In fact, combining LiDAR data with high-resolution imaging provides greater advantages than either system alone. To ensure that there are no gaps in the coloured LiDAR point cloud, the camera field of view must be the same as or larger than the LiDAR sensor field of view.

The overall system weight (including all sensors) must also be considered when selecting a drone LiDAR, as it directly affects the flight time: the higher the payload weight, the shorter the flight time. In addition, many users have already acquired the popular DJI M300/M350 and will likely look for a LiDAR system that can be carried by that drone. As per the DJI specifications, however, the DJI M300/M350 can carry a payload of up to 2.70 kg (including all sensors). For that load, the maximum flight time for the DJI M300/M350 is estimated (by DJI) to be 31 minutes. This, however, is estimated for a new set of batteries. In addition, since it is recommended to leave about 15% of the battery charge for any emergency situation, the maximum practical flight time for a 2.7 kg payload will be around 26 minutes. As the batteries age, that time will be further reduced. If the payload weighs more than 2.7 kg, it cannot be carried by the DJI M300/M350 and another drone must be used.
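The 26-minute figure is simply the rated endurance reduced by the recommended 15% battery reserve:

```python
def practical_flight_time_min(rated_min, reserve_fraction=0.15):
    # Usable flight time after keeping a battery reserve for emergencies
    return rated_min * (1.0 - reserve_fraction)

# 31 min rated with a 15% reserve gives about 26 min of usable flight time
usable = practical_flight_time_min(31)
```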

The accompanying processing software, its capabilities, and its ease of use must be considered when comparing different LiDAR+imaging systems. Ideally, the software should be all-in-one, meaning that it is capable of processing the captured raw data (GNSS, IMU, LiDAR, and images) without the need to invest in third-party software or optional add-ons. The software must also be capable of producing an accurate platform trajectory, point cloud and image georeferencing, filtering, and colorization. As well, the software must be capable of handling the layering problems of multiple point clouds (e.g., between flight paths) through an efficient strip adjustment algorithm. Moreover, the software should support rapid generation of digital ortho and 3D models that take advantage of both images and point clouds. Furthermore, the software should support visualization of massive datasets with multiple colorization options, including elevation, intensity, RGB, and others. It must include different tools to check and analyze the obtained results, including trajectory slicing and stratification checking, which allow for detection of misalignments across the entire project. Additionally, it should be capable of verifying elevation accuracy against control points. Finally, the software should be capable of producing multiple accuracy reports to help address quality control issues.

In conclusion, when evaluating a LiDAR system, the combined performance must be considered, including data quality, productivity (coverage) over a specific period of time, and the overall system accuracy or precision. The latter must account for the combined uncertainty, i.e., the contributions of all sensors: positioning, attitude, range, beam footprint, and incident angle, among others, as discussed above. Unfortunately, it is not uncommon to see the uncertainty of the LiDAR “sensor range” conflated with the overall LiDAR “system” (payload) uncertainty. The uncertainty of the sensor “range” concerns the precision of the measured distance, while the uncertainty of the overall LiDAR “system” concerns the precision of the resulting point cloud coordinates. A good strategy for comparing drone LiDAR systems is to collect, process, and analyze actual data with the systems in question, at the same site and under the same conditions. Ideally, the site should be comprehensive, containing heavily and lightly vegetated areas, asphalt, structures, and terrain elevation differences. Part of the analysis should include an accuracy assessment of the coordinates at strategically located check points. In addition, the analysis should include point density verification using non-overlapping (i.e., individual) strips from the resulting point cloud over a flat non-vegetated area (one return), a vegetated area (multiple returns and bare earth), and an area with a substantial elevation difference, to verify whether the point cloud distribution supports the creation of an accurate DTM. Unfortunately, if this is requested as a demo, companies will likely charge fees to execute it!