Preparing and Using Survey Point Clouds for Hull Surface Modelling in PolyCAD

Introduction

This article discusses the process of importing, preparing and using a Point Cloud to produce a hull surface in PolyCAD using the X-Topology tools. The specific objectives and a technical summary of the approach used to support Point Cloud datasets are presented before the process is introduced. The process may be summarised as follows:

  • Importing a Point Cloud dataset contained within a text file, an e57 file or an existing Point Cloud Database
  • Validating and transforming the dataset into the desired position and orientation
  • Optimising the dataset by generating a spatial map
  • Visualising the dataset
  • Generating X-Topology Curves from the Point Cloud
  • Validating the surface against the dataset
  • Reviewing the challenges of automatically fitting a surface to the cloud points

Background

The Challenge

The purpose of the Point Cloud features in PolyCAD is to provide a means to generate hull surfaces from realistic survey scenarios, i.e. when the captured scene contains supporting and work artefacts typical of dry, floating or hard-standing areas. For generic software capable of fitting surfaces to point data this case often represents a challenge: leaving this information in affects the accuracy of the surface, while removing it excludes data that could support the surface definition process even if it is not used directly.

Typical fitting algorithms produce surfaces with definition points that are uniformly distributed over the sampled data. To fit specific shape features with any degree of accuracy it is necessary to use a large number of points. Solid modelling tools may then be required to trim back the edges or pick out features. At the end of this process the resulting definition can be complex, which can limit the range of engineering software applications it can be used in. As the marine industry relies on surface geometry, and there are many good, cost-effective engineering analysis tools that perform adequately with simpler representations, there is a need to efficiently produce hull surfaces that accurately capture shape with an optimal number of definition points.

The Solution

The Marine Industry has been at the forefront of surface representation and definition for hull forms because this information is critical to the design and production of ships. Few applications support Point Cloud datasets since handling the large amount of data involved is a different challenge. PolyCAD's X-Topology Curve and Surface modelling tools implement a marine approach to hull surface definition, using a curve network to manage the generation of surface patches: precise features such as knuckles are captured exactly, large planar areas are represented with minimal patches, and high-curvature areas are represented in more detail. Generating a surface definition using curves allows the modeller to selectively capture the areas of the hull where the survey data is good, ignore those that are poor, and manually create definition in important areas which may be severely disrupted, such as along the keel. The implementation of curve fitting methods such as Least Squares is far simpler than the corresponding case for a surface, and the algorithm is easily extended to incorporate Design Intent concepts in the curve network and traditional smoothing (fairing) techniques. To support the modeller, the surface can be analysed with respect to the Point Cloud at different tolerances. These statistics can be presented graphically to highlight to the modeller areas of the surface that need to be improved, and to communicate to managers or customers the level of accuracy being achieved.

The Technology

Files

The Point Cloud tools inside PolyCAD work on the basis of holding as little data in memory as possible and optimising the file/database operations as much as possible. Database size is limited only by the maximum file size the operating system can support. However, since data is usually loaded in from a text file, and text files are less efficient in storage terms than the binary database, there is an additional limitation. In reality, for hull surfaces very large datasets do not offer any advantage in regard to precision, but they do slow down processing activities and the modeller's appreciation of the geometry contained within.

Visualisation

Point Cloud visualisation is achieved by presenting points as simple pixels; colour or intensity is supported. Visualisation of the entire cloud when modelling a surface is not particularly effective, as it is difficult to perceive depth, and therefore shape. More elaborate, incremental techniques are available in other tools, but these can be disruptive when modelling. Sectioning the Point Cloud to reveal the hull contours is a far more effective way of appreciating shape when working with a hull surface. The maximum size for a single visualisation is 1 million points, limited by the safe maximum quantity of data that can be uploaded to graphics cards by software drivers, around 32 MB. Multiple visualisation volumes are supported and these can be used to vary the level of detail displayed in different areas of the survey data.

Spatial Map

Without structure, large datasets can be slow to access, more so when the data is stored in files. To improve performance, the dataset can be optimised into a spatial map which captures the organisation of the points and records where data is stored. Visualisation and Sectioning operations will run significantly faster when mapped, as it will not be necessary to search the entire dataset. Surface Validation will not function without a spatial map.

In PolyCAD, the Point Cloud is structured using voxels (volume elements / cuboids) arranged in an Octree. This structure can account for the different spatial density of points in space, generating more subdivision as the density increases. Optimising the dataset can take time; grab a coffee. Only a limited amount of organisation can happen in memory, so a number of temporary files are used. Space to accommodate at least twice the size of the dataset needs to be available in the folder containing Windows temporary files for the optimisation to be successful. Optimisation is performed on a copy of the database and the files are swapped if the process is successful. The best time to perform optimisation is directly after the position and orientation of the dataset has been checked and/or updated. However, if the dataset is large and several iterations of transformation are required to precisely position the dataset, it may be best to optimise the dataset both before and after the transformation process.
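The subdivision behaviour described above can be illustrated with a minimal sketch (PolyCAD's internal implementation is not published, so all names and thresholds here are illustrative): a cell holds points until it exceeds a threshold, then splits into eight octant children, up to a maximum depth.

```python
# Minimal octree sketch: a cell subdivides into 8 children when it holds
# more than max_points, up to max_levels deep. Illustrative only.
class OctreeCell:
    def __init__(self, mn, mx, level):
        self.mn, self.mx, self.level = mn, mx, level
        self.points = []          # points held while this cell is a leaf
        self.children = None      # 8 child cells after subdivision

    def insert(self, p, max_points=2000, max_levels=8):
        if self.children is not None:
            self._child_for(p).insert(p, max_points, max_levels)
            return
        self.points.append(p)
        if len(self.points) > max_points and self.level < max_levels:
            self._subdivide(max_points, max_levels)

    def _subdivide(self, max_points, max_levels):
        cx = [(a + b) / 2 for a, b in zip(self.mn, self.mx)]
        self.children = []
        for i in range(8):  # one child per octant
            mn = [self.mn[d] if (i >> d) & 1 == 0 else cx[d] for d in range(3)]
            mx = [cx[d] if (i >> d) & 1 == 0 else self.mx[d] for d in range(3)]
            self.children.append(OctreeCell(mn, mx, self.level + 1))
        pts, self.points = self.points, []
        for p in pts:  # redistribute the held points into the children
            self._child_for(p).insert(p, max_points, max_levels)

    def _child_for(self, p):
        cx = [(a + b) / 2 for a, b in zip(self.mn, self.mx)]
        idx = sum((1 << d) if p[d] >= cx[d] else 0 for d in range(3))
        return self.children[idx]
```

Dense regions trigger repeated subdivision while sparse regions stay coarse, which is why the map adapts to the varying point density of a survey.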

Transformation

The Point Cloud may need to be repositioned and reoriented if it is not aligned with the modelling coordinate system. Initially, transformations are applied to the coordinate data and queries as a matrix operation when the data is interfaced. This allows the cloud to be quickly repositioned without having to update the entire database. The separate Cloud Point Visualisation and Cloud Point Section entities are not updated when the Cloud Point Dataset is transformed, as it is likely that they are no longer positioned correctly. It is best to delete these entities, although they can be manually updated. When the spatial map is generated, coordinate points are transformed during the process and the internal transformation matrix is reset.
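The deferred-transformation idea can be sketched as follows (a simplified illustration under assumed names, not PolyCAD's actual code): a pending 4x4 matrix is composed with each new transform and applied only when coordinates are read, so repositioning never rewrites the stored points. "Baking" corresponds to spatial-map generation, when the points are transformed for real and the matrix is reset.

```python
# Sketch of deferred transformation: a pending 4x4 matrix is composed with
# each new transform and applied only when coordinates are queried, so
# repositioning never rewrites the stored points. Illustrative names only.
def mat_identity():
    return [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_translation(dx, dy, dz):
    m = mat_identity()
    m[0][3], m[1][3], m[2][3] = dx, dy, dz
    return m

class LazyCloud:
    def __init__(self, points):
        self.points = [list(p) for p in points]  # stored data, untouched
        self.matrix = mat_identity()             # pending transform

    def transform(self, m):
        self.matrix = mat_mul(m, self.matrix)    # compose with pending

    def query(self):
        # apply the pending matrix on the fly when data is interfaced
        out = []
        for x, y, z in self.points:
            v = (x, y, z, 1.0)
            out.append([sum(self.matrix[r][c] * v[c] for c in range(4))
                        for r in range(3)])
        return out

    def bake(self):
        # during spatial-map generation the points are written back
        # transformed and the internal matrix is reset to identity
        self.points = self.query()
        self.matrix = mat_identity()
```

Several repositioning iterations therefore cost only matrix multiplications until the map is regenerated.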

Preparing a Cloud Point Dataset for Modelling

There are three options for importing Cloud Point survey data into the application. Data can be imported from text files, from an e57 file or from an existing Point Cloud Database.

Importing Data Points from a Text File

In order to use a laser Point Cloud within PolyCAD, the data needs to be converted into a Point Cloud Database (pcdb). Text file data can be transformed into a Point Cloud Database using the option on the toolbar. Select the Cloud Point Surveys Tab and in the Import / Convert Group there is a button titled Import Text to Database.

Click the button to display the Import Wizard. On the first page the process to import a dataset is introduced.

Click the Next button to display the next page. This page is used to select the text file to import and to display the first 100 lines of the file as a preview of the data structure. Press the Next button to continue.

Next, the first 100 lines of the text file are reviewed to identify coordinate values and any potential colour as RGB or intensity values. An analysis is made to identify the most common number of numerical parameters on each line, considering space, comma or tab separated value formats. These lines are considered to be those that contain coordinate data; lines with a different number of parameters are ignored. The result of this analysis is displayed on the next page, applied to the first 100 lines. Column headers identify X, Y and Z coordinate parameters. Additional columns may contain colour or intensity parameters and may optionally be selected for import by changing the drop-down menu to categorise the parameter. In the case of colour, a selection should be made for all three Red, Green and Blue components. The Next button will be unavailable if an invalid colour selection is made. Points are displayed in a single colour if no colour information is available.
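The column-analysis heuristic described above can be sketched roughly as follows (an illustrative reconstruction, not PolyCAD's code): count the numeric fields on each line, trying the supported separators, and keep only lines matching the most common count.

```python
# Sketch of the import heuristic: find the most common count of numeric
# fields per line (trying whitespace, comma and tab separators) and treat
# only lines with that count as coordinate data. Illustrative only.
from collections import Counter

def numeric_fields(line):
    best = []
    for sep in (None, ',', '\t'):      # None splits on any whitespace
        fields = line.split(sep)
        vals = []
        for f in fields:
            try:
                vals.append(float(f))
            except ValueError:
                break                   # a non-numeric field disqualifies
        else:
            if len(vals) > len(best):
                best = vals
    return best

def coordinate_lines(lines):
    counts = Counter(len(numeric_fields(l)) for l in lines if l.strip())
    common = counts.most_common(1)[0][0]
    return [numeric_fields(l) for l in lines
            if l.strip() and len(numeric_fields(l)) == common]
```

A header row or comment line yields zero numeric fields, so it falls outside the common count and is ignored, matching the behaviour described for the wizard.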

Click the Next button. If colour or intensity information is available and is selected to be imported, the next page provides the opportunity to define the mapping of the colour parameter value ranges to RGB or Intensity. Select the range of values that is most appropriate based on experience and a review of the preview data. Click Next to continue. Note that if colour or intensity is not available this page is skipped.

The next page defines the Point Cloud Database file name. Click the Convert button to begin the import process. Checking the Automatically Import Cloud Point Dataset into PolyCAD option will automatically generate the Cloud Point Dataset entity in the current project once this process is completed. The import process will take time. Once complete, click Finish.

Importing Laser Scan from an e57 format file

The e57 format is a digital data file containing both points and associated image data. The point data can be stored in the file as floating point or as compressed integer data. One of the benefits of the e57 format is that the contents of the file are described, avoiding the need to define what each data stream represents and, in the case of colour or intensity, to define the valid range of the data. E57 data files are transformed into a Point Cloud Database using the option on the toolbar. Select the Cloud Point Surveys Tab and in the Import / Convert Group there is a button titled Import E57 to Database.

Click the button to display the Import Wizard. On the first page the process to import a dataset is introduced.

Click the Next button to display the next page. This page displays a summary of the datasets contained within the file: whether they are saved as Cartesian or spherical coordinates, whether RGB or intensity data is associated with the points, and whether the dataset has an associated transformation. Individual datasets can be removed from the import using the checkbox selection on the left. Press Next to continue.

Next, a range of options is presented, which includes outputting the file data to a text file and displaying the XML definition of the e57 data embedded in the file, in addition to the option to convert the data to the Point Cloud Database.

The first option will create a Point Cloud Database. The file name is defined and the process to convert the data can be started. By default, the data will be automatically added to the 3D Model but this can be disabled using the provided checkbox. Once the conversion has been completed, press the Close button to return to the 3D Modelling environment.

The second option outputs the e57 data to a text file to check the quality of the data and, if there is an error in decompressing the data, to identify where in the data stream this takes place. Set the name of the text file and press Convert to perform the transformation.

The final option will display the XML definition of the e57 file, allowing the contents of the definition to be reviewed. Of particular interest is the data which identifies the number of points and the contents of each block of data. This can be used to confirm that the data has been transformed into a Point Cloud Database correctly.

Importing an Existing Cloud Point Dataset

If a Cloud Point Dataset is already available, select the Cloud Point Surveys Tab and in the Cloud Points Group there is a button titled Cloud Point Dataset.

In the Attributes window, the following Form (pictured left) is displayed. Click the Select Dataset File button to select the Point Cloud Database File. The Form (pictured right) will show summary information about the Dataset covering the number of cloud points, the number of spatial map cells, and the availability of the visualisation dataset, colour and transformation information. Click the Create Cloud Point Dataset button to continue.

The main visualisation is stored in a separate file and may not be present. If so, a box with extents equal to the dataset will be displayed.

Creating the Visualisation Dataset

The Main Visualisation Dataset is a subset of the overall point cloud dataset. If there is no spatial map it is uniformly sampled from the dataset in the order the points appear in the database. If there is a map then uniform sampling takes place from the map.

The visualisation is generated by specifying the number of points to be shown in the Sample Count. Ultimately, this number is determined by the graphics performance of the computer the software is running on. However, as the number of visible points increases the scene can start to lose definition as it becomes a solid block of colour. Once a value has been entered, click the Sample Visualisation Points button.
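Uniform sampling in database order, as described above, amounts to taking an evenly spaced subset of point indices. A minimal sketch (illustrative only; the function name and strategy are assumptions, not PolyCAD's implementation):

```python
# Sketch of uniform sampling for the visualisation subset: choose evenly
# spaced point indices in the order points appear in the database, so the
# sample count controls the density shown on screen. Illustrative only.
def sample_visualisation(total_count, sample_count):
    """Return the indices of points to load for display."""
    if sample_count >= total_count:
        return list(range(total_count))     # small cloud: show everything
    step = total_count / sample_count       # fractional stride
    return [int(i * step) for i in range(sample_count)]
```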

Clipping the Visualisation Dataset and Geometry Sampling

The Point Cloud dataset can be spatially clipped using a box to eliminate areas of the point cloud that are not required or to focus in on particular areas of detail. This feature is used to limit the dataset visualisation and, optionally, the range of data sampled from the point cloud by analysis features. It does not remove any data stored in the point cloud database.

The Clipped Volume is specified using the values in the Clip Volume group. Enable the check box to turn on Clipping. Additionally, the clip volume can be interactively manipulated in the graphics view during graphical editing; access this through the F2 shortcut or 'Cloud Points Dataset -> Edit' on the Right-Click menu. A box volume representing the clipping limits can also be drawn interactively by clicking the Select button in the Clip Volume Group. Once the Clip Volume has been determined, or if clipping is to be removed, press the 'Update Visualisation' button.

To display colour, check the Shade Point Data option in the Visualisation Colour Adjustment group. If the captured colour is poor, the brightness and contrast may be adjusted to try to improve quality. The Enhance Colour Intensity option remaps the colour to a red-green model and may highlight points which match the background colour.

Spatial Optimisation Map

The Spatial Map optimises the database into a structure where the position of points in particular areas can be quickly identified in the file. The map significantly improves the time required to generate a section or visualisation volume and is necessary for the surface validation. To begin the optimisation process click the Generate Map… button in the Dataset Optimisation Map group.

The Spatial Map Generation form is used to define the parameters used to generate the map and to display the analysis when the parameters are applied to the data. The parameters control:

  • The size of the initial grid cells
  • The number of points in a cell before subdividing to a new tree level
  • The maximum number of tree levels

Using these parameters we want to obtain a spatial map where:

The number of cells (Initial Cell Count) is between 20000 and 30000. The approximate Initial Cell Size is chosen automatically but can be adjusted.

The maximum number of cells falls in neither the first nor the last level (left bar chart); this chart should be hump-shaped.

Any cells with exceedingly high numbers of points are eliminated by subdividing and generating more levels. Ideally, we want around 2000 points per cell, but if the dataset is large this may need to be increased to avoid requiring more than 8 levels; for small datasets, this value can be reduced. In this respect, the points per cell bar chart should show that as the number of points per cell increases, the number of populated cells decreases. Clicking the Analysis Point Cloud button performs the analysis and reports back.

Once a set of parameters has been identified, press the Sort Point Cloud button. This process will take some time.

Transforming the Dataset

If the dataset is not in the correct position or orientation, it can be moved using the standard transformation tools available from any of the main tab pages. Transforming the dataset with respect to some geometric feature can be achieved by introducing a Cloud Point Section at the location so that the points can be snapped to. A potential workflow for lining up the baseline in the Point Cloud with the modelling environment is as follows.

Two Cloud Point Sections will be used to identify the baseline. Cloud Point Sections offer snapping, allowing a Polyline to be drawn between them. By rotating the Polyline so it faces along the X-axis we orientate the Point Cloud within the software. Subsequently, translation can be used to line up the position of the Point Cloud.

Create two Cloud Point Sections, both of which intersect the baseline in the Point Cloud. One section should be as close as possible to an origin such as the aft perpendicular, although in the example below the baseline on the hull does not extend that far aft.

Next, draw a Polyline snapped to the two Cloud Point Sections along the baseline.

Hide the Cloud Point Sections to avoid snapping to points. Select the Cloud Point Dataset and the Polyline. Using the Rotation Transformation, snap the Edit Handle to the aft end of the Polyline. In the transformation tool, click the Align Handle button. Using the mouse, select the Edit Handle arc closest to the nearest axis to the baseline and rotate the Edit Handle so that it snaps and aligns with the Polyline. Repeat for the other arc adjacent to the axis.

Click the Align Handle button to turn the alignment mode off and click Transform to World to apply a transformation which rotates the Edit Handle into the World frame of reference. Click the Rotate Selection button to perform the transformation and finish.

Select the Cloud Point Dataset and Polyline and use the Translation tool to move the Cloud Point Dataset into position. Plane and Polyline tools may be used to project positions from the Point Cloud onto the baseline to get the precise longitudinal position lined up.

Cloud Point Sections

Cloud Point Sections create sectional cuts through the dataset allowing visualisation, digitisation and curve generation. Cloud Point Sections are generated by taking a planar cut through the cloud and selecting points within a certain distance of the plane (half of the specified thickness parameter). Cloud Point Sections can be based on X, Y or Z Planes or referenced to Plane entities within the model, allowing other definitions. Polylines can be generated from Cloud Point Sections by editing (F2) and dragging a rectangle to select the points to generate the curve.
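The selection rule described above (points within half the thickness of the cutting plane) can be sketched directly; this is an illustrative reconstruction, with the plane given as a unit normal and offset:

```python
# Sketch of planar sectioning: keep points whose signed distance to the
# cutting plane is within half the section thickness. The plane is given
# by a unit normal (nx, ny, nz) and offset, so n.p - offset is the signed
# distance of point p from the plane. Illustrative only.
def section_points(points, normal, offset, thickness):
    half = thickness / 2.0
    nx, ny, nz = normal
    return [p for p in points
            if abs(p[0] * nx + p[1] * ny + p[2] * nz - offset) <= half]
```

For an X Plane section at station x = 1.0, for example, the normal is (1, 0, 0) and the offset is 1.0, so only points whose x coordinate lies within half the thickness of the station survive.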

Note that Cloud Sections are not stored and are automatically regenerated when the geometry file is loaded.

To create a new Cloud Point Section, select the Cloud Point Surveys Tab and in the Cloud Points Group there is a button titled Cloud Point Sections.

The Create Cloud Section tool will appear in the Attributes window. Specify the plane type and position in the Intersection Group. This can be done numerically using the screen controls, or the plane can be interactively placed using the buttons with arrow icons. Cloud Sections can be defined on the principal planes or can reference a Plane entity. The thickness of the section and the size of the map may be selected in the Cloud Section Parameters. A check box is provided to clip the data to the same volume as the visualisation. The position of the section plane is indicated graphically on screen as a bounding rectangle displayed in the 3D view. Press the "Generate Cloud Section" button to generate the Cloud Section Entity once the parameters have been set up.

Individual curves (Polylines) can be generated from the section by selecting points to sample in Edit Mode, i.e. the F2 shortcut or 'Cloud Points Section -> Edit' on the Right-Click menu. The mouse may be used to drag and select areas of the sectional cut. The Shift and Ctrl keys may be used to make additional selections. If the Cloud Point Section Map is visible, cells of the grid may be added to the selection by clicking on them. This approach allows the inclusion of unwanted cloud points to be avoided. Once a suitable set of points has been selected, a Polyline can be generated by pressing the "Generate Polyline" button in the Entity Properties.

The limits of a Cloud Point Section can be clipped to Auxiliary Cloud Visualisation entities.

Auxiliary Cloud Visualisation

The Auxiliary Cloud Visualisation entity provides further views of the Point Cloud without continually resampling the main visualisation to focus in on detail. The entity has all the same visualisation capabilities as the Cloud Point Dataset. Furthermore, the limits of an Auxiliary Cloud Visualisation may be used to limit the extents of a Cloud Point Section. Consequently, Visualisation and Section entities may be used in combination to develop an insight into a particular area of interest without affecting entities already set up to visualise the overall survey.

Sampling the Point Cloud when Fitting X-Topology Curves

Cloud Point Datasets can be sampled when generating X-Topology Curves using the Least Squares fitting tools. This tool provides the "best of all worlds" from the point of view that it can fit curves, selectively, to any geometry including point clouds, and the resulting curve or curve segments can be reviewed in respect of curvature quality or deviation from the original data. A choice can then be made whether to reselect the fitting plane or the intersection geometry, change the number of control points or apply smoothing.

The X-Topology Least Squares Curve fitting tool may be found on the X-Topology Surface Modelling Toolbar in the X-Topology Curve Group with the caption "Intersect Least Squares". Once an intersection plane has been defined, the option to intersect the point cloud with a Cloud Point Section and add the selected points to the fitting selection becomes available. Information on modelling with X-Topology should be referenced for more detail on surface modelling.

Validation

While comparing the surface visually with the Point Cloud provides an impression of accuracy, only quantitative analysis can provide actionable feedback across the surface to support decisions such as when the surface is within tolerance, or which areas of the surface to focus attention on when it isn't. Using a similar approach to isophote and surface curvature rendering, the distance of the surface from the point cloud can be displayed using coloured shading.

Analysis Process

To present a view of accuracy considering the amount of data available it is necessary to use statistics. It is possible to use more elaborate analyses, but using approaches and terms, or presenting results, that are unfamiliar to users is unhelpful. Furthermore, considering the amount of data involved it is necessary to use algorithms that can process the surface in an acceptable amount of time. The approach samples the point cloud at each visual mesh point derived from the surface. The cloud points in the vicinity of each mesh point are selected, using a box volume with dimensions defined by the user, and the distances normal to the surface are analysed to develop the mean, standard deviation, minimum and maximum statistics. This works well across most of the surface, but results can become confused along the stem, where it is possible to sample across features such as the centre line, or around sharp knuckles such as the transom. Presently the software does not detect these scenarios, although there is an option to prevent point sampling across the centre plane. As there may be many scenarios where either the surface intentionally differs from the Point Cloud survey data or sampling captures additional points of the cloud, an additional Form providing a deeper selection of results is provided, which displays graphical comparisons of intermediate analysis information at Surface, individual Patch and mesh point level.
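The per-mesh-point statistics described above can be sketched as follows (an illustrative reconstruction with assumed names; PolyCAD's implementation details are not published): collect cloud points inside a user-sized box around the mesh point, project each onto the surface normal, and summarise the signed normal distances.

```python
# Sketch of the per-mesh-point analysis: select cloud points inside a box
# around the mesh point, project each onto the surface normal, and report
# mean, standard deviation, min and max of the signed normal distances.
# Illustrative only; "normal" is assumed to be a unit vector.
import math

def sample_statistics(mesh_point, normal, cloud, box_size):
    half = box_size / 2.0
    dists = []
    for p in cloud:
        # box selection around the mesh point
        if all(abs(p[d] - mesh_point[d]) <= half for d in range(3)):
            # signed distance of the cloud point along the surface normal
            dists.append(sum((p[d] - mesh_point[d]) * normal[d]
                             for d in range(3)))
    if not dists:
        return None  # no data here: such regions render transparent
    mean = sum(dists) / len(dists)
    var = sum((d - mean) ** 2 for d in dists) / len(dists)
    return {"mean": mean, "std": math.sqrt(var),
            "min": min(dists), "max": max(dists)}
```

The sign of each distance is what allows the sided (signed) presentations, and the min/max pair is what the "within cloud" shading tests against.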

Results

Results are presented as a shaded surface and a number of different options are available:

Absolute Distance: shades the surface based on the absolute distance of the surface to the mean distance of cloud points sampled normal to the surface. This provides an overall impression of the accuracy of the surface.

Sided Distance: shades the surface based on the distance of the surface to the mean distance of cloud points sampled normal to the surface, and uses different colours depending on the sign of the distance. This highlights, in addition to accuracy, the variation of the surface fit on each side of the cloud point 'surface'.

Absolute Standard Deviation: similar to Absolute Distance except that it accounts for the level of noise or scatter in the point cloud. It may be necessary to use Standard Deviation in cases where noisy data does not provide a high level of statistical confidence for the selected tolerance.

Sided Standard Deviation: combines the concepts of the Sided Distance analysis using Standard Deviation as a measure.

Within Cloud (Absolute): shades green if the surface is found between the minimum and maximum samples from the point cloud, red otherwise.

Within Cloud (Sided): shades green if the surface is found between the minimum and maximum samples from the point cloud; otherwise blue if the surface distance is less than the minimum sample, red if the surface distance is greater than the maximum sample.

Results can be presented in two (absolute) or three (sided) colours highlighting within or outside of tolerance. The presentation can also be shown as banded colours, where the tolerance value controls the thickness of each band. This presentation provides a finer comparison of the distances between the surface and the point cloud across the whole domain. Regions of the surface where there isn't any point cloud data to validate accuracy are rendered transparent.
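The banded presentation amounts to mapping each distance to a band index, with the tolerance value as the band width. A minimal sketch (the function and clamping behaviour are illustrative assumptions, not PolyCAD's code):

```python
# Sketch of banded shading: the tolerance value sets the width of each
# band, and the magnitude of the surface/cloud distance selects a band
# index into a small colour palette. Illustrative only.
import math

def band_index(distance, band_width, band_count):
    """Return which colour band a signed distance falls in."""
    band = int(math.floor(abs(distance) / band_width))
    return min(band, band_count - 1)  # clamp distances beyond the last band
```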

Validating the surface

To present the validation, create a Cloud/Surface Comparison Entity which may be found on the Cloud Point Surveys Toolbar in Cloud Points Group.

Select the Dataset and Surface to compare. If there is only one Dataset and only one Surface they will be selected by default.

Since it takes some time to analyse the cloud dataset against the surface, the entity will initially display as the outline of the surface patches. This is also the case when reloading from file, as the results are not stored. Press the Analyse button to generate the results. A progress window will be displayed indicating the status of the analysis. The size of the sample volume "window" can be adjusted. Sampling can also be prevented from analysing across the centre plane when the surface normal is at greater than the given angle to the centre plane.

Once analysed, the display options become available. As illustrated in the previous section, the surface can be shaded to indicate the tolerances in terms of absolute and sided (signed) distance, absolute and sided (signed) standard deviation, and whether the surface lies within the 'thickness' of the cloud. The tolerance is specified in the Tolerance field. The accuracy can also be displayed as coloured bands by ticking the check box; in this case, the Tolerance field specifies the width of the bands and the number of bands is defined in the following field. The shading colour model is selected in the Shading menu.

Several view options are available. The Mesh option displays the raw geometry of the surface and may be turned off once analysis information is available. The Analysis Shading check box controls whether shading is displayed. The Mean Patch Surface option displays the surface geometry corrected to the mean position; displaying this option alongside the original mesh provides another visual indication of the differences between the surface and the cloud points. The final option, Min/Max/Mean, displays at each surface mesh point the mean, minimum and maximum positions of the sampled data. This option can display a significant amount of information, which will slow down the software, especially for surfaces with large numbers of patches. The same information can be displayed for individual patches and points using the Analyse Surface form accessible from the Global Statistics group.

The Global Statistics group presents a numerical appraisal of the surface accuracy indicating the overall Mean, Min and Max Error, Standard Deviation and 95% confidence interval. The percentage of the surface within a particular tolerance is also listed.
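The global appraisal can be sketched as follows (an illustrative reconstruction; the 95% interval is computed here as mean plus or minus 1.96 standard deviations, which assumes roughly normally distributed errors and may differ from PolyCAD's exact method):

```python
# Sketch of the global statistics: pool the per-point errors and report
# mean, min, max, standard deviation, a 95% interval (mean +/- 1.96 sigma,
# assuming roughly normal errors) and the percentage within a tolerance.
# Illustrative only.
import math

def global_statistics(errors, tolerance):
    n = len(errors)
    mean = sum(errors) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
    within = 100.0 * sum(1 for e in errors if abs(e) <= tolerance) / n
    return {"mean": mean, "min": min(errors), "max": max(errors),
            "std": std, "ci95": (mean - 1.96 * std, mean + 1.96 * std),
            "pct_within_tolerance": within}
```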

Deeper Analysis

While the main graphical area of PolyCAD provides visual feedback on the tolerance of the surface, it is sometimes necessary to ask deeper questions about what is happening in a specific area or around a specific feature of the surface. Each location sampled on the surface uses a cuboid with its long axis parallel to the surface normal to collect points. This selection process is simplistic and may collect points from parts of the cloud that are not directly adjacent to the surface area; for example, points may be sampled from the other side of a knuckle if the corner angle is particularly large. In these situations the opportunity to visually check the position of the cloud points against the surface in selected areas provides an insight into the tolerance information presented in the main graphics view. This appreciation may be used to tune the size of the sample window volume to reduce the sampling of cloud points which aren't directly adjacent to the areas of the surface being analysed.

The Surface/Cloud Point Analysis Form allows Cloud Points and the Surface to be displayed together. Individual patches can be selected for further analysis, and patches can be visualised sequentially by stepping through the collection. Visualisation of the point cloud is constrained to the volume immediately around the patch.

Once a patch is displayed, the analysis can look at the information associated with an individual sample point. In addition to the features available in the main graphics display, it is possible to display the Cloud Dataset Map cells associated with the point, the size of the captured volume and the captured sample points.

So why not fit a "Mean" Surface to the Cloud Points?

Considering that a "mean" surface can be calculated, why not fit the surface directly to the Cloud Dataset? Certainly this is mathematically possible, and a surface can be fitted when the data is distributed tidily. However, the data is rarely tidy, or sometimes even complete; there are areas of the Cloud which the modeller may choose not to fit to, for example around appendages; and there are other areas where calculating the mean position is a challenge, such as around larger angled knuckles.

From a technical perspective these challenges may be addressed by exploring the use of feature detection and by using algorithms that are tolerant of missing data. However, it then becomes necessary to direct the software by identifying areas of the Cloud that should be fitted directly and those that should be ignored. This adds an overhead and additional complexity to the user experience. Poor surface quality may be expected as the surface transitions between well-defined areas of fit and others where there is limited data.

These types of challenges are already highlighted by the X-Topology Surface Fit entity, which fits a surface to the X-Topology Lofting Curves. X-Topology Lofted Curves create far higher quality sample data compared with a Cloud Point Dataset. However, small surface features can often fall within the spacing between two lofting curves, producing no fit data for the associated surface patches. It is possible to avoid bad patches by using specific fitting algorithms, but this introduces the situation described above where the majority of the surface is accurately supported and of good quality but there may be other areas which aren't. In this case, as the features are small, it can be difficult to highlight this to users in an obvious and informative manner.