Citation
Structured light system design for nondestructive evaluation imaging

Material Information

Title:
Structured light system design for nondestructive evaluation imaging
Creator:
Buggaveeti, Praneeth
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
Language:
English

Thesis/Dissertation Information

Degree:
Master's (Master of Science)
Degree Grantor:
University of Colorado Denver
Degree Divisions:
Department of Electrical Engineering, CU Denver
Degree Disciplines:
Electrical engineering
Committee Chair:
Deng, Yiming
Committee Members:
Connors, Dan
Liu, Chao

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.



Full Text
STRUCTURED LIGHT SYSTEM DESIGN FOR NONDESTRUCTIVE
EVALUATION IMAGING by
PRANEETH BUGGAVEETI
Bachelor of Science, Jawaharlal Nehru Technological University, 2011
A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Master of Science, Electrical Engineering
2016


This thesis for the Master of Science degree by Praneeth Buggaveeti has been approved for the Department of Electrical Engineering by
Yiming Deng, Chair
Dan Connors
Chao Liu
April 29, 2016


Buggaveeti, Praneeth (M.S., Electrical Engineering)
Structured Light System Design For Nondestructive Evaluation Imaging
Thesis directed by Assistant Professor Yiming Deng
ABSTRACT
In this thesis project, we developed and implemented a prototype for the evaluation of natural gas plastic pipes. We implemented it on a Raspberry Pi for experimental analysis using real-time processing. The platform consists of a 3-inch-diameter plastic module built using a 3D printer, which holds a structured light laser source, a fisheye camera, and an embedded board for processing and recognizing defects of the inner pipeline structure that is not easily accessible to human inspectors. The embedded machine acquires the data from the connected camera via serial bus and runs the 3D rendering algorithm. The algorithm is designed to process the data in real time and render all 2D frames into a 3D pipe with exactly the same features as the physical pipe. The prototype was developed level by level. First, we used an optical camera with a ±45 degree field of view that could cover only part of the structured light. Second, we used a fisheye camera with a ±90 degree field of view, which captures the desired complete ring and serves as the input to the developed rendering algorithm. In this method the motion of the camera within the pipe is assumed to be constant (10 frames per cm). Finally, the location of cracks within pipes can be detected, and the size of cracks can also be estimated from the number of frames. The profile of cracks can easily be seen on the rendered 3D structure. The Raspberry Pi limits the use of the prototype in long pipes because, as the length increases, the volume of data grows beyond what the Pi's limited computational power can handle. Future work includes the use of a graphics processing unit for faster processing and rendering.


The form and content of this abstract are approved. I recommend its publication.
Approved: Yiming Deng


ACKNOWLEDGMENT
I would like to express my deepest thanks to my advisor, Dr. Yiming Deng, for the valuable guidance and suggestions he has given me in this project. Without him, this would have been impossible. I also thank the committee members, Dr. Dan Connors and Dr. Chao Liu, for spending their valuable time reviewing this report and attending my defense.
I would like to express gratitude to my mother Vijaya Laxmi Buggaveeti, my father Phaneendra Babu Buggaveeti and my brother Sravan Kumar Buggaveeti for their encouragement and support throughout the project.
I would also like to thank Deepak Kumar and Bhanu Babaiahgari for explaining some topics related to this project.


DEDICATION
The thesis is dedicated to my family and entourage who have encouraged me to go further during my life.


TABLE OF CONTENTS
Chapter
1. Introduction............................................................. 1
1.1 Background ........................................................ 1
1.2 Objectives......................................................... 5
1.3 Scope.............................................................. 6
2. Non-Destructive Evaluation............................................... 8
2.1 Overview........................................................... 8
2.1.1 Limitations of Non-Destructive Testing...................... 9
2.1.2 Conditions For Effective Nondestructive Testing............ 9
2.2 Testing Methods in Nondestructive Evaluation...................... 10
2.3 Testing Method Implemented in this work........................... 14
3. Data Processing Using Python and OpenCV................................. 18
3.1 Background ....................................................... 18
3.2 Flow of Algorithm................................................. 18
3.2.1 Flow of Algorithm for Distance Estimation ................. 19
3.2.2 Operating System and Library Setup......................... 20
3.3 Python with OpenCV Library........................................ 20
3.3.1 Gray Scale Conversion...................................... 21
3.3.2 Numerical Python........................................... 21
3.3.3 Python Imaging Library..................................... 23
3.4 Morphological Operations.......................................... 24
3.4.1 Dilation................................................... 25
3.4.2 Erosion.................................................... 26
3.4.3 Thresholding............................................... 27
3.5 Visualization Tool Kit............................................ 29
4. Structured Light System Development For Sensing......................... 33


4.1 Structured Light.................................................. 33
4.2 Sensing Module.................................................... 35
4.3 Computational Machines............................................ 36
4.3.1 Computational Machine Specifications: Apple MacBook (Intel Core i5) 36
4.3.2 Computational Machine Specifications: Raspberry Pi (ARM 7)... 37
5. Results................................................................. 40
5.1 First Generation Results.......................................... 40
5.2 Second Generation Results......................................... 45
6. Limitations............................................................. 52
7. Conclusion And Future Work.............................................. 56
References................................................................. 58
Appendix
A. Algorithms For Capturing and 3D Rendering............................... 60
A.1 Algorithm for Distance Estimation.................................. 60
A.2 Algorithm for 3D Rendering......................................... 64
A.3 Algorithm for Crack Detection...................................... 66


LIST OF FIGURES
Figure
1.1 Crack Developed on Pipe ................................................ 1
1.2 Pipeline Injection Gauge (PIG).......................................... 2
1.3 Sewer scanner and evaluation technology (SSET).......................... 3
1.4 Proposed Setup.......................................................... 4
2.1 Structure Of the System................................................ 15
2.2 Illustration of Structured Light....................................... 16
2.3 Structured Light Working Principle..................................... 17
3.1 Flow Of Algorithm...................................................... 18
3.2 Flow For Distance Calibration.......................................... 19
3.3 a.Color Image b.Converted Gray Image................................... 21
3.4 Black and White........................................................ 23
3.5 a.Square 4x4 Element b.Diamond-shaped 5x5 element c.Cross-Shaped 5x5
element................................................................ 24
3.6 a.Original image b.Kernel or Structural element, 1 = origin c.Resulting image after dilation................................................... 26
3.7 a.Original Image b.Kernel or structural element, 1 = origin c.Resulting
Image After Erosion.................................................... 27
3.8 a.Binary b.Binary Inverse c.Trunc d.Trunc Inverse e.ToZero f.ToZero Inverse ................................................................ 29
3.9 VTK Rendering.......................................................... 30
3.10 VTK Flow Chart......................................................... 30
3.11 VTK Rendering Window................................................... 32
4.1 Structured Laser Source................................................ 33
4.2 a.Laser Projection Inside Pipe b.Reconstructed Image .................. 34
4.3 a and b Shows the Fish Eye Camera Module............................... 35


4.4 Poor Resolution at Marginal Region ................................. 36
4.5 Raspberry Pi Board.................................................. 37
5.1 a.First Generation Prototype b.The Captured Frame................... 40
5.2 a.The holes on the surface as defects b.The linear cracks in the specimen 41
5.3 a and b: Reconstructed images using first gen prototype............. 42
5.4 a. Results over Mac b. Number of frames vs Processing Time.......... 43
5.5 Prototype Setup with large View Angle Camera........................ 44
5.6 Complete Ring Captured ............................................ 44
5.7 Reconstruction results from 200 Frames ............................. 45
5.8 a. Results over Pi b. Number of frames vs Processing Time........... 46
5.9 a and b Represents the Reconstruction Results on Raspberry Pi....... 47
5.10 Processing time comparison between Raspberry pi and macOSX.......... 48
5.11 Equal Sized Slices 28cm long a) 0-7 cm b) 7-14cm c) 14-21cm......... 49
5.12 Object Under Test for Distance Calculation.......................... 50
5.13 Comparison between the actual distance and measured distance........ 50
6.1 Computational time vs. Resolution (Raspberry Pi).................... 52
6.2 System Design....................................................... 53
6.3 System Design....................................................... 54
6.4 CPU Load Vs. Number Of Frames..................................... 54
6.5 Memory Usage Vs. Number Of Frames................................. 55


1. Introduction
1.1 Background
The underground pipeline network is one of the most critical infrastructure systems. Many of the pipeline systems in use today were installed as long as 100 years ago and are reaching their design lifetime. The structural integrity of these pipelines is decreasing due to corrosion, deterioration, or human error. Because of the fast degradation of pipeline systems, regularly assessing their condition is a critical task, so the condition of the pipeline must be checked on a regular basis. There are many pipeline inspection techniques. Lack of inspection, or delays in inspection cycles, has caused severe problems such as the cracks shown in figure 1.1 [1] [2]. Severe damage can be minimized or prevented with timely and accurate checkups. To keep the system working efficiently, we need to acquire up-to-date and reliable pipeline condition data using suitable and efficient methods. Detected defects are then evaluated to support maintenance and repair decisions. Pipeline inspection and condition assessment technologies are improving daily.
Figure 1.1: Crack Developed on Pipe
Companies are developing a new generation of equipment that involves improved data acquisition techniques and deploys multiple sensing modalities for acquiring more accurate inspection data [3]. Many systems are still in use, such as the Pipeline Injection


Gauge (PIG), shown in figure 1.2 [4], which is a smart, intelligent inline inspection tool sent through the pipe via the circulation of fluid or gas. This system is fully automatic and reliable in hostile environments, but it cannot be supervised while under test, and its speed depends on the medium under test. Sewer Scanner and Evaluation Technology (SSET) is another technology for pipeline damage detection [5]. It obtains images of the interior of the pipe, as in figure 1.3 [6]. It has been tested in the United States and has enabled improved research in automated defect classification thanks to its improved image quality compared to conventional CCTV inspection tapes. However, SSET has not been widely adopted because of its higher inspection cost, longer inspection duration, and a few other implementation issues [7].
Figure 1.2: Pipeline Injection Gauge (PIG)
In this thesis work, we developed a system with automated pipe crack detection capability to enable collecting and interpreting pipe condition data under varying pipe materials, colors, and crack patterns. Automated pipeline defect classification has been a subject of intense study in recent years [7], though the existing research has been limited by data acquisition techniques, image analysis, and pattern recognition approaches.
Image processing is a technique widely used in the pipeline inspection field within the professional domains of the pipeline network industry. The proposed image processing technique involves five major steps: acquiring the images, processing the images,


Figure 1.3: Sewer scanner and evaluation technology (SSET)
segmenting the images, feature extraction, and pattern recognition. The purpose of such techniques is to investigate the inner surface of the pipeline by tracing any defects. Different image processing tests and experiments need to be carried out in order to ensure that the developed algorithm is robust enough. To make the system work in real time, we have to implement it on an embedded systems platform. This is the main challenge of designing an embedded system, and achieving the required results remains a tough task. Optimizing the design and formulating the steps to achieve good results is the goal, and implementing it on such systems is critical since it reduces the cost and the time required to arrive at a solution [8].
We have seen growing research interest in the inspection of bridges, pipelines, and other civil infrastructure. Research in structure inspection using a combination of embedded systems with optical imaging techniques has resulted in our prototype. Pipe inspection is not a new topic in the field of academia and industry [8]. Various methods have been proposed and developed for inspecting environments not accessible to direct human observation. Many embedded systems have been developed for pipeline inspection, based on how they are used and on what type of materials the inspection is performed. Our research led us to utilize a structured light system combining optical image processing, a circular laser, and a camera


attached to an embedded system board, as shown in figure 1.4. The overall methodology combines specific information about the pipeline, vision tools, and a well-established pairing of camera and laser adapted to pipeline inspection problems. The embedded board was implemented in this prototype for visual and nondestructive testing of the pipe structure.
Figure 1.4: Proposed Setup
We developed an image processing system in Python and implemented it on an embedded platform, the Raspberry Pi, for real-time analysis. The benefits of using the Raspberry Pi as an embedded platform and applying image processing techniques on it are the main focus. The software and hardware installation for a specific environment, i.e., suitable for different conditions of the inner surface of the pipe, is a scientific achievement toward designing more complicated systems.


1.2 Objectives
Pipelines are costly investments; to achieve uninterrupted service, they should be well maintained for decades. Continuously monitoring the condition of the pipeline is integral to avoiding unforeseen repair and replacement expenditures. Structured light is one such technology, and it is progressing considerably in the areas of early detection of damage and maintenance of pipeline integrity.
The nondestructive diagnostic possibilities of structured light are countless, but the techniques that are most impactful are the ones that could potentially save human lives from major disaster, like the structured light used in the medical field for minimally invasive surgical imaging. Optical imaging requires scientists to research, analyze, and evaluate in order to arrive at optimal solutions. The challenge, ambiguity, lack of rules, and freedom to create things is what makes science and engineering fun. The struggle to create superior imaging diagnostics can lead to fewer lives lost to severe failures. The objective of this MS thesis research at LEAP is to develop a prototype using structured light and a Raspberry Pi that provides 3D data with simultaneous temporal and spatial measurement capabilities. This imaging technique is an effort to satisfy the GTI-supported research project on the residual strength and remaining useful life analysis of vintage pipeline materials.
The GTI-supported research is an effort of the University of Colorado Denver that aims to develop a new hybrid sensing technique that can identify and proactively characterize injurious pipe body conditions with superior resolution and high sensitivity. The detection results from the proposed nondestructive evaluation sensing methodology will be integrated with embedded systems for accurate time-dependent reliability analysis. Effective reliability analysis using nondestructive evaluation techniques requires a fused information framework that involves sensing and image processing.


If successful, the defects can be significantly reduced with this innovative pipeline defects diagnosis and prognosis approach.
The overall objective of this research is to design and implement a real-time image processing system in Python on a Raspberry Pi for detecting existing micrometer- to millimeter-scale cracks in pipes. The objective of this thesis is to develop the system to image in real time using the Raspberry Pi.
1.3 Scope
Structured light NDE imaging is a science that provides a quick and nondestructive way to evaluate and assess material properties. Choosing the right nondestructive evaluation technique depends on the type of material as well as the size, orientation, and location of defects. Whether the defects are located on the surface or internally also affects the selection of the technique, as does the geometric shape of the material under test. All these factors determine the type and equipment of the structured light imaging source. Constructing an imaging system should be based on the properties of the device under test and the location of the flaws that need to be imaged. The system is optimized through experimentation and evaluation.
A structured light source, a camera, and a small computer have been employed to scan
the internal surface of the material and to develop the system as a real-time system.
Techniques have also been used to reconstruct the scanned surface into a 3D
structure for analysis. The most important component of the imaging system is
structured light. The resolution of the imaging technique is a substantial factor
that should be considered while developing the prototype. Chapters 3 and 4 present
a system overview, the automated data acquisition algorithm using Python, and initial
efforts at image enhancement techniques. Chapter 5 shows the methods and different
results obtained from experiments. Chapter 6 presents the various limitations of the
system developed, and Chapter 7 provides a conclusion and a summary of the future


work. The modeling efforts in chapter 4 and results in chapter 5 provide exposure to initial efforts made by the LEAP team to build the prototype. The modeling efforts are incomplete and are a part of the continuous improvement efforts of the LEAP research group.


2. Non-Destructive Evaluation
2.1 Overview
Nondestructive testing is an examination, test, or evaluation conducted on a test object without changing or altering it in any way, in order to determine the absence or presence of conditions or discontinuities that may affect the object's service. Nondestructive tests may also be conducted to measure other characteristics of the object, such as size, dimension, configuration, or structure, including alloy content, hardness, grain size, etc. This technology is also called nondestructive examination, nondestructive inspection, and nondestructive evaluation. This type of testing or evaluation significantly reduces or avoids critical failures.
Nondestructive testing is considered one of the fastest growing technologies; improvements and modifications, a thorough understanding of materials, and the use of various products and systems have all contributed to the technology [9]. We use this technology in our day-to-day activities. For example, when a coin is deposited in the slot of a vending machine and a selection is made for a candy or soft drink, the coin undergoes a series of nondestructive tests of size, shape, and metallurgical properties, and it must pass all these tests satisfactorily for the product to dispense successfully from the vending machine. This technology has become a part of every process in industry where product failure can result in accidents or cause harm to human life. In industry, nondestructive testing is effectively used for:
1. Examination of the raw materials prior to processing.
2. Evaluation of materials during processing as a means of process control.
3. Examination of finished products.
4. Evaluation of the products and structures once they have been put into service.
2.1.1 Limitations of Non-Destructive Testing


Every nondestructive testing method has some limitations. A thorough inspection may require more than one method: one suited to conditions that exist internally in the part, and a second that is more sensitive to conditions that exist on the surface of the part. Some defects in a structure can never be detected by certain testing techniques. It is important to know the properties of the material, and as much other information as possible, before deploying a particular testing method. The nature of the discontinuities that are anticipated for the particular test object should be well known and understood.
It is also incorrect to assume that if a part is subjected to a nondestructive test and passes, the part is guaranteed to be good. There are minimum requirements that are set before testing is implemented on a particular test object, and they are not the only source that qualifies the part as acceptable for service. This requires some kind of monitoring or evaluation of the part or structure once it is operational. There are also situations where testing is performed by an unqualified examiner, which results in failure.
2.1.2 Conditions For Effective Nondestructive Testing
There are certain conditions for a product to undergo effective nondestructive testing. First, the product must be testable; it is essential to know which test method is appropriate for what type of product, and the testing method should be selected based on the properties of the product. Second, there are certain approved procedures that must be followed to satisfy all the requirements; a certified or qualified person who assesses the adequacy of the procedure should approve it. Third, the equipment used for testing should be operating properly and in good working condition, and it should be checked periodically to assure that it is working properly. Fourth, all the test results should be documented properly and address all the key points in the testing examination, including calibration data, equipment, a description of the part, identification of faulty parts, etc. Finally, all these tests should be performed by qualified personnel who have formalized,


planned training, testing, and defined experience [9].
2.2 Testing Methods in Nondestructive Evaluation
There are various testing methods for evaluating products; some of them are described as follows. Visual testing is one such technique. The main principle behind visual testing is that light transmitted from the test object is captured with a light-sensing device or the human eye. It is mainly used for sewer and storm pipeline inspection and for in-service inspection in industry. This technology is the least expensive and requires minimal training for testing. It requires an effective source of illumination for the testing process to be successful [9].
Penetrant testing is another nondestructive evaluation technique, in which a visible or fluorescent dye mixed with liquid enters surface discontinuities by capillary action. This type of technology can be used on any type of solid nonabsorbent material having uncoated surfaces that are not contaminated. Application of this technology is relatively easy, and the materials used for testing are inexpensive. The person using this testing method does not require much training before performing the test. The only disadvantage of this technology is that the surface of the test object should be relatively smooth and free of contaminants [9].
The technology that uses magnetic properties and ferromagnetic particles for nondestructive testing is known as magnetic testing. The test object is magnetized and ferromagnetic particles are applied on the surface, aligning to discontinuities. This technology can be applied on the surface and subsurface of objects of all sizes. Magnetic testing is easy to use and inexpensive compared to penetrant testing [9].
Radiographic testing is another nondestructive testing method, in which the test part is placed between the radiation source and film. The radiation passing through the test part is attenuated differently wherever there are differences in material density, resulting in scattering and absorption. These differences are recorded on the


film. There are several imaging methods available in industry to display the final image, i.e., Film Radiography, Real Time Radiography (RTR), Computed Tomography (CT), Digital Radiography (DR), and Computed Radiography (CR). The main advantages of this technology are that it provides a permanent record and high sensitivity [9].
Ultrasonic testing is a nondestructive testing method in which the test material is subjected to high-frequency sound pulses from a transducer that propagate through the test material. These pulses are reflected wherever there are interfaces in the object under test. This test is applied to materials whose surface finish is good and whose shape is not complex. It provides information like thickness and defect depth with high sensitivity, and information can be obtained by scanning from one side instead of scanning from both faces.
Eddy current testing: Eddy current inspection uses the principle of electromagnetism as the basis for conducting examinations. Eddy currents are created through a process called electromagnetic induction. All conductive materials can be examined for flaws, metallurgical conditions, thinning, and conductivity. The advantages of this testing method are that it gives immediate results, the equipment is portable, minimum part preparation is required, and it can inspect complex shapes and sizes of conductive material. This type of testing requires skill and training to conduct, and it applies only to conductive materials [10].
Thermal NDT methods involve the measurement or mapping of surface temperatures as heat flows to, from, and/or through an object. The simplest thermal measurements involve making point measurements with a thermocouple. This type of measurement might be useful in locating hot spots, such as a bearing that is wearing out and starting to heat up due to an increase in friction. In its more advanced form, thermal imaging systems allow thermal information to be collected very rapidly over a wide area and in a non-contact mode. Thermal


imaging systems are instruments that create pictures of heat flow rather than of light. Thermal imaging is a fast, cost-effective way to perform detailed thermal analysis. Thermal measurement methods have a wide range of uses. They are used by the police and military for night vision, surveillance, and navigation aid; by firemen and emergency rescue personnel for fire assessment and for search and rescue; by the medical profession as a diagnostic tool; and by industry for energy audits, preventative maintenance, process control, and nondestructive testing. The basic premise of thermographic NDT is that the flow of heat from the surface of a solid is affected by internal flaws such as disbonds, voids, or inclusions. This type of testing is extremely sensitive to slight temperature changes in small parts or large areas. It does not work well when the parts of the test object are thick, and testing requires highly skilled labor [10].
Acoustic Emission (AE) is another nondestructive testing method that refers to the generation of transient elastic waves produced by a sudden redistribution of stress in a material. When a structure is subjected to an external stimulus (change in pressure, load, or temperature), localized sources trigger the release of energy, in the form of stress waves, which propagate to the surface and are recorded by sensors. With the right equipment and setup, motions on the order of picometers (10^-12 m) can be identified. Sources of AE vary from natural events like earthquakes and rockbursts to the initiation and growth of cracks, slip and dislocation movements, melting, twinning, and phase transformations in metals. In composites, matrix cracking and fiber breakage and debonding contribute to acoustic emissions. AEs have also been measured and recorded in polymers, wood, and concrete, among other materials. Detection and analysis of AE signals can supply valuable information regarding the origin and importance of a discontinuity in a material. Because of the versatility of Acoustic Emission Testing (AET), it has many industrial applications (e.g. assessing structural integrity, detecting flaws, testing for leaks, or monitoring weld quality) and is used


extensively as a research tool. Acoustic Emission is unlike most other nondestructive testing (NDT) techniques in two regards. The first difference pertains to the origin of the signal. Instead of supplying energy to the object under examination, AET simply listens for the energy released by the object. AE tests are often performed on structures while in operation, as this provides adequate loading for propagating defects and triggering acoustic emissions. The second difference is that AET deals with dynamic processes, or changes, in a material. This is particularly meaningful because only active features (e.g. crack growth) are highlighted. The ability to discern between developing and stagnant defects is significant. However, it is possible for flaws to go undetected altogether if the loading is not high enough to cause an acoustic event. Furthermore, AE testing usually provides an immediate indication relating to the strength or risk of failure of a component. Other advantages of AET include fast and complete volumetric inspection using multiple sensors, permanent sensor mounting for process control, and no need to disassemble and clean a specimen. Unfortunately, AE systems can only qualitatively gauge how much damage is contained in a structure. In order to obtain quantitative results about size, depth, and overall acceptability of a part, other NDT methods (often ultrasonic testing) are necessary. Another drawback of AE stems from loud service environments, which contribute extraneous noise to the signals. For successful applications, signal discrimination and noise reduction are crucial [9].
Vibration analysis is another testing method; in this type of testing, the vibration signatures specific to a piece of rotating machinery are recorded, and analyzing that information determines the condition of the equipment. Three types of sensors are commonly used in this technology: displacement sensors, velocity sensors, and accelerometers. Displacement sensors use eddy currents to detect vertical and horizontal motion and are well suited to detect shaft motion and changes in clearance tolerances. Velocity sensors use a spring-mounted coil of wire, with


the outer case of the sensor attached to the part being inspected. The coil of wire moves through a magnetic field, generating an electrical signal that is sent back to a receiver and recorded for analysis. Newer model vibration sensors use time-of-flight technology and improved analysis software. Velocity sensors are commonly used in handheld sensors [10].
Visual inspection is another nondestructive testing technique, mostly used in the automobile industry, where the visual aspect of the vehicle body is an important factor. Visual inspection is also very important in the airline industry, and it accounts for most of the inspection tasks in the search for cracks and corrosion. Light-guiding systems have been developed for looking in areas where direct eye inspection is not possible. Direct eye vision is often replaced by artificial vision with a video camera using an imaging detector known as a charge-coupled device, or CCD [11].
2.3 Testing Method Implemented in this work
We know that many three-dimensional (3D) reconstruction techniques have been
explored by researchers; one of the widely known methods is stereovision, which is
a passive method for 3D acquisition [12]. To avoid the difficulty of the correspondence
problem, we consider an active method, i.e., the structured light system. The structured
light system uses a single ring stripe projected onto the walls of the object. The
structured light system considered in this research work consists of a CCD camera and a laser
projector, as shown in figure 2.1. It is similar to the traditional stereovision system, but
its second camera is replaced by the light source. The light source projects a known
light pattern on the measured scene. The single-lens camera captures the illuminated
scene, and the required 3D information is obtained by analyzing the deformation of
the imaged pattern with respect to the projected one. One principal method of 3D
surface imaging is based on the use of active illumination of the scene with a specially
designed 2D pattern. As illustrated in equation 2.1 [13], the illumination is generated by
a special projector or light source. The intensity of each pixel on the structured-light


Figure 2.1: Structure Of the System (showing the object under test at (X, Y, Z), the camera center on the sensor, and the projector center on the projector)
pattern is represented by the digital signal,
I_ij,   i = 1, 2, ..., I,   j = 1, 2, ..., J                    (2.1)
An imaging sensor (a video camera, for example) is used to acquire a 2D image of the scene under the structured-light illumination. If the scene is a planar surface without any 3D surface variation, the pattern shown in the acquired image is similar to that of the projected structured-light pattern. However, when the surface in the scene is nonplanar, the geometric shape of the surface distorts the projected structured-light pattern as seen from the camera. The principle of structured-light 3D surface imaging techniques is to extract the 3D surface shape based on the information from the distortion of the projected structured-light pattern. Accurate 3D surface profiles of objects in the scene can be computed by using various structured-light principles and algorithms [13].
As given in equation 2.2 [13], the geometric relationship between an imaging sensor, a structured-light projector, and an object surface point can be expressed by


the triangulation principle as
R = B sin(θ) / sin(α + θ)                    (2.2)
where B is the baseline distance between the camera and the projector.
The key for triangulation-based 3D imaging is the technique used to differentiate a single projected light spot from the acquired image under a 2D projection pattern.
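As a quick numerical illustration of equation 2.2, the range R can be computed directly from the baseline and the two angles; the sketch below uses made-up values for B, α, and θ:

import math

def triangulate_range(baseline_cm, alpha_deg, theta_deg):
    # Range from the triangulation relation R = B*sin(theta)/sin(alpha + theta).
    alpha = math.radians(alpha_deg)
    theta = math.radians(theta_deg)
    return baseline_cm * math.sin(theta) / math.sin(alpha + theta)

# Example: a 5 cm baseline with 60 and 45 degree angles gives R of about 3.66 cm.
print(triangulate_range(5.0, 60.0, 45.0))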
Figure 2.2: Illustration of Structured Light
Actively illuminated structured-light patterns may include spatial variations in all (x,y,z) directions [13], thus becoming a true 3D structured-light projection system. For example, the intensity of projected light may vary along the optical path of the projected light owing to coherent optical interference. However, most structured-light 3D surface imaging systems use 2D projection patterns [13].
Figure 2.3 represents a computer animation of a structured-light 3D imaging system to demonstrate its working principle. An arbitrary 3D target surface is illuminated by a structured-light projection pattern. In this particular case, the structured-light pattern is a circular ring-like pattern. An imaging sensor acquires the image of the target 3D surface under the structured-light illumination. The image captured


by the imaging sensor varies accordingly. Based on the distortion of the structured-light pattern seen on the sensed image in comparison with the undistorted projection pattern, the 3D geometric shape of the target surface can be computed accurately.
Figure 2.3: Structured Light Working Principle


3. Data Processing Using Python and OpenCV
3.1 Background
This chapter explains the different image processing techniques that we implemented to identify defects from the specific shape of the ring profile and to separate it from the image background. After extraction of the ring profile, defects can be observed from the 3D rendering of all the ring profiles. Image processing techniques are applied in order to enhance the image quality. Two morphological techniques, erosion and dilation, are applied to filter the image. We also implemented a visualization tool to render the 2D data in 3D to analyze the defects. Some of the image processing techniques and their implementation are explained in the sections below.
3.2 Flow of Algorithm
Figure 3.1: Flow Of Algorithm (capture frames -> thresholding + erosion + dilation -> stack filtered frames -> VTK 3D rendering)
The flow chart in figure 3.1 explains the sequence of steps implemented. The first
block captures images continuously; the second block grabs each frame and applies


various image processing techniques. Next, all the processed frames are stacked in 2D memory. The final step is to grab the processed 2D frames from memory and project them into 3D memory; the stacked 3D data is then displayed on the screen.
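A minimal sketch of this capture-process-stack loop is shown below; the frame count, threshold value, and kernel size are illustrative, and the actual code used in this work is listed in Appendix A:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)            # camera attached to the embedded board
kernel = np.ones((3, 3), np.uint8)   # 3x3 square structuring element
frames = []

for _ in range(200):                 # e.g., 200 frames for a 20 cm scan
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, bw = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    bw = cv2.dilate(cv2.erode(bw, kernel), kernel)   # erosion then dilation
    frames.append(bw)                # stack the filtered 2D frames

cap.release()
volume = np.stack(frames)            # 3D array handed to the VTK renderer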
3.2.1 Flow of Algorithm for Distance Estimation
Figure 3.2: Flow For Distance Calibration
The flow chart in figure 3.2 explains the sequence of steps implemented to estimate the distance of a crack in this work. First we capture frames continuously; then we calculate the difference between two consecutive frames, and based on this difference we decide whether a crack has been found. We then grab the count value based on the number of images captured. In our work we assume we capture 10 images for every centimeter, so we use the formula,
Scan Distance = Number of Frames Captured / Number of Frames Per Centimeter


where the scanning speed is assumed constant (10 frames per centimeter).
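The bookkeeping implied by this formula is simple; a sketch assuming the constant 10 frames/cm rate (function names are illustrative) is:

FRAMES_PER_CM = 10  # assumed constant camera speed

def scan_distance_cm(frames_captured):
    # Distance traveled, estimated from the frame count alone.
    return frames_captured / FRAMES_PER_CM

def crack_location_cm(first_frame, last_frame):
    # Approximate start position and extent of a detected crack.
    start = first_frame / FRAMES_PER_CM
    length = (last_frame - first_frame) / FRAMES_PER_CM
    return start, length

print(scan_distance_cm(200))        # 20.0 cm scanned
print(crack_location_cm(120, 145))  # crack starting at 12.0 cm, about 2.5 cm long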
3.2.2 Operating System and Library Setup
On the Raspberry Pi boards, we installed Raspbian (a Debian-based Linux distribution for the Pi), the Visualization Toolkit, and Open Computer Vision from source. OpenCV, a computer vision library, is very popular and has considerable functionality relevant to this project. It also supports Video4Linux, a project to support common video and image capture. Setting up the libraries takes a lot of time and consumes a lot of memory; while other computer vision libraries exist, OpenCV is the most popular and hence is best supported online. The choice of programming language was guided by framework support and the need for performance when running on our boards. OpenCV and VTK have official support limited to bindings for C, C++, and Python. Our modules were implemented in Python. We installed OpenCV following http://www.pyimagesearch.com/2015/02/23/install-opencv-and-python-on-your-raspberry-pi-2-and-b/ and VTK following https://blog.kitware.com/raspberry-pi-likes-vtk/, and a few of the supporting libraries like NumPy and SciPy from http://wyolum.com/numpyscipymatplotlib-on-raspberry-pi/. Setting up VTK requires the CMake build tools (ccmake), which can be downloaded from www.cmake.org.
3.3 Python with OpenCV Library
The Open Source Computer Vision library (OpenCV) is an extensive C library for real-time computer vision. Stereovision capabilities are implemented as a part of OpenCV. Lately the library has been fully ported to the programming language Python. The implementation of the software in this thesis is based on the OpenCV 2.3.1 Python library [2].


3.3.1 Gray Scale Conversion
The first image transformation to be performed toward a detectable feature is gray scale conversion. It simply means converting the RGB color span into a corresponding image with one channel (gray) of varying intensity across the image. Figure 3.3 below illustrates the conversion.
Figure 3.3: a.Color Image b.Converted Gray Image
Gray scale intensity is stored as an 8-bit integer, giving 256 possible different shades of gray from black to white. That is the reason that only one channel (color) is chosen to represent the image when converting from RGB (an array of (255, 255, 255)). Gray scale conversion is performed in OpenCV using the following command:
grayimage = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
3.3.2 Numerical Python
Numerical Python (NumPy) is a package widely used for scientific computing with Python; it provides many useful features like array objects and linear algebra functions. In this thesis work we converted the captured images into arrays to perform normalization, which is needed for operations like erosion, dilation, and thresholding [14]. We convert the image into a NumPy array using the following code:


np.array(image)
The converted NumPy array, shown below as an example, contains the value of each pixel at its position.
[[187 187 186 163 163 163]
[187 187 186 163 163 163]
[187 187 186 163 162 162]
[222 220 218 143 143 144]
[225 221 218 143 143 143]
[227 221 218 143 143 143]]
This represents the image as a data cube, and each pixel value is 8 bits, from 0 to 255. We set a certain threshold to filter noise from the image. For example, the color image shown in figure 3.3a above is converted to the following array using NumPy:
[[ 27 27 27 .. ., 28 28 227]
[ 27 27 27 .. ., 28 28 227]
[ 27 27 27 .. ., 29 29 227]
[ 25 25 25 .. ., 35 35 209]
[ 25 25 25 .. ., 34 34 210]
[ 25 25 25 ... , 34 34 212]]
Now each pixel is subjected to various filtering techniques like erosion, dilation and thresholding to remove background noise.
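For instance, a simple NumPy pass over such an array that zeroes out dim background pixels (the threshold value 200 is illustrative) would be:

import numpy as np

pixels = np.array([[187, 187, 186],
                   [222, 220, 218],
                   [227, 221, 218]], dtype=np.uint8)
pixels[pixels < 200] = 0   # suppress everything below the threshold
print(pixels)              # only the bright (laser ring) pixels survive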


3.3.3 Python Imaging Library
The Python Imaging Library provides various image processing capabilities to the Python interpreter. The library provides extensive support for various file formats and powerful image processing capabilities, including fast access to data stored in various pixel formats. It is used to convert between pixel formats, print images, and so on, and it provides basic image processing functionality such as resizing, rotation, and arbitrary transforms. We can even build histograms and make certain decisions based on them. The most important class in the Python Imaging Library is the Image class; we can create instances of the class by loading images from files, processing other images, or creating images from scratch.
We load an image from a file by using the open function in the Image module.
from PIL import Image
im = Image.open(name)
The function above returns an image object. We also use the library to convert from an array back to an image file. First we convert the image to an array and process the pixels, and then we convert the array back to an image using the function given below:
imfile = Image.fromarray(bw)
The resulting image is shown below in figure 3.4:
Figure 3.4: Black and White
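Putting these pieces together, a minimal array-to-image round trip might look like this (the file names are illustrative):

import numpy as np
from PIL import Image

im = Image.open("frame.png").convert("L")   # load and convert to gray scale
arr = np.array(im)                          # PIL image -> NumPy array
arr[arr < 50] = 0                           # process pixels in array form
Image.fromarray(arr).save("filtered.png")   # NumPy array -> PIL image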


3.4 Morphological Operations
Morphological operations are simple transformations performed on an image affecting the form, structure, or shape of an object. These operations are performed only on binary images and need two inputs: the original image, and a structuring element or kernel [15]. Morphological operations are used in this work to magnify the defects for analysis. Two basic morphological operators are used: erosion and dilation. Dilation allows objects to expand, which minimizes holes in the image and connects disjoint regions [15]. Erosion compresses the size of objects by etching away their boundaries. These operations are selected based on the application through proper selection of the structuring element, which determines exactly how the object is dilated or eroded. The structuring element is a small binary image, i.e., a small matrix of pixels, each with a value of zero or one. The dimensions of the matrix specify the size of the structuring element; the shape is determined by the pattern of ones and zeros, and one of the pixels in the matrix is the origin of the structuring element. Simple examples of structuring elements are shown below:
Figure 3.5: a.Square 4x4 Element b.Diamond-shaped 5X5 element c.Cross-Shaped 5x5 element
Figure 3.5a represents the 4x4 square matrix with the origin as the colored box, figure 3.5b represents the diamond-shaped 5x5 element, and figure 3.5c represents the cross-shaped 5x5 element. Structuring elements play a very important role as convolution kernels in image filtering. When the kernel is placed on a binary image, each


pixel under the structuring element is associated with the corresponding neighboring pixel under the kernel. The structuring element is said to fit the image if, for each of its pixels set to 1, the corresponding image pixel is also 1. A structuring element is said to hit, or intersect, an image if, for at least one of its pixels set to 1, the corresponding image pixel is also 1.
3.4.1 Dilation
Dilation is a simple morphological operation that allows objects to expand [15]. The process is performed by sliding the structuring element over the image. If the origin of the structuring element coincides with a white pixel, there is no change and we move to the next pixel; if the origin of the structuring element coincides with a black pixel in the image, all the pixels covered by the structuring element are made black. This type of process is useful for joining broken parts in an image. An example of the dilation process is shown below in figures 3.6a, 3.6b, and 3.6c. In this process all the holes are filled, the boundary gets expanded, and all the black pixels in the original image are retained.


Figure 3.6: a.Original image b.Kernel or Structural element, 1 = origin c.Resulting image after dilation
Figure 3.6a represents the original image in our application; when this image undergoes the dilation process with the structuring element in figure 3.6b, the resulting image is figure 3.6c. After the dilation process the boundary of the image is expanded, and we can observe the increased thickness in image 3.6c.
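In OpenCV, dilation amounts to a single call; the kernel size, iteration count, and input file name below are illustrative:

import cv2
import numpy as np

binary = cv2.imread("ring.png", cv2.IMREAD_GRAYSCALE)
kernel = np.ones((3, 3), np.uint8)                   # 3x3 square structuring element
dilated = cv2.dilate(binary, kernel, iterations=1)   # expand the ring profile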
3.4.2 Erosion
This process is similar to dilation: where in dilation we turn pixels black,
in erosion we turn pixels white [15]. As in dilation, we use the same kind of
structuring element of size 3x3 and slide it over the entire image. If the origin of the
structuring element coincides with a white pixel in the image, there is no change
and we slide to the next pixel; if the origin of the structuring element coincides
with a black pixel in the image and at least one of the black pixels in the structuring
element falls over a white pixel in the image, then the black pixel in the image is changed


from black to white. Pixels near the boundary are discarded, depending on the size of the kernel. The thickness of the foreground object decreases, so the black region shrinks in the image. This technique removes small noise and detaches weakly connected objects in an image. A simple example explains the erosion process in detail below:
Figure 3.7: a.Original Image b.Kernel or structural element, 1 = origin c.Resulting Image After Erosion
In figure 3.7c, the only remaining pixels are those coinciding with the origin of the structuring element of figure 3.7b at positions where the entire structuring element was contained in the original image 3.7a. Since the structuring element is 3x3, part of the ring is eroded away, resulting in image 3.7c.
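Erosion is the mirror-image call; a sketch with the same illustrative 3x3 kernel and file name:

import cv2
import numpy as np

binary = cv2.imread("ring.png", cv2.IMREAD_GRAYSCALE)
kernel = np.ones((3, 3), np.uint8)
eroded = cv2.erode(binary, kernel, iterations=1)   # thin the ring by one kernel pass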
3.4.3 Thresholding
In order to increase the picture contrast, the intensity levels of all the image pixels
are adjusted to span the entire available intensity range (0, 255). This is an adaptive


image processing step where the pixel intensities are remapped so that input intensities below the average and those above the average are spread to span the entire image intensity range. Thresholding is a simple image segmentation method that converts a gray scale image into a binary image, where one of two levels is assigned to each pixel depending on whether it falls below or above the specified threshold value [16]. The different types of thresholding operations in OpenCV are given below:
cv2.THRESH_BINARY
cv2.THRESH_BINARY_INV
cv2.THRESH_TRUNC
cv2.THRESH_TOZERO
cv2.THRESH_TOZERO_INV
The threshold function is represented by:
(T, threshImage) = cv2.threshold(src, thresh, maxval, type)
The first parameter is the input gray image on which the operation is applied. The second parameter is the threshold value, used to classify the pixel intensities in the gray image. The third parameter is the maximum value used if a given pixel in the image passes the threshold test. The fourth parameter is the threshold type. In the first method we segment the image to be black with a white background, as shown in figure 3.8b. In the second method the colors are inverted, as shown in figure 3.8c. The third method leaves the pixel intensities as they are if the source pixel value is not greater than the supplied value, as in figure 3.8d. In the fourth method the source pixel is set to zero if the source pixel is not greater than the value supplied, as shown in figure 3.8.
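A short usage sketch of these threshold types (the threshold 127, maximum value 255, and file name are illustrative):

import cv2

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
(T, bw) = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
(T, bw_inv) = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
(T, trunc) = cv2.threshold(gray, 127, 255, cv2.THRESH_TRUNC)
(T, tz) = cv2.threshold(gray, 127, 255, cv2.THRESH_TOZERO)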


Figure 3.8: a.Binary b.Binary Inverse c.Trunc d.Trunc Inverse e.ToZero f.ToZero Inverse
3.5 Visualization Tool Kit
The Visualization Toolkit (VTK) is an open source, cross-platform software system for 3D computer graphics, image processing, and visualization. It is a cross-platform toolkit for scientific data processing, visualization, and data analysis, and it offers reproducible visualization and data analysis pipelines for a range of scientific data. It implements a number of visualization modalities, including volume rendering and 2D rendering capabilities. These capabilities are wrapped in Python to offer advanced data processing, visualization, and analysis to the user directly or through libraries built on top of it [17]. 3D reconstruction of images is an important research direction in nondestructive visualization. 3D reconstruction builds a 3D entity from 2D images, allowing the user to observe and interact with it. Reconstructing the rendered images in 3D gives the user the spatial location and size of a defect, along with complete information, helping improve the efficiency and accuracy of diagnosis. The Visualization Toolkit has decent rendering performance and is good for rapid prototyping of 3D visualization tools, but it is not suitable for rendering lots of


dynamic content. A simple example of the visualization pipeline is shown in figure 3.9 below. To visualize our data in VTK, the pipeline is set up as shown in the flow chart below.
Figure 3.9: VTK Rendering
The Visualization Toolkit provides various source classes that are used to construct
Figure 3.10: VTK Flow Chart
simple geometric objects like spheres, cubes, cones, cylinders, etc.
volume = vtk.vtkVolume()


vtkSphereSource, vtkCubeSource, and vtkConeSource are a few examples of source classes in the Visualization Toolkit. Readers read volumetric data from an image file [18]. A filter takes data as input, modifies it, and returns the modified data. A mapper maps the data to graphics primitives that can be displayed by the renderer. We use vtkVolumeRayCastMapper; it works with single-component vtkImageData with a scalar type of unsigned char or unsigned short. It performs its calculations in floating point, and it provides an easy way to subclass the specific ray cast function to perform operations along the ray. An actor represents the geometry and properties, like scale, orientation, textures, and various rendering properties, of an object in a rendering scene.
vtk.vtkActor()
Rendering is the process of converting 3D graphics primitives, a specification of materials, and a camera view into a 2D image that can be displayed on the screen.
vtk.vtkRenderer()
The Visualization Toolkit uses OpenGL for rendering in the background and uses the vtkRenderer class to control the rendering process for actors and scenes.
vtk.vtkRenderWindow()
The vtkRenderWindow class creates a window for renderers to draw into. The interactor class, vtkRenderWindowInteractor, provides a platform-independent mechanism that lets the user interact with the 3D rendered image.
vtk.vtkRenderWindowInteractor()
The vtkRenderWindow class creates the window in which the renderers draw, as shown in figure 3.11 below.
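Wiring these classes together, a minimal VTK pipeline follows the source, mapper, actor, renderer, window, and interactor pattern. The sketch below renders a simple sphere source rather than our pipe volume:

import vtk

sphere = vtk.vtkSphereSource()                # source: simple geometric object
mapper = vtk.vtkPolyDataMapper()              # mapper: data -> graphics primitives
mapper.SetInputConnection(sphere.GetOutputPort())

actor = vtk.vtkActor()                        # actor: geometry plus properties
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()                  # renderer: draws actors in a scene
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()                # window for the renderer to draw into
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()  # user interaction with the scene
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()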


Figure 3.11: VTK Rendering Window


4. Structured Light System Development For Sensing
4.1 Structured Light
Structured light is the process of projecting a known pattern onto a scene. The deformation of the light on striking the surface allows vision systems to detect the surface conditions of an object. Figure 4.1 shows the structured laser source that we used in our work. Structured light is a technique widely used to measure 3D surfaces, obtained by projecting a sequence of light patterns. The main advantage of this type of technology is that the projected features are easily distinguished by the camera. When the patterned light is projected onto a surface, and if the surface contains any defects, the pattern gets deformed, and this can be recorded by the vision system. To avoid ambiguities while matching the projected features in the image, the pattern is coded. Structured light is used in dense range sensing, industrial inspection, object recognition, 3D map building, reverse engineering, fast prototyping, etc. The pattern of light is projected by means of laser light emitters
Figure 4.1: Structured Laser Source
or digital projectors, as shown in figure 4.1 below, which form different shapes of patterns on the surface. Digital projectors have focusing problems, which limits the depth range of the measuring area, so we use a laser source in our work to detect the defects, because of its uniform intensity. An example of the reconstruction of the


surface of the pipe, performed by means of laser projection of a single light pattern, is presented in figure 4.2 below.
Figure 4.2: a.Laser Projection Inside Pipe b.Reconstructed Image
Laser projectors are thin and can be placed inside compact devices. Due to their uniformity, laser projectors are especially useful for structured light applications, including industrial inspection, alignment, and machine vision. Structured light lasers used in inspection minimize process variation by drawing attention to parts that do not conform to specifications. They can pick out vegetables with blemishes from food processing lines, or can ensure that the right colored capsule goes in the correct bottle on drug packaging lines. Another laser application is alignment. In computer assembly, a laser system can help an operator determine if a computer chip is perfectly positioned on a circuit board. Machine vision is a combination of structured lighting, a detector, and a computer to precisely gather and analyze data. For example, it is used on robots as a 3D guiding system to place or insert a part on a car, such as a windshield wiper or a door.
Visual range measurement is a task usually performed by stereo imaging systems. Depth is calculated by triangulation using two or more images of the same scene. Previously calibrated cameras are used to intersect the rays coming from the same
point in the observed scene. However, the search for correspondences between images is generally a difficult task, even when taking epipolar constraints into account. A solution to this problem is offered by using the information from a structured light pattern projected into the scene. Usually this is achieved by a combination of a pattern projector and a camera.
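As a simple illustration of the triangulation principle (a generic pinhole-model sketch, not the exact geometry of our prototype), depth can be recovered from the pixel offset of a projected feature once the focal length and the camera-to-projector baseline are known:

# Illustrative triangulation for a rectified camera/projector pair.
# The focal length, baseline and disparity values below are assumed
# example numbers, not measurements from our prototype.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Z = f * B / d : the standard triangulation relation
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(600.0, 0.05, 12.0))  # -> 2.5 (meters)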
4.2 Sensing Module
Sensors are devices that detect and respond to electrical or optical signals; they convert a physical parameter into a signal that can be measured electrically. The sensing module that we use in this work is a fish-eye camera, as shown in figure 4.3. Fish-eye lenses are imaging systems with a very short focal length, which gives a hemispherical field of view, as shown in figure 4.4.
Figure 4.3: a and b Shows the Fish Eye Camera Module
The obtained images benefit from good resolution at the center but have poor resolution in the marginal region, as shown in figure 4.4. In human vision, a central zone of the retina, called the fovea centralis, provides high-quality vision while the peripheral region deals with less detailed images. Therefore the vision acquired by a fish-eye lens is somewhat similar to human vision from the point of view of resolution distribution. In addition, fish-eye lenses introduce radial distortion, which is difficult
Figure 4.4: Poor Resolution at Marginal Region
to remove. Despite these drawbacks, the large field of view offered by fish-eye lenses makes this type of element an attractive choice for our work.
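For reference, the fisheye module of OpenCV 3+ can correct this radial distortion given a prior calibration. The sketch below is a minimal example with assumed intrinsics K and distortion coefficients D; our prototype works directly on the distorted frames, so this is illustrative only.

import cv2
import numpy as np

# Hypothetical intrinsics and distortion coefficients; real values would
# come from a fisheye calibration (cv2.fisheye.calibrate).
K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.1, 0.01, 0.0, 0.0])

frame = cv2.imread('frame.png')  # assumed input image
undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
cv2.imwrite('undistorted.png', undistorted)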
4.3 Computational Machines
Our optical system was implemented on two popular open hardware boards and on MacOSX. The boards are community designed and supported: documentation is freely available and the boards themselves are produced by a non-profit entity. MacOSX was used for testing the concept.
We considered boards that could run a full Linux operating system, for ease of setup and flexibility in software installation. This immediately ruled out the archetypal embedded board families, since their low-cost, power-efficient processors are not designed for general-purpose computing.
Our first choice was the Raspberry Pi Model B+, with 512 MB RAM, two USB ports, and an Ethernet port. This quickly proved to be underpowered, and we then migrated to the faster Raspberry Pi 2 Model B, which has more processing power for 3D imaging.
4.3.1 Computational Machine Specifications: Apple MacBook (Intel Core i5)
To implement this methodology we need a machine for processing and display; initially we tested our concept on MacOSX. The software package managing the
inspection system runs on a 2.6 GHz Intel Core i5 with a UNIX operating system, an Intel Iris (1536 MB) graphics card, and 8 GB RAM. It has USB ports to connect external devices. It executes the following functions:
1. Capture a sequence of frames inside the pipe.
2. Perform morphological operations on each frame.
3. 3D render the processed frames.
The logic, written in Python, controls the capturing and rendering process on the system. The results obtained on this type of machine were very accurate, but due to its size and power constraints the system is not feasible for our application. We therefore researched alternative embedded platforms for a real-time implementation of the system that consume less power while performing the same as the previous system.
4.3.2 Computational Machine Specifications: Raspberry PI (ARM 7)
The Raspberry Pi Model B+ is the fourth iteration of the original Raspberry Pi. It was released for sale in July 2014 and was replaced in February 2015 by the new Raspberry Pi 2 Model B. The Raspberry Pi Model B+ has a single-core
Figure 4.5: Raspberry Pi Board
ARM1176 processor running at 700 MHz with 512 megabytes of memory, and also has four USB ports, one 10/100 Megabit/s Ethernet port, and one micro-SD card slot for storage. Compared to the earlier models, the Raspberry Pi Model B+ has
more USB ports, and the old SD card slot has been replaced with a micro-SD card slot that has a push-push locking mechanism. It also has lower power consumption compared to the earlier models [19]. The Raspberry Pi 2 Model B was released in February 2015 as the latest Raspberry Pi model with updated hardware. It has a quad-core ARM Cortex-A7 processor running at 900 MHz, 1024 megabytes of onboard memory, four USB ports, one 10/100 Megabit/s Ethernet port, and one micro-SD card slot for storage [20]. The Raspberry Pi 2 Model B is faster and has twice the memory of its predecessor, the Raspberry Pi Model B+; its quad-core processor is speculated to make it up to six times faster than the previous models.
Arch Linux ARM is an ARM-based Linux distribution ported from the x86-based distribution Arch Linux. The Arch Linux philosophy is that users should be in total control of the operating system, which allows them to adapt it in any way they like; therefore Arch Linux can be used for simple tasks as well as more advanced scenarios. Arch Linux ARM is based directly on Arch Linux and they share almost all code, which makes Arch Linux ARM a very fast, Unix-like, and flexible Linux distribution. Arch Linux ARM has adopted the rolling-release update model from the x86 version: small iterations are made available to users as soon as they are ready, instead of releasing larger updates every few months [20].
The crack detection program is designed to run on top of an operating system, so installing one on the Raspberry Pi is the first thing to do. The preferred system for this task is NOOBS, as there is a distribution available that is optimized in particular for the Raspberry Pi. It can be downloaded from https://www.raspberrypi.org/downloads/noobs/
For this thesis the NOOBS v1.7.0 distribution was used. The operating system always runs from an SD card, a choice that was found reasonable during the development of the Raspberry Pi: SD cards deliver high capacity, are cheap and fast, easily writable, and easily replaceable in case of damage. The card should be at least 4 GB, as it will contain the operating system (about 2 GB) and should still provide some additional space; we used a 32 GB SD card since we were dealing with high-resolution images. The file system used for the first setup during the integration of the Raspberry Pi into the inspection system was ext4. It is recommended to continue working with ext4 or at least another journaling file system, as journaling provides higher security in case of a power failure or a system crash. After unpacking the downloaded NOOBS image, it can be copied onto the SD card. If connected to a monitor, booting the Raspberry Pi with its new operating system for the first time should display a configuration menu. If no extra monitor is used, the configuration menu can be accessed with the command raspi-config (as root). One should at least expand the file system on the SD card to be able to use all of its space.
On top of the operating system we install the OpenCV libraries and the Visualization Toolkit with its Python interface. The OpenCV libraries provide the functions used to perform the morphological operations, and the Visualization Toolkit provides the rendering system used for analyzing the cracks.
The Raspberry Pi supports a vast variety of libraries, but the resources available on the system provide only limited processing capability for the current real-time implementation. Onboard memory is limited to about 1 GB of RAM, which restricts the image processing capability we can achieve.
5. Results
5.1 First Generation Results
Figure 5.1a illustrates the prototype that has been used for scanning, and figure 5.1b shows a captured frame inside the test object. The camera in this prototype is connected to the laser source and shifted a few centimeters back; this shifting is needed because the view angle of the camera is small, so the camera must be offset to capture the rest of the laser ring. By moving the prototype at constant speed, the camera collects the data, which is saved on the computer.
Figure 5.1: a.First Generation Prototype b.The Captured Frame
We considered 3-inch diameter PVC pipes for testing our prototype, and some defects were made in the pipe as illustrated in figure 5.2a. Through-hole defects with different diameters were introduced: in the white pipe specimen, the smallest hole has a diameter of 0.0787 inches and the biggest hole is 0.314 inches, with additional holes of 0.157 and 0.197 inch diameters. Moreover, some screws were installed to mimic different defect types. In the black pipe specimen, some damage was introduced as different kinds of defects. Figure
5.2b shows the cracks that run in different directions in the pipe; the width of the cracks is 0.039 inches.
Figure 5.2: a.The holes on the surface as defects b.The linear cracks in the specimen
We configured our prototype on MacOSX, rendered 3D images using the Visualization Toolkit, and displayed the reconstructed image on screen. As shown in figure 5.3, the laser rings are not complete because the camera itself blocks part of the laser source, so some of the image is missing. This type of missing data is called shadowing. The shadow area blocks about 25 percent of the whole scene, so this kind of missing data is one of the limitations addressed in the next sections. However, the laser ring takes the shape of the inner surface of the pipe, and changes in the laser light shape are considered indications of the defects that we want to see in the reconstructed images.
Figure 5.3: a and b: Reconstructed images using first gen prototype
We observed that the Mac takes 0.05 seconds to process one frame; for 10 frames it takes 0.29 seconds, and for 100 frames it takes 9.08 seconds, as given in figure 5.4a below. We plotted a graph of the number of frames versus processing time on MacOSX, as shown in figure 5.4b below.
Number of Frames    Processing Time (seconds)
1                   0.05
10                  0.29
100                 9.08
200                 25.80
300                 40.88
400                 54.76
Figure 5.4: a. Results over Mac b. Number of frames vs Processing Time
Using a camera with a large view angle is one solution that improves the reconstructed image. The difference between this camera and the previous one is the view angle: in the old one the view angle is about 45°, while in the new one it is 180°, so the whole scene can be detected by putting the camera and the laser source on the same base line, as shown in figure 5.5. We considered this setup for our further experimentation. Firstly, we scanned the entire
Figure 5.5: Prototype Setup with large View Angle Camera
pipe and recorded a video of the scan. The new camera is a fisheye camera. Convex lenses have been installed
Figure 5.6: Complete Ring Captured
on this camera to make the view angle bigger. In the last prototype the camera was connected to the laser source, but in the new prototype the camera and the laser source are disconnected and placed on the same base line. Because the camera has a 180 degree view angle, the whole ring is captured without missing any part. The prototype is then passed into the specimen to collect the data. However, the scanning is done by hand, so some misalignment must be expected. After that, the
same 3D reconstruction process is repeated. Taking a few frames from the recorded video, the laser light inside the pipe appears as shown in figure 5.6: the ring is complete and the laser light does not show any deformation, indicating that the areas covered by this light are free of defects.
We generated results for this prototype setup on MacOSX using the prerecorded scan video for processing and reconstruction. We found that it takes 0.002 seconds to process one frame at a resolution of 640 x 480, and on average about 0.0027 seconds per frame to process and display 200 frames. The reconstruction results for 200 frames are shown in figure 5.7.
Figure 5.7: Reconstruction results from 200 Frames
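A sketch of how such per-frame timings can be collected follows; the video filename and the grayscale conversion are placeholders standing in for the actual processing chain listed in Appendix A.

import time
import cv2

cap = cv2.VideoCapture('scan.avi')  # hypothetical prerecorded scan video
times = []
while True:
    ret, frame = cap.read()
    if not ret:
        break
    start = time.time()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # stand-in for real processing
    times.append(time.time() - start)
cap.release()
if times:
    print("average seconds per frame:", sum(times) / len(times))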
5.2 Second Generation Results
In the previous section we implemented our prototype and generated results using MacOSX. Later, we tested our prototype on a low-power, low-cost device, the Raspberry Pi, which processes each frame in 0.35 seconds; as the number of frames increases, the total processing time increases, as shown in figures 5.8a and b below.
Number of Frames    Processing Time (seconds)
10                  2.43
20                  4.84
30                  7.35
40                  9.95
50                  12.43
60                  14.75
70                  17.26
80                  19.53
Figure 5.8: a. Results over Pi b. Number of frames vs Processing Time
The reconstruction results using the Raspberry Pi are shown below in figures 5.9a and b. As the number of images to process increases, the performance of the Raspberry Pi decreases, and the memory available on the Pi becomes insufficient.
Figure 5.9: a and b Represents the Reconstruction Results on Raspberry Pi
We plotted a graph to compare the performance of both machines, as shown in figure 5.10 below. We observed that for 50 frames MacOSX takes only 5 seconds while the Raspberry Pi takes 10 seconds, and for 100 frames MacOSX takes 8.5 seconds while the Raspberry Pi takes 20 seconds. To avoid running out of memory on the Raspberry
Figure 5.10: Processing time comparison between Raspberry pi and macOSX
Pi, we split the scan into equal-sized slices and stored them in individual files; figure 5.11 below shows the slices for one scan. Splitting the data avoided the memory issue on the Raspberry Pi. We calculated the scan distance of each sample, as well as the distance of each defect from the starting point. Since we move the prototype by hand, we assumed the scanning speed to be constant and calibrated that our sensing module captures 10 frames for every 1 cm of travel. We captured 210 frames, i.e., we moved 21 cm from the starting point, and then split the data into equal-sized slices of 7 cm each, as in figure 5.11 below; a short sketch of this bookkeeping follows.
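The frame-count-to-distance conversion can be sketched as follows. The constants come from the calibration just described; the helper names are ours, not from the thesis listings.

FRAMES_PER_CM = 10  # calibrated: the module captures 10 frames per cm
SLICE_CM = 7        # each stored slice covers 7 cm of pipe

def frames_to_cm(frame_count):
    # Distance travelled from the starting point, in centimeters
    return frame_count / FRAMES_PER_CM

def slice_index(frame_count):
    # Which 7 cm slice a given frame belongs to (0-based)
    return int(frames_to_cm(frame_count)) // SLICE_CM

print(frames_to_cm(210))  # 210 frames -> 21 cm of travel
print(slice_index(209))   # last frame of the third slice -> 2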
Figure 5.11: Equal Sized Slices 28cm long a) 0-7 cm b) 7-14cm c) 14-21cm
The other experiment was to detect each crack and its position from the starting point. We used our test specimen, shown in figure 5.12 below, and performed several experiments. We marked the cracks as shown in figure 5.12 and measured with a scale that the first crack is at a distance of 18 cm, the second at 20 cm, the third at 23 cm, the fourth at 25 cm, and the fifth at 26 cm from the starting position.
We plotted a graph, shown in figure 5.13, comparing the actual positions of the cracks with the distances calibrated from our experiments. From the graph
Figure 5.12: Object Under Test for Distance Calculation
Figure 5.13: Comparison between the actual distance and measured distance
we observed that the cracks at distances of 18 cm, 20 cm, 23 cm, 25 cm, and 26 cm matched the original measurements accurately, but a few points were not
overlapping; this was due to the movement of the prototype. Since we move the prototype manually, misalignments in the setup may introduce errors into the calibration.
6. Limitations
The Raspberry Pi processes each frame in 0.35 seconds at a resolution of 640 x 480, and as the resolution of the frame increases, the processing time increases. The processing times for images of different resolutions are shown in figure 6.1 below: the x-axis shows the resolution of the frame and the y-axis the processing time. For 2000 x 2000 it took 0.95 seconds to process a frame, for 1024 x 768 it took 0.55 seconds, for 800 x 600 it took 0.4 seconds, and for 640 x 480 it took 0.35 seconds. So it is good to
Figure 6.1: Computational time vs. Resolution (Raspberry Pi)
keep the resolution as low as possible for this type of experiment. One might be better off not moving up to a higher resolution camera, to avoid other issues such as the need for more storage and processing power, but keeping the resolution lower may miss some details.
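For completeness, this is how the capture resolution is kept low on the Pi in the listings in Appendix A; the numeric property IDs are OpenCV's frame width and height properties.

import cv2

cam = cv2.VideoCapture(0)
cam.set(3, 640)   # property 3 = frame width
cam.set(4, 480)   # property 4 = frame height
ret, frame = cam.read()
cam.release()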
There are a few misalignments in our prototype. The first is the misalignment of the camera and the laser inside the pipe. The pipe is 3 inches in diameter, which leaves very limited space, and we have to align the camera and laser such that the laser projects a
perfect circle on the internal surface of the pipe while the camera captures the full ring. The diameter of the laser source was 1.5 inches and the base of the sensing module was 1 inch x 1 inch. If the laser source is placed at the center of the pipe, as shown in figure 6.2 below, the camera should be placed at an angle such that it does not touch the walls of the pipe when moved. But placing a camera with base dimensions of 1 inch x 1 inch that way is not possible, and it would not capture the entire ring. So the laser source is shifted a little away from the center on the base line, and the
Figure 6.2: System Design
camera is placed on the base line in such a way that it does not touch the walls of the pipe, as in the system design of figure 6.3. This displacement causes the laser projection to misalign on one side of the pipe wall, resulting in wrong assumptions in the calibration. Secondly, we move our prototype by hand, which may introduce errors and miscalculations: we calculate the distance, as in chapter 5, based on the number of images, but if the movement of the prototype is not constant, or if it is stopped for a while, duplicate images may be captured. We implemented a motion sensing principle to address this issue, but since the internal surface features of the pipe are largely uniform, these algorithms were not productive in solving it.
Figure 6.3: System Design
Thirdly, although the Raspberry Pi has many resources available, they were still not sufficient. Additional experiments were conducted on the Raspberry Pi to analyze this issue: we plotted CPU load and memory usage against the number of frames to characterize the performance of the Raspberry Pi. From figure 6.4 we can observe that as the number of captured frames increases,
Figure 6.4: CPU Load Vs. Number Of Frames
the load on the CPU increases. The Raspberry Pi is very limited in onboard memory,
Figure 6.5: Memory Usage Vs. Number Of Frames
with only 1 GB of RAM, so managing this memory is a major concern in this work. As the number of frames increases, free memory gradually decreases, as shown in figure 6.5, and at some point the Raspberry Pi runs out of memory and crashes. To avoid this, we reduced the load on the device by only capturing frames, processing them, and transferring them to the end point over wireless communication.
The other method we implemented was to reduce the number of frames captured by the sensing module: we implemented a motion estimation algorithm and capture only those frames that contain defects in the internal structure of the pipe. We used this method to estimate the distance of a crack from the starting point, and we are still working on how this method can be integrated into our system to find the position of defects inside the pipe. A condensed sketch follows.
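This is a condensed sketch of the frame-differencing gate; the threshold values are assumed, and the full listing is in Appendix A.3.

import cv2

BLUR_SIZE = 10              # assumed smoothing kernel size
THRESHOLD_SENSITIVITY = 25  # assumed per-pixel difference threshold

cam = cv2.VideoCapture(0)
ret, prev = cam.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
kept = []
for _ in range(100):                  # inspect a bounded number of frames
    ret, frame = cam.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.blur(cv2.absdiff(prev, gray), (BLUR_SIZE, BLUR_SIZE))
    _, mask = cv2.threshold(diff, THRESHOLD_SENSITIVITY, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 100:  # enough changed pixels -> keep the frame
        kept.append(frame)
    prev = gray
cam.release()
print(len(kept), "frames kept")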
7. Conclusion And Future Work
In this thesis work we designed and developed a single-ring structured light prototype and examined whether a Raspberry Pi could be used for the task in real time. We developed algorithms for 3D rendering and for finding the scan distance, tested our prototype with a 3-inch diameter PVC pipe, and showed that the single-ring structured light can be used to render the defects with 80 percent accuracy. Although the Raspberry Pi has many resources on board, its memory was not sufficient for 3D rendering.
As a closing statement, we believe price is one of the strongest arguments for this kind of implementation. The low cost of the Raspberry Pi makes it a more affordable alternative to what would otherwise be an expensive enterprise-grade solution. Our solution opens up possibilities for users to increase their pipeline security. Looking into the future, we believe Raspberry Pi devices have many possibilities to explore and that the last word on their role in the nondestructive testing industry has yet to be said.
This suggests that it is now feasible for advanced nondestructive evaluation applications to run onboard using a multi-core embedded computer. One can prototype a new application quickly using open source libraries, but some manual optimization is required to reach peak performance. Since multi-core systems are quickly becoming the norm, implementing multi-threading should provide significant speedups for most vision processing tasks. The algorithms developed may have to be modified to reach maximum performance, and many image processing tasks are well suited to graphics processing units, as the computations for each pixel can be performed independently of all others. For further development, an automated version of this prototype mounted on a robot could be designed: the robot would be navigated and controlled along the pipe automatically while simultaneously capturing images, which would be processed on board and transmitted over a wireless network where they can be viewed in 3D. Since
our aim is to develop the system for sensing pipes of less than 3 inches in diameter, the current system may not be suitable under such conditions. Although 3D data from a single ring can be reconstructed with the proposed technique, motion may cause registration errors and certain deformations cannot be explored; therefore, working on a multi-ring structured light pattern would be a future topic of interest.
REFERENCES
[1] Duane Priddy. PVC/CPVC Common Failures.
[2] Brian Thorne, Raphael Grasset, and Richard Green. HIT Lab NZ.
[3] Agbakwuru Jasper. Oil/Gas Pipeline Leak Inspection and Repair in Underwater Poor Visibility Conditions: Challenges and Perspectives.
[4] Barry Nicholson, Baker Hughes. In-line Inspections Guide Offshore Field Management Decisions.
[5] Ding Qingxin, Tian Fei, Li Xiaochun, and Yang Zhaofeng. A Crack Recognition Way thru Images for Natural Gas Pipelines.
[6] CEITEC. Evaluation of SSET: The Sewer Scanner and Evaluation Technology.
[7] W. Guo, L. Soibelman, and J. H. Garrett Jr. Automated Defect Detection for Sewer Pipeline Inspection and Condition Assessment.
[8] Mohammadreza Motamedi, Farrokh Faramarzi, and Olga Duran. New Concept for Corrosion Inspection of Urban Pipeline Networks by Digital Image Processing.
[9] Charles J. Hellier. Handbook on Nondestructive Evaluation.
[10] NDT Resource Center. https://www.nde-ed.org/EducationResources.
[11] J. P. Monchalin. Optical and Laser NDT: Rising Star.
[12] S. Y. Chen, Y. F. Li, and Jianwei Zhang. Realtime Structured Light Vision with the Principle of Unique Color Codes.
[13] Jason Geng. Structured-Light 3D Surface Imaging: A Tutorial.
[14] S. Chris Colbert. The NumPy Array: A Structure for Efficient Numerical Computation.
[15] Morphology. http://opencv-python-tutroals.readthedocs.org/en/latest/index.html.
[16] Thresholding. http://opencvpython.blogspot.com/2013/05/thresholding.html.
[17] VTK. http://www.vtk.org/.
[18] VTK Resources. http://www.vtk.org/VTK/resources/software.html.
[19] What is a Raspberry Pi? Raspberry Pi Foundation. https://www.raspberrypi.org/help/what-is-a-raspberry-pi/.
[20] Raspberry Pi 2 Model B. Raspberry Pi Foundation. https://www.raspberrypi.org/products/raspberry-pi-2-model-b/.
Appendix A. Algorithms For Capturing and 3D Rendering
A.1 Algorithm for Distance Estimation
import cv2
from PIL import Image
import numpy as np
import time
import os
import cPickle
import sys
from datetime import datetime

# Running state for the capture loop
stack = []        # skeletonized frames belonging to the current slice
dis_tot = 0       # total distance covered so far (cm)
prev = 0          # starting distance of the current slice (cm)
total_count = 0   # number of slices written out

# Skeletonize a frame: threshold the laser ring, then thin it by repeated
# erosion until only a one-pixel-wide skeleton remains
def img_skel(img):
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    size = np.size(img)
    skel = np.zeros(img.shape, np.uint8)
    ret, img = cv2.threshold(img, 180, 255, 0)
    element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
    done = False
    while not done:
        eroded = cv2.erode(img, element)
        temp = cv2.dilate(eroded, element)
        temp = cv2.subtract(img, temp)   # image minus its opening
        skel = cv2.bitwise_or(skel, temp)
        img = eroded.copy()
        zeros = size - cv2.countNonZero(img)
        if zeros == size:
            done = True
    pix = np.asarray(skel).copy()
    arr_pix = Image.fromarray(pix)
    return arr_pix

# Distance measurement: frame count to centimeters (10 frames per cm),
# plus the 7 cm length of one slice
def dist_total(count):
    total_distance = (count / 10) * 1
    total_distance = total_distance + 7
    return total_distance

# Stack the skeletonized frames of one slice into a volume and pickle it;
# the file is rendered later by the VTK script in A.2
def show_vtk(l, f_width, f_height, prev, dis_tot):
    k = 0
    d = len(l) * 5  # each frame is repeated 5 times to get a better view
    w = f_width
    h = f_height
    l = sorted(l)
    stack = np.zeros((h, d, w), dtype=np.uint8)
    for im in l:
        temp = np.array(im, dtype=int)
        for i in range(5):
            stack[:, k + i] = temp
        k += 5
    path = "/home/pi/Desktop/Result_files"
    filename = os.path.join(path, datetime.now().strftime("%H%M%S") +
                            "_" + str(prev) + "-" + str(dis_tot) + ".txt")
    cPickle.dump(stack, open(filename, "w"))

# Capture loop (Raspberry Pi 2, Python 2)
print("sys.version:")
print(sys.version + "\n")
try:
    cam = cv2.VideoCapture(0)
    cam.set(3, 640)   # frame width
    cam.set(4, 480)   # frame height
    count = 0
    while True:
        start = time.time()
        ret, frame = cam.read()
        cv2.imshow('frame', frame)
        cv2.waitKey(1)   # allow the preview window to refresh
        w = frame.shape[1]
        h = frame.shape[0]
        temp = img_skel(frame)
        stack.append(temp)
        if count > 5:
            total_count = total_count + 1
            dis_tot = dist_total(count)
            temp = dis_tot
            dis_tot = dis_tot + prev
            count = 0
            show_vtk(stack, w, h, prev, dis_tot)
            print "slice " + str(total_count) + " from: " + str(prev) + \
                "cm to " + str(dis_tot) + "cm"
            prev = prev + temp
            stack = []
        count = count + 1
        print count
except KeyboardInterrupt:
    end = time.time() - start
    print "Total Captured Time:", end
    print "Total number of slices", total_count
    print "Total Distance covered in centimeters", dis_tot
    cam.release()
    cv2.destroyAllWindows()
    print "\nExit by KeyboardInterrupt\n"
finally:
    del stack
A.2 Algorithm for 3D Rendering
import vtk
import cPickle
import sys

# Volume-render a pickled image stack produced by the capture script (A.1)
def vtk_renderer(stack):
    # Import the numpy volume into VTK as single-component unsigned char data
    dataImporter = vtk.vtkImageImport()
    data_string = stack.tostring()
    dataImporter.CopyImportVoidPointer(data_string, len(data_string))
    dataImporter.SetDataScalarTypeToUnsignedChar()
    dataImporter.SetNumberOfScalarComponents(1)
    w, d, h = stack.shape
    dataImporter.SetDataExtent(0, h - 1, 0, d - 1, 0, w - 1)
    dataImporter.SetWholeExtent(0, h - 1, 0, d - 1, 0, w - 1)

    # Opacity and color transfer functions
    alphaChannelFunc = vtk.vtkPiecewiseFunction()
    colorFunc = vtk.vtkColorTransferFunction()
    for i in range(256):
        alphaChannelFunc.AddPoint(i, 0.2)
        colorFunc.AddRGBPoint(i, i / 255.0, i / 255.0, i / 255.0)
    alphaChannelFunc.AddPoint(0, 0.0)          # background fully transparent
    colorFunc.AddRGBPoint(150, 0.0, 0.0, 1.0)  # highlight the laser intensity band

    volumeProperty = vtk.vtkVolumeProperty()
    volumeProperty.SetColor(colorFunc)
    volumeProperty.ShadeOn()
    volumeProperty.SetScalarOpacity(alphaChannelFunc)

    # Ray cast mapper working on the imported image data
    compositeFunction = vtk.vtkVolumeRayCastCompositeFunction()
    volumeMapper = vtk.vtkVolumeRayCastMapper()
    volumeMapper.SetVolumeRayCastFunction(compositeFunction)
    volumeMapper.SetInputConnection(dataImporter.GetOutputPort())

    volume = vtk.vtkVolume()
    volume.SetMapper(volumeMapper)
    volume.SetProperty(volumeProperty)

    # Renderer, render window and interactor
    renderer = vtk.vtkRenderer()
    renderWin = vtk.vtkRenderWindow()
    renderWin.AddRenderer(renderer)
    renderInteractor = vtk.vtkRenderWindowInteractor()
    renderInteractor.SetRenderWindow(renderWin)
    renderer.AddVolume(volume)
    renderer.SetBackground(0.1, 0.2, 0.3)
    renderWin.SetSize(600, 600)

    # Abort rendering cleanly when the window is being closed
    def exitCheck(obj, event):
        if obj.GetEventPending() != 0:
            obj.SetAbortRender(1)

    renderWin.AddObserver("AbortCheckEvent", exitCheck)
    renderInteractor.Initialize()
    renderWin.Render()
    renderInteractor.Start()

# Load the pickled stack named on the command line
def vtk_main():
    infile_name = sys.argv[1]
    data = cPickle.load(open(infile_name, "r"))
    return data

# Main starts here
if __name__ == "__main__":
    stack = vtk_main()
    vtk_renderer(stack)
A.3 Algorithm for Crack Detection
import cv2
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

# Tuning constants (values assumed; they are used below but were not
# defined in the original listing)
BLUR_SIZE = 10
THRESHOLD_SENSITIVITY = 25

org_list = []   # raw frames captured around detected motion
l = []          # skeletonized frames
dist_list = []  # crack distances from the starting point (cm)

# Distance calculation: assuming 10 frames for one centimeter
def dist_cal_edited(count):
    distance = count / 10
    distance = distance + 7
    if distance not in dist_list:
        dist_list.append(distance)
    return dist_list

def dist_total(count):
    total_distance = count / 10
    total_distance = total_distance + 7
    return total_distance

# Plot measured crack positions against the hand-measured reference values
def plot_graph(dist):
    plt.plot(dist, 'r--', label='Distance Measured')
    plt.plot([18, 20, 23, 33, 36], 'g--o', label='Actual Distance')
    plt.ylabel('Distance')
    plt.xlabel('Cracks Detected')
    plt.ylim(1, 100)
    plt.xlim(0, 10)
    plt.legend(loc='best')
    plt.title("Comparison Between Actual Distance & Measured Values")
    plt.show()

# Skeleton of image (same morphological thinning as in A.1)
def img_skel(img):
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    size = np.size(img)
    skel = np.zeros(img.shape, np.uint8)
    ret, img = cv2.threshold(img, 180, 255, 0)
    element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
    done = False
    while not done:
        eroded = cv2.erode(img, element)
        temp = cv2.dilate(eroded, element)
        temp = cv2.subtract(img, temp)
        skel = cv2.bitwise_or(skel, temp)
        img = eroded.copy()
        zeros = size - cv2.countNonZero(img)
        if zeros == size:
            done = True
    pix = np.asarray(skel).copy()
    arr_pix = Image.fromarray(pix)
    # Debug view of the skeleton (disabled):
    # cv2.imshow("skel", temp); cv2.waitKey(0); cv2.destroyAllWindows()
    return arr_pix

# Starts here
try:
    cx = 0
    cy = 0
    cam = cv2.VideoCapture(0)
    cam.set(3, 640)
    cam.set(4, 480)
    count = 0
    num = 0
    while True:
        print "Waiting for Motion Detection...."
        ret, img1 = cam.read()
        cv2.imshow('img', img1)
        grayimage1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
        ret, img2 = cam.read()
        count = count + 2
        img_width = img2.shape[1]
        img_height = img2.shape[0]
        cv2.imshow('img', img2)
        cv2.waitKey(1)
        grayimage2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
        # Get differences between the two greyed images
        differenceimage = cv2.absdiff(grayimage1, grayimage2)
        # Blur the difference image to enhance motion vectors
        differenceimage = cv2.blur(differenceimage, (BLUR_SIZE, BLUR_SIZE))
        # Threshold the blurred difference image
        retval, thresholdimage = cv2.threshold(
            differenceimage, THRESHOLD_SENSITIVITY, 255, cv2.THRESH_BINARY)
        # Get all the contours found in the threshold image
        contours, hierarchy = cv2.findContours(
            thresholdimage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        org_list.append(img1)
        org_list.append(img2)
        for c in contours:
            # Area and centroid of the next contour
            found_area = cv2.contourArea(c)
            (x, y, w, h) = cv2.boundingRect(c)
            cx = x + w / 2
            cy = y + h / 2
            if found_area > 100:
                num = num + 1
                print "Crack " + str(num) + " at: " + str(dist_cal_edited(count))
except KeyboardInterrupt:
    cam.release()
    cv2.destroyAllWindows()
    if len(org_list) != 0:
        for frame in org_list:
            im_skel = img_skel(frame)
            l.append(im_skel)
        print dist_list
        print "Total distance covered in centimeters:", dist_total(count)
        plot_graph(dist_list)
        # show_vtk(frames, width, height): assumed to be a variant of the
        # A.1 helper that stacks and renders the whole list of frames
        show_vtk(l, img_width, img_height)
    else:
        print "No Cracks Detected....Motion Not Found"
finally:
    del dist_list
    del l
    del org_list
71


Full Text

PAGE 1

STRUCTUREDLIGHTSYSTEMDESIGNFORNONDESTRUCTIVE EVALUATIONIMAGING by PRANEETHBUGGAVEETI BachelorofScience,JawaharlalNehruTechnologicalUniversity,2011 Athesissubmittedtothe FacultyoftheGraduateSchoolofthe UniversityofColoradoinpartialfulllment oftherequirementsforthedegreeof MasterofScience ElectricalEngineering 2016

PAGE 2

ThisthesisfortheMasterofSciencedegreeby PraneethBuggaveeti hasbeenapprovedforthe DepartmentofElectricalEngineering by YimingDeng,Chair DanConnors ChaoLiu April29,2016 ii

PAGE 3

Buggaveeti,PraneethM.S.,ElectricalEngineering StructuredLightSystemDesignForNondestructiveEvaluationImaging ThesisdirectedbyAssistantProfessorYimingDeng ABSTRACT Inthisthesisproject,wehavedevelopedandimplementedaprototypeforevaluationofnaturalgasplasticPipes.Weimplementedthisonraspberrypiforexperimentalanalysisusingrealtimeprocessing.Thewholeplatformconsistsof3inchdiameter plasticmodulebuiltusing3dprinter,whichholdsstructuredlightlasersource,ash eyecameraandembeddedboardforprocessingandrecognizingdefectsoftheinner pipelinestructurethatisnoteasilyaccessibleforhumaninspectors.Theembedded machineacquiresthedatafromconnectedcameraviaserialbusandfollowsthe3d renderingalgorithm.Thealgorithmisdesignedtoprocessthedatainrealtimeand renderall2dframesinto3Dpipewithexactlysamefeaturesofpipe.Theprototype wasdevelopedonalevel-by-levelapproach.Firstly,weimplementedusinganoptical camerawith 45degreeeldofviewthatcanonlycoverapartofstructuredlight. Secondly,weuseasheyecamerawith 90degreeeldofview,whichgivesdesired resultsofcompletering,whichisusedastheinputofdevelopedrenderingalgorithm. Accordingtoabovementionedmethodthemotionofcamerawithinpipeisassumed tobeconstantframespercm.Finally,thelocationofcrackswithinpipescan bedetectedandthesizeofcrackscanalsoberecognizedusingnumberofframes. Theproleofcrackscaneasilybeseenonrendered3dstructure.Theraspberrypi islimitingtheuseofprototypeinlonglengthpipesbecauseasthelengthincreaseit leadstohugevolumeofdataandcan'tbehandledusinglimitedcomputationalpower ofpi.Thepossibilityofthefutureworkwouldincludetheuseofgraphicprocessing unitforfasterprocessingandrendering. iii

PAGE 4

Theformandcontentofthisabstractareapproved.Irecommenditspublication. Approved:YimingDeng iv

PAGE 5

ACKNOWLEDGMENT Iwouldliketoexpressmydeepestthankstomyadvisor,Dr.YimingDengfor hisvaluableguidanceandsuggestionshehasgivenmeinthisproject.Withouthim, thiswouldhavebeenimpossible.Also,Ithankthecommitteemembers,Dr.Dan ConnorsandDr.ChaoLiuforspendingtheirvaluabletimeforreviewingthisreport andattendingmydefense. IwouldliketoexpressgratitudetomymotherVijayaLaxmiBuggaveeti,my fatherPhaneendraBabuBuggaveetiandmybrotherSravanKumarBuggaveetifor theirencouragementandsupportthroughouttheproject. IwouldalsoliketothankDeepakKumarandBhanuBabaiahgariforexplaining sometopicsrelatedtothisproject. v

PAGE 6

DEDICATION Thethesisisdedicatedtomyfamilyandentouragewhohaveencouragedmetogo furtherduringmylife. vi

PAGE 7

TABLEOFCONTENTS Chapter 1.Introduction...................................1 1.1Background...............................1 1.2Objectives................................5 1.3Scope...................................6 2.Non-DestructiveEvaluation..........................8 2.1Overview.................................8 2.1.1LimitationsofNon-DestructiveTesting.............9 2.1.2ConditionsForEectiveNondestructive............9 2.2TestingMethodsinNondestructiveEvaluation............10 2.3TestingMethodImplementedinthiswork...............14 3.DataProcessingUsingPythonandOpenCV.................18 3.1Background...............................18 3.2FlowofAlgorithm............................18 3.2.1FlowofAlgorithmforDistanceEstimation..........19 3.2.2OperatingSystemandLibrarySetup..............20 3.3PythonwithOpenCVLibrary.....................20 3.3.1GrayScaleConversion......................21 3.3.2NumericalPython........................21 3.3.3PythonImagingLibrary.....................23 3.4MorphologicalOperations........................24 3.4.1Dilation..............................25 3.4.2Erosion..............................26 3.4.3Thresholding...........................27 3.5VisualizationToolKit..........................29 4.StructuredLightSystemDevelopmentForSensing..............33 vii

PAGE 8

4.1StructuredLight.............................33 4.2SensingModule.............................35 4.3ComputationalMachines........................36 4.3.1ComputationalMachineSpecications:AppleMacBookIntel Corei5..............................36 4.3.2ComputationalMachineSpecications:RaspberryPIARM737 5.Results......................................40 5.1FirstGenerationResults........................40 5.2SecondGenerationResults.......................45 6.Limitations...................................52 7.ConclusionAndFutureWork.........................56 References ......................................58 Appendix A.AlgorithmsForCapturingand3DRendering.................60 A.1AlgorithmforDistanceEstimation...................60 A.2Algorithmfor3DRendering......................64 A.3AlgorithmforCrackDetection.....................66 viii

PAGE 9

LISTOFFIGURES Figure 1.1CrackDevelopedonPipe..........................1 1.2PipelineInjectionGaugePIG.......................2 1.3SewerscannerandevaluationtechnologySSET.............3 1.4ProposedSetup................................4 2.1StructureOftheSystem...........................15 2.2IllustrationofStructuredLight.......................16 2.3StructuredLightWorkingPrinciple.....................17 3.1FlowOfAlgorithm..............................18 3.2FlowForDistanceCalibration........................19 3.3a.ColorImageb.ConvertedGrayImage...................21 3.4BlackandWhite...............................23 3.5a.Square4x4Elementb.Diamond-shaped5X5elementc.Cross-Shaped5x5 element....................................24 3.6a.Originalimageb.KernelorStructuralelement,1=originc.Resulting imageafterdilation..............................26 3.7a.OriginalImageb.Kernelorstructuralelement,1=originc.Resulting ImageAfterErosion.............................27 3.8a.Binaryb.BinaryInversec.Trunkd.TrunkInversee.ToZerof.ToZeroInverse:.....................................29 3.9VTKRendering................................30 3.10VTKFlowChart...............................30 3.11VTKRenderingWindow...........................32 4.1StructuredLaserSource...........................33 4.2a.LaserProjectionInsidePipeb.ReconstructedImage..........34 4.3aandbShowstheFishEyeCameraModule................35 ix

PAGE 10

4.4PoorResolutionatMarginalRegion....................36 4.5RaspberryPiBoard.............................37 5.1a.FirstGenerationPrototypeb.TheCapturedFrame...........40 5.2a.Theholesonthesurfaceasdefectsb.Thelinearcracksinthespecimen41 5.3aandb:Reconstructedimagesusingrstgenprototype.........42 5.4a.ResultsoverMacb.NumberofframesvsProcessingTime......43 5.5PrototypeSetupwithlargeViewAngleCamera..............44 5.6CompleteRingCaptured..........................44 5.7Reconstructionresultsfrom200Frames..................45 5.8a.ResultsoverPib.NumberofframesvsProcessingTime.......46 5.9aandbRepresentstheReconstructionResultsonRaspberryPi.....47 5.10ProcessingtimecomparisonbetweenRaspberrypiandmacOSX.....48 5.11EqualSizedSlices28cmlonga0-7cmb7-14cmc14-21cm......49 5.12ObjectUnderTestforDistanceCalculation................50 5.13Comparisonbetweentheactualdistanceandmeasureddistance.....50 6.1Computationaltimevs.ResolutionRaspberryPi............52 6.2SystemDesign................................53 6.3SystemDesign................................54 6.4CPULoadVs.NumberOfFrames.....................54 6.5MemoryUsageVs.NumberOfFrames...................55 x

PAGE 11

1.Introduction 1.1Background Theundergroundpipelinenetworkisoneofthemostrigorousinfrastructuresystems.Manyofthepipelinesystemsinusetodaywereinstalled100sofyearsagoand reachingtheirdesignlifetime.Thestructuralintegrityofthesepipelinesisdecreasingduetocorrosionanddeteriorationorsomemanuallymademistakes.Becauseof thefastdegradationofthepipelinesystems,regularlyassessingtheirconditionsisa criticaltask.Thus,itisveryimportanttocontinuouslychecktheconditionsofthe pipelineonaregularbasis.Therearemanypipelineinspectiontechniques.Dueto lackininspectionordelaysininspectioncyclescausedsevereproblemslikecracksas showningure1.1[1][2].Severedamagecanbenormalizedorpreventedontimely andaccuratecheckups.Tokeepthissystemworkecientlyweneedtoacquireupto dateandreliablepipelineconditiondatausingsuitableandecientmethods.Later thedetecteddefectsareevaluatedfordecisionmakingformaintenanceandrepair. Pipelineinspectionandconditionassessmenttechnologiesareimprovingdaily.ComFigure1.1:CrackDevelopedonPipe paniesaredevelopinganewgenerationofequipmentthatinvolvesdataacquisition techniquesanddeploysmultiplesensingtechniquesforacquiringmoreaccurateinspectiondata[3].TherearemanysystemsthatwerestillinuselikePipelineInjection 1

PAGE 12

Gaugeasshowningure1.2[4]whichisasmartandintelligentinlineinspectiontools sentthroughthepipeviathecirculationofuidorgas.Thissystemisfullyautomaticandreliableinhostileenvironmentbutcannotbesupervisedwhileundertest andthespeeddependsonthemediumundertest.SewerscannerandEvaluationis anothertechnologyforpipelinedamagedetection[5].Thistechnologyobtainsimagesoftheinteriorofpipeasingure1.3[6].Thistechnologyhasbeentestedin theUnitedStatesandhasenabledanimprovedresearchinautomateddefectclassicationduetoitsimprovedimagequalitycomparedtoconventionalCCTVinsection tapes.However,theSSEThasnotbeenadoptedbyamajority,becauseofitshigher inspectioncost,longerinspectionduration,andfewotherimlementations[7].Inthis Figure1.2:PipelineInjectionGaugePIG thesisworkwedevelopedasystemwithautomatedpipecrackdetectioncapabilityto enablecollectingandinterpretingpipeconditiondataundervaryingpipematerials, colors,crackpatterns.Automatedpipelinedefectclassicationhasbeenasubject ofintensestudyinrecentyears[7],thoughtheexistingresearchhasbeenlimitedby dataacquisitiontechniques,imageanalysisandpatternrecognitionapproaches. Imageprocessingisatechniquewidelyusedinpipelineinspectioneldwithinthe professionaldomainsinthepipelinenetworkindustry.Theproposedimageprocessing techniquesinvolvesvemajorsteps:Acquiringtheimages,processingtheimages, 2

PAGE 13

Figure1.3:SewerscannerandevaluationtechnologySSET Segmentingtheimages,featuresextraction,andthepatternRecognization.Such techniquesistoinvestigatetheinnersurfaceofthepipelinebytracinganydefects. Dierentimageprocessingtestsandexperimentsneedtobecarriedoutinorderto ensurethatthealgorithmdevelopedisrobustenough.Tomakesystemworkinreal timewehavetoimplementthisonembeddedsystemsplatform.Thisisthemain characteristicofdesigninganembeddedsystemsandachievingtherequiredresults remainsatoughtask.Optimizingthedesignandformulatingthestepstoachieve goodresultsisagoalandimplementinginsuchsystemsisverycriticalsinceitwill reducethecostsandthetimerequiredtoarrivetothesolution[8]. Wehaveseengrowingresearchinterestsininspectionofbridges,pipelines,and othercivilinfrastructures.Researchinstructureinspectionusingacombinationof embeddedsystemswithopticalimagingtechniquehasresultedinourprototype.Pipe inspectionisnotanewtopicintheeldofacademicandindustry[8].Variousmethods havebeenproposedanddevelopedforinspectinganenvironmentnotaccessiblefor directhumanobservation.Manyembeddedsystemshavebeendevelopedinthe pipelineinspectionsbasedonhowtheywereusedandonwhattypeofmaterials theinspectionisdone.Ourresearchhasledtoutilizeastructuredlightsystem combinationofopticalimageprocessinginvolvingacircularlaser,andacamera 3

PAGE 14

attachedtoembeddedsystemboardasshowningure1.4.Theoverallmethodology illustratesspecicinformationofthepipeline,visiontoolsandawell-establishmixture ofcameraandlaseradaptiontothepipelineinspectionproblems.Theembedded boardwasimplementedinthisprototypeforvisualandNon-DestructiveTestingof thepipestructure.Wehavedevelopedanimageprocessingsysteminpythonand Figure1.4:ProposedSetup toimplementitonembeddedplatformforreal-timeanalysis.Wehaveimplemented raspberrypiasanembeddedplatform.Benetsofusingraspberrypiasanembedded platformandusingimage-processingtechniquesonthatwouldbethemainfocus. Thesoftwareandinstallationofhardwareforspecicenvironmenti.e.,suitablefor dierentconditionsofinnersurfaceofthepipeisascienticachievementfordesigning morecomplicatedsystems. 4

PAGE 15

1.2Objectives Pipelinesarecostlyinvestments,toachieveuninterruptedservicetheyshouldbe wellmaintainedfordecades.Continuouslymonitoringtheconditionofthepipeline isintegraltoavoidingunforeseenrepairandreplacementexpenditures.Structured lightisonesuchtechnology,whichisprogressingconsiderablyintheareasofearly detectionofdamageandmaintenanceofpipelineintegrity. Thenondestructivediagnosticpossibilitiesofstructuredlightsarecountlessbut thetechniquesthataremostimpactfularetheonesthatcouldpotentiallysavelives ofhumanfrommajordisaster.Likethestructuredlightsusedinmedicaleldfor MinimallyInvasiveSurgicalImaging.Opticalimagingrequiresscientiststoresearch, analyzeandevaluateinordertoarriveatoptimalsolutions.Thechallenge,ambiguity, lackofrulesandfreedomtocreatethingsiswhatmakesscientistsandengineering fun.Thestruggletocreatesuperiorimagingdiagnosticscanleadtofewerliveslost duetoseverfatalities.TheobjectiveofMSThesisResearchatLEAPistodevelop aprototypeusingStructuredLightandRaspberryPi,thatprovide3Ddatawith timeandspatialmeasurementscapabilitiessimultaneously.Thisimagingtechnique isaneorttosatisfytheGTIsupportedresearchprojectfortheresidualstrength andremainingusefullifeanalysisofvintagepipelinematerials. TheGTIsupportedresearchisaneortofUniversityofColoradoDenverthat aimstodevelopanewhybridsensingtechniquethatcanidentifyandproactively characterizeinjuriouspipebodywithsuperiorresolutionandhighsensitivity.The detectionresultsfromtheproposednondestructiveevaluationsensingmethodology willbeintegratedwithembeddedsystemsforaccuratetime-dependentreliability analysis.Eectivereliabilityanalysisusingnondestructiveevaluationtechniquesrequiresafusedinformationframeworkthatinvolvessensingandimageprocessing. 5

PAGE 16

Ifsuccessful,thedefectscanbesignicantlyreducedwiththisinnovativepipeline defectsdiagnosisandprognosisapproach. Theoverallobjectiveofthisresearchistodesignandimplementarealtime imageprocessingsysteminpythononraspberrypifordetectingexistingmicrometer tomillimetercracksonpipes.Theobjectiveofthisthesisistodevelopthesystemto imageinrealtimeusingRaspberrypi. 1.3Scope StructuredlightNDEimagingisasciencethatprovidesaquickandnondestructivewaytoevaluateandassessthematerialsproperties.Factorsinchoosing therightnon-destructiveevaluationtechniquedependonthetypeofmaterial,as wellas,thesize,orientationandlocationofdefects.Whetherthedefectsarelocated onthesurfaceorinternallyalsoaectstheselectionofthetechnique.Thegeometry shapeofthematerialunderthetestalsodetermineswhichNDEtechniquetouse.All thesefactorsdeterminethetypeandequipmentofstructuredlightimagingsource. Constructinganimagingsystemshouldbebasedonthepropertyofdeviceundertest andlocationoftheawsthatneedtobeimaged.Thesystemisoptimizedthrough experimentationandevaluation. Structuredlight,cameraandsmallsizedcomputerhavebeenemployedtoscan theinternalsurfaceofthematerialandtodevelopthesystemasreal-timesystem. Techniqueshavealsobeenusedtoreconstructbackthescannedsurfaceintoa3D structureforanalysis.Themostimportantcomponentoftheimagingsystemis structuredlight.Theresolutionoftheimagingtechniqueisasubstantialfactor thatshouldbeconsideredwhiledevelopingtheprototype.Chapter3and4presents asystemoverview,automateddataacquisitionalgorithmusingpythonandinitial eortsofimageenhancementtechniques.Chapter5showsthemethodsanddierent resultsobtainedfromexperiments.Chapter6presentsthevariouslimitationsofthe systemdevelopedandChapter7providesaconclusionandsummaryofthefuture 6

PAGE 17

work.Themodelingeortsinchapter4andresultsinchapter5provideexposureto initialeortsmadebytheLEAPteamtobuildtheprototype.Themodelingeorts areincompleteandareapartofthecontinuousimprovementeortsoftheLEAP researchgroup. 7

PAGE 18

2.Non-DestructiveEvaluation 2.1Overview Thenondestructivetestingisanexamination,test,orevaluationconductedon anobjectthatisusedfortesting.Thistestisperformedonasetofconditions likenotchangingoralteringthetestobjectinanyway,inordertodeterminethe absenceorpresenceofconditionsordiscontinuitiesthatmayhaveaneectonthe objectsservice.Non-destructivetestsmayalsobeconductedtomeasureotherobjects characteristics,suchassize,dimension,conguration,orstructure,includingalloy content,hardness,grainsizeetc.ThistechnologyisalsocalledasNondestructive examination,nondestructiveinspection,andnondestructiveevaluation.Thistypeof testingorevaluationsignicantlyreducesoravoidsthecriticalfailure. Nondestructivetestingisconsideredasoneofthefastestgrowingtechnologies withtheimprovementandmodicationsandthroughunderstandingofthematerials anduseofthevariousproductsandsystems,haveallcontributedtothetechnology [9].Weusethistechnologyinourdaytodayactivitiesforexample,whenacoinis depositedinaslotofavendingmachineandselectionismadeforacandyorsoftdrink, thecoinundergoesaseriesofnondestructivetestslikesize,shape,andmetallurgical propertiesanditshouldpassallthesetestssatisfactorily,fortheproducttodispatch successfullyfromthevendingmachine.Thistechnologyhasbecomeapartofevery processinindustry,whereproductfailurecanresultinaccidentsorcausesharmto humanlife.Inindustry,nondestructivetestingiseectivelyusedfor: 1.Examinationoftherawmaterialspriortoprocessing. 2.Evaluationofmaterialsduringprocessingasameansofprocesscontrol. 3.Examinationofnishedproducts. 4.Evaluationoftheproductsandstructuresoncetheyhavebeenputintoservice. 2.1.1LimitationsofNon-DestructiveTesting 8

PAGE 19

Everynondestructivetestinghassomelimitations.Athroughinspectiononefor conditionsthatwouldexistinternallyinthepartandthesecondmethodwouldbe moresensitivetoconditionsthatexistonthesurfaceofthepart.Someofthedefects onthestructurecanneverbedetectedbyfewtestingtechniques.Itisimportantto knowthatpropertiesofmaterialandasmuchinformationaspossiblebeforedeploying atypeoftestingmethod.Thenatureofdiscontinuitiesthatareanticipatedforthe particulartestobjectshouldbewellknownandunderstood. Alsoitisincorrecttoassumethatifapartissubjectedtoanon-destructivetest andifitpassesitguaranteesthatapartisgoodenough.Therearesomeminimum requirementsthataresetbeforethetestingisimplementedonaparticulartestobject andtheyarenotonlythesourcethatqualiesthepartasacceptableforimplementation.Thisrequiressomekindofmonitoringorevaluationofthepartorstructure onceitisoperational.Therearealsosomesituationswhereatestingisperformed underunqualiedexaminer,whichresultsinfailure. 2.1.2ConditionsForEectiveNondestructive Therearecertainconditionsfortheproducttoundergoeectivenondestructive testing,Firstlytheproductmustbetestableanditisessentialtoknowwhichtest methodisappropriateforwhattypeoftheproductandtestingmethodshouldbe selectedbasedonthepropertiesoftheproduct.Secondly,Therearecertainapproved proceduresthatmustbefollowedtosatisfyalltherequirements.Acertiedorqualiedpersonwhoassessestheadequacyoftheprocedureshouldapproveit.Thirdly, theequipmentusedfortestingshouldbeoperatingproperlyingoodworkingcondition.Theequipmentshouldbeperiodicallycheckedtoassurethattheequipment isworkingproperly.Fourthly,allthetestresultsshouldbedocumentedproperly andaddressallthekeypointsinthetestingexamination,includingcalibrationdata, equipmentanddescriptionofthepartandidenticationofthefaultypartsetc.Finallyallthesetestsshouldbeperformedbyaqualiedpersonalswhohasformalized 9

PAGE 20

plannedtraining,testing,anddenedexperience[9]. 2.2TestingMethodsinNondestructiveEvaluation Therearevarioustestingmethodsforevaluatingtheproducts,someofthemare describedasfollows,Visualtestingmethodisonesuchtechnique.Themainprincipal behindthevisualtestingmethodisthetransmittedlightfromthetestobjectthat iscapturedwithalightsensingdeviceorhumaneye.Mainlyusedforsewer,storm pipelineinspectionandin-serviceinspectioninindustries.Thistechnologyisleast expensiveandwithminimaltrainingrequiredfortesting.Thistechnologyrequires eectivesourceofilluminationforthetestingprocesstobesuccessful[9]. Penetranttestingisanothernondestructiveevaluationtechniqueinwhichavisibleoruorescentdyemixedwithliquidentersthesurfaceofdiscontinuitiesbycapillaryaction.Thistypeoftechnologycanbeusedonanytypeofsolidnonabsorbent materialhavinguncoatedsurfacesthatarenotcontaminated.Applicationofthis typeoftechnologyisrelativelyeasyandmaterialsusedfortestingareinexpensive. Thepersonusingthistestingmethoddoesnotrequiremuchofthetrainingbefore performingthistest.Theonlydisadvantageinthistechnologyisthesurfaceofthe testobjectshouldberelativelysmoothandfreeofcontaminants[9]. Thetechnologythatusesmagneticpropertiesandferromagneticparticlesfor nondestructivetestingisknownasMagnetictesting.Thetestobjectismagnetized andferromagneticparticlesareappliedonthesurface,aligningtodiscontinuity.This technologycanbeappliedonsurfaceandsubsurfaceforallsizesoftheobjectsunder thetest.Magnetictestingiseasytouseandinexpensivetypeoftestingcomparedto penetranttesting[9]. Radiographictestingisanothernondestructivetestingmethod,inthistechnique thetestpartisplacedbetweentheradiationsourceandlm.Thetestpartgets attenuatedonlywhenthethereareanydierencesinthematerialdensityresultinginscatteringandabsorption.Thesedierencesinthelmarerecordedonthe 10

PAGE 21

lm.Thereareseveralimagingmethodsavailableinindustries,techniquestodisplay thenalimagei.e.FilmRadiography,RealTimeRadiographyRTR,Computed TomographyCT,DigitalRadiographyDR,andComputedRadiographyCR. Themainadvantageofthistechnologyisprovidingapermanentrecordandhigh sensitivity[9]. Ultrasonictestingisatechniqueofnondestructivetestingmethodinwhichthe testmaterialissubjectedtohighfrequencysoundplusesfromtransducerthatpropagatesthroughthetestmaterial.Thesefrequenciesarereectedifthereareany interfacesinobjectundertest.Thistestisappliedformaterialsforwhichsurface nishesaregoodandshapeisnotcomplex.Thistestingprovidesinformationlike thicknessandtypeofdepthwithhighsensitivity.Informationcanbeobtainedby scanningfromformonesideinsteadofscanningfromboththefaces.Eddycurrent testing:Eddycurrentinspectionusestheprincipalofelectromagnetismasthebasis forconductingexaminations.Eddycurrentsarecreatedthroughaprocesscalled electromagneticinduction.Alltheconductivematerialscanbeexaminedforaws, metallurgicalconditions,thinningandconductivity.Theadvantagesofthistypeof testingmethodisitgivesimmediateresults,equipmentisportable,minimumpart preparationisrequiredanditinspectscomplexshapesandsizesofconductivematerial.Thistypeoftestingrequiresskillandtrainingtoconductthetestandapplies onlyforconductivematerials[10]. ThermalNDTmethodistestingmethodthatinvolvethemeasurementormappingofsurfacetemperaturesasheatowsto,fromand/orthroughanobject.The simplestthermalmeasurementsinvolvemakingpointmeasurementswithathermocouple.Thistypeofmeasurementmightbeusefulinlocatinghotspots,suchasa bearingthatiswearingoutandstartingtoheatupduetoanincreaseinfriction.Inits moreadvancedform,theuseofthermalimagingsystemsallowthermalinformation tobeveryrapidlycollectedoverawideareaandinanon-contactmode.Thermal 11

PAGE 22

Thermal imaging systems are instruments that create pictures of heat flow rather than of light. Thermal imaging is a fast, cost-effective way to perform detailed thermal analysis. Thermal measurement methods have a wide range of uses. They are used by the police and military for night vision, surveillance, and navigation aid; by firemen and emergency rescue personnel for fire assessment and for search and rescue; by the medical profession as a diagnostic tool; and by industry for energy audits, preventative maintenance, process control and nondestructive testing. The basic premise of thermographic NDT is that the flow of heat from the surface of a solid is affected by internal flaws such as disbonds, voids or inclusions. This type of testing is extremely sensitive to slight temperature changes in small parts or large areas. This technology does not work well when the parts of the test object are thick, and the testing requires highly skilled labor [10].

Acoustic Emission (AE) is another nondestructive testing method that refers to the generation of transient elastic waves produced by a sudden redistribution of stress in a material. When a structure is subjected to an external stimulus (change in pressure, load, or temperature), localized sources trigger the release of energy, in the form of stress waves, which propagate to the surface and are recorded by sensors. With the right equipment and setup, motions on the order of picometers (10^-12 m) can be identified. Sources of AE vary from natural events like earthquakes and rockbursts to the initiation and growth of cracks, slip and dislocation movements, melting, twinning, and phase transformations in metals. In composites, matrix cracking and fiber breakage and debonding contribute to acoustic emissions. AEs have also been measured and recorded in polymers, wood, and concrete, among other materials. Detection and analysis of AE signals can supply valuable information regarding the origin and importance of a discontinuity in a material. Because of the versatility of Acoustic Emission Testing (AET), it has many industrial applications (e.g. assessing structural integrity, detecting flaws, testing for leaks, or monitoring weld quality) and is used

extensively as a research tool. Acoustic Emission is unlike most other nondestructive testing (NDT) techniques in two regards. The first difference pertains to the origin of the signal. Instead of supplying energy to the object under examination, AET simply listens for the energy released by the object. AE tests are often performed on structures while in operation, as this provides adequate loading for propagating defects and triggering acoustic emissions. The second difference is that AET deals with dynamic processes, or changes, in a material. This is particularly meaningful because only active features (e.g. crack growth) are highlighted. The ability to discern between developing and stagnant defects is significant. However, it is possible for flaws to go undetected altogether if the loading is not high enough to cause an acoustic event. Furthermore, AE testing usually provides an immediate indication relating to the strength or risk of failure of a component. Other advantages of AET include fast and complete volumetric inspection using multiple sensors, permanent sensor mounting for process control, and no need to disassemble and clean a specimen. Unfortunately, AE systems can only qualitatively gauge how much damage is contained in a structure. In order to obtain quantitative results about size, depth, and overall acceptability of a part, other NDT methods (often ultrasonic testing) are necessary. Another drawback of AE stems from loud service environments, which contribute extraneous noise to the signals. For successful applications, signal discrimination and noise reduction are crucial [9].

Vibration Analysis is another testing method; in this type of testing, vibration signatures specific to a piece of rotating machinery are recorded, and analyzing that information determines the condition of the equipment. Three types of sensors are commonly used in this technology: displacement sensors, velocity sensors and accelerometers. Displacement sensors use eddy currents to detect vertical and horizontal motion, and are well suited to detecting shaft motion and changes in clearance tolerances. Velocity sensors use a spring-mounted coil of wire, with

the outer case of the sensor attached to the part being inspected. The coil of wire moves through a magnetic field, generating an electrical signal that is sent back to a receiver and recorded for analysis. Newer model vibration sensors use time-of-flight technology and improved analysis software. Velocity sensors are commonly used in handheld units [10].

Visual inspection is another nondestructive testing technique, mostly used in the automobile industry, where the visual aspect of the vehicle body is an important factor. Visual inspection is also very important in the airline industry, and accounts for most of the inspection tasks in the search for cracks and corrosion. Light-guiding systems have been developed for looking into areas where direct eye inspection is not possible. Direct eye vision is often replaced by artificial vision with a video camera using an imaging detector, also known as a charge coupled device or CCD [11].

2.3 Testing Method Implemented in this Work

Many three-dimensional (3D) reconstruction techniques have been explored by researchers; one of the widely known methods is stereo vision, which is a passive method for 3D acquisition [12]. To avoid the difficulty of the correspondence problem, we consider an active method, i.e. the structured light system. The structured light system uses a single strip of ring light projected onto the walls of the object. The structured light system considered in this research work consists of a CCD camera and a laser projector, as shown in figure 2.1. It is similar to the traditional stereo vision system, but its second camera is replaced by the light source. The light source projects a known light pattern on the measuring scene. The single-lens camera captures the illuminated scene, and the required 3D information is obtained by analyzing the deformation of the imaged pattern with respect to the projected one. One principal method of 3D surface imaging is based on the use of active illumination of the scene with a specially designed 2D pattern. As illustrated in equation (2.1) [13], the illumination is generated by a special projector or light source.
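As a small illustration of such a designed pattern (a sketch, not taken from the thesis code; all parameter values are illustrative), a circular ring pattern like the one used in this work can be generated as follows:

    import numpy as np

    def ring_pattern(size, radius, thickness):
        # Binary image of a circular ring centered in a size x size frame
        y, x = np.ogrid[:size, :size]
        r = np.sqrt((x - size / 2.0) ** 2 + (y - size / 2.0) ** 2)
        return (np.abs(r - radius) < thickness).astype(np.uint8) * 255

    pattern = ring_pattern(480, 150.0, 2.0)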

Figure 2.1: Structure of the System

The intensity of each pixel on the structured-light pattern is represented by the digital signal

I_ij = I(i, j),   i = 1, 2, ..., I,   j = 1, 2, ..., J        (2.1)

An imaging sensor (a video camera, for example) is used to acquire a 2D image of the scene under the structured-light illumination. If the scene is a planar surface without any 3D surface variation, the pattern shown in the acquired image is similar to that of the projected structured-light pattern. However, when the surface in the scene is nonplanar, the geometric shape of the surface distorts the projected structured-light pattern as seen from the camera. The principle of structured-light 3D surface imaging techniques is to extract the 3D surface shape based on the information from the distortion of the projected structured-light pattern. Accurate 3D surface profiles of objects in the scene can be computed by using various structured-light principles and algorithms [13].

As given in equation (2.2) [13], the geometric relationship between an imaging sensor, a structured-light projector, and an object surface point can be expressed by

the triangulation principle as

R = B sin(θ) / sin(α + θ)        (2.2)

where B is the baseline distance between the imaging sensor and the projector, and α and θ are the angles that the imaging and projection rays make with the baseline. The key for triangulation-based 3D imaging is the technique used to differentiate a single projected light spot from the acquired image under a 2D projection pattern.

Figure 2.2: Illustration of Structured Light

Actively illuminated structured-light patterns may include spatial variations in all (x, y, z) directions [13], thus becoming a true 3D structured-light projection system. For example, the intensity of the projected light may vary along the optical path owing to coherent optical interference. However, most structured-light 3D surface imaging systems use 2D projection patterns [13].

Figure 2.3 represents a computer animation of a structured-light 3D imaging system to demonstrate its working principle. An arbitrary target 3D surface is illuminated by a structured-light projection pattern. In this particular case, the structured-light pattern is a circular, ring-like pattern. An imaging sensor acquires the image of the target 3D surface under the structured-light illumination.

The image captured by the imaging sensor varies accordingly. Based on the distortion of the structured-light pattern seen on the sensed image, in comparison with the undistorted projection pattern, the 3D geometric shape of the target surface can be computed accurately.

Figure 2.3: Structured Light Working Principle
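As a concrete illustration of equation (2.2), the following minimal Python sketch (not part of the prototype code; the function name and the example values are illustrative) computes the range R for a given baseline and angle pair:

    import math

    def triangulation_range(baseline_cm, alpha_rad, theta_rad):
        # Equation (2.2): R = B * sin(theta) / sin(alpha + theta)
        return baseline_cm * math.sin(theta_rad) / math.sin(alpha_rad + theta_rad)

    # Example: 5 cm baseline, alpha = 30 degrees, theta = 45 degrees (illustrative values)
    R = triangulation_range(5.0, math.radians(30), math.radians(45))
    print "Range R = %.2f cm" % R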

3. Data Processing Using Python and OpenCV

3.1 Background

This chapter explains the different image processing techniques that we have implemented to identify the defects from the specific shape of the ring profile and to separate it from the image background. After the extraction of the ring profile, defects can be observed from the 3D rendering of all the ring profiles. Image processing techniques are applied in order to enhance the image quality. Two morphological techniques, erosion and dilation, are applied to filter the image. We also implemented a visualization tool to render the 2D data in 3D to analyze the defects. Some of the image processing techniques and their implementation are explained in the sections below.

3.2 Flow of Algorithm

Figure 3.1: Flow of Algorithm

The flow chart in figure 3.1 explains the sequence of steps implemented. The first block captures images continuously; the second block grabs each frame and applies

various image processing techniques. Next, all the processed frames are stacked in a 2D memory. The final step is to grab the processed 2D frames from memory, project them onto a 3D memory, and display all the stacked 3D data on the screen.

3.2.1 Flow of Algorithm for Distance Estimation

Figure 3.2: Flow for Distance Calibration

The flow chart in figure 3.2 explains the sequence of steps implemented for estimating the distance of a crack in this work. First we capture multiple frames continuously; then we calculate the difference between two frames, and based on this difference we assume that a crack has been found. Then we grab the count value based on the number of images captured. In our work we assumed that we capture 10 images for every one centimeter, so we used this formula, where the scanning speed is constant:

Scan Distance = Number of Frames Captured / Number of Frames Per Centimeter
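As a minimal sketch of this calculation (assuming the constant 10 frames/cm calibration stated above; the function name is illustrative):

    def scan_distance_cm(frames_captured, frames_per_cm=10):
        # Constant scanning speed assumed, as stated above
        return frames_captured / float(frames_per_cm)

    print scan_distance_cm(210)   # 210 frames -> 21.0 cm, matching the chapter 5 experiment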

3.2.2 Operating System and Library Setup

On the Raspberry Pi boards, we installed a supported Debian-based Linux variant known as Raspbian, together with the Visualization ToolKit (VTK) and Open Computer Vision (OpenCV) built from source. OpenCV, a computer vision library, is very popular and has considerable functionality relevant to this project. It also supports Video4Linux, a project to support common video and image capture. Setting up the libraries takes a lot of time and consumes a lot of memory; while other computer vision libraries exist, OpenCV is the most popular and hence is best supported online. The choice of programming language was guided by framework support and the need for performance when running on our boards. OpenCV and VTK have official support, with bindings for C, C++ and Python; our modules were implemented in Python. We installed OpenCV following http://www.pyimagesearch.com/2015/02/23/install-opencv-and-python-on-your-raspberry-pi-2-and-b/ and VTK from https://blog.kitware.com/raspberry-pi-likes-vtk/, and a few of the supporting libraries like NumPy and SciPy from http://wyolum.com/numpyscipymatplotlib-on-raspberry-pi/. Setting up VTK requires the CMake build tool (ccmake), which can be downloaded from www.cmake.org.

3.3 Python with OpenCV Library

The Open Source Computer Vision library (OpenCV) is an extensive C/C++ library for real-time computer vision. Stereo vision capabilities are implemented as a part of OpenCV. Lately the library has been fully ported to the programming language Python. The implementation of the software in this thesis is based on the OpenCV 2.3.1 Python library [2].
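Once these libraries are built and installed, a quick check from the Python interpreter confirms the bindings are visible (a sketch; vtk.VTK_VERSION is the standard VTK version constant):

    import cv2
    import numpy
    import vtk

    print "OpenCV:", cv2.__version__
    print "NumPy :", numpy.__version__
    print "VTK   :", vtk.VTK_VERSION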

3.3.1 Gray Scale Conversion

The first image transformation to be done to reach a detectable feature is gray scale conversion. It simply means converting the RGB color span into a corresponding image with one color (gray) of varying intensity across the image. Figure 3.3 below illustrates the conversion.

Figure 3.3: (a) Color Image (b) Converted Gray Image

Gray scale intensity is stored as an 8-bit integer, giving 256 possible different shades of gray from black to white. That is the reason that only one channel is chosen to represent the image when converting from RGB, which is an array of (255, 255, 255). Gray scale conversion is performed in OpenCV using the following command:

    grayimage = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

3.3.2 Numerical Python

Numerical Python (NumPy) is a package widely used for scientific computing with Python, and provides many useful concepts like array objects and linear algebra functions. In this thesis work we converted the captured images into arrays to perform normalization, which is needed for operations like erosion, dilation and thresholding [14]. We convert the image into a NumPy array by using the following code:

    np.array(image)

The converted NumPy array, shown below as an example, contains the value of each pixel at each position:

    [[187 187 186 ..., 163 163 163]
     [187 187 186 ..., 163 163 163]
     [187 187 186 ..., 163 162 162]
     ...,
     [222 220 218 ..., 143 143 144]
     [225 221 218 ..., 143 143 143]
     [227 221 218 ..., 143 143 143]]

This represents the image as a data cube, and each pixel value is an 8-bit value from 0 to 255. We set a certain threshold to filter the image from noise. For example, the color image shown in figure 3.3(a) above is converted to the NumPy array shown below:

    [[27 27 27 ..., 28 28 227]
     [27 27 27 ..., 28 28 227]
     [27 27 27 ..., 29 29 227]
     ...,
     [25 25 25 ..., 35 35 209]
     [25 25 25 ..., 34 34 210]
     [25 25 25 ..., 34 34 212]]

Now each pixel can be subjected to various filtering techniques like erosion, dilation and thresholding to remove background noise.
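As a minimal sketch of this array-level filtering (the file name is illustrative; the threshold of 180 is the value used in the appendix code):

    import cv2
    import numpy as np

    img = cv2.imread("frame.png")                 # illustrative file name
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    arr = np.array(gray)                          # 2D array of 8-bit intensities
    print arr.shape, arr.dtype                    # e.g. (480, 640) uint8
    arr[arr < 180] = 0                            # suppress background pixels below the threshold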

3.3.3 Python Imaging Library

The Python Imaging Library (PIL) provides various image processing capabilities to the Python interpreter. This library provides extensive support for various file formats and powerful image processing capabilities. It provides fast access to data stored in various pixel formats, and is used to convert between pixel formats, print images, etc. It also provides basic image processing functionality, such as resizing, rotation and arbitrary transforms. We can even build a histogram and make certain decisions based on it. The most important class in the Python Imaging Library is the Image class, used in this work under the same name. We can create instances of the class by loading images from files, processing other images, or creating images from scratch.

We load an image from a file by using the open function in the Image module:

    from PIL import Image
    im = Image.open(name)

The function above returns the image object. We use this to convert from an array back to an image file: first we convert the image to an array and process the pixels, and then convert the array back to an image using the function given below:

    imfile = Image.fromarray(bw)

The resulting image is shown below in figure 3.4.

Figure 3.4: Black and White
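The array-to-image round trip used throughout this work can be condensed as follows (a sketch; file names are illustrative):

    import numpy as np
    from PIL import Image

    im = Image.open("frame.png")        # load the image (illustrative name)
    bw = np.array(im.convert("L"))      # convert to a grayscale NumPy array
    out = Image.fromarray(bw)           # convert the processed array back to an image
    out.save("bw.png")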

3.4 Morphological Operations

Morphological operations are simple transformations performed on an image, affecting the form, structure or shape of an object. These operations are performed only on binary images, and need two inputs: one is the original image, and the second is the structuring element or kernel [15]. Morphological operations are used in this work to magnify the defects clearly for analysis. Two basic morphological operators are used: erosion and dilation. Dilation allows objects to expand, which fills small holes in the image and connects disjoint regions [15]. Erosion compresses the size of objects by etching away their boundaries. These operations are tuned to the application by proper selection of the structuring element, which determines exactly how the object is dilated or eroded. The structuring element is a small binary image, i.e. a small matrix of pixels, each with a value of zero or one. The dimensions of the matrix specify the size of the structuring element; the shape is determined by the pattern of ones and zeros, and one of the pixels in the matrix determines the origin of the structuring element. Simple examples of structuring elements are explained below:

Figure 3.5: (a) Square 4x4 element (b) Diamond-shaped 5x5 element (c) Cross-shaped 5x5 element

Figure 3.5(a) represents the 4x4 square matrix with the origin as the colored box, figure 3.5(b) represents the diamond-shaped 5x5 element, and figure 3.5(c) represents the cross-shaped 5x5 element. Structuring elements play a very important role as convolution kernels in image filtering. When the kernel is placed on a binary image, each

pixel under the structuring element is associated with the neighboring element in the kernel. The structuring element is said to fit the image if, for each of its pixels set to 1, the corresponding image pixel is also 1. A structuring element is said to hit, or intersect, an image if, for at least one of its pixels set to 1, the corresponding image pixel is also 1.

3.4.1 Dilation

Dilation is a simple morphological operation that allows an object to expand [15]. The process is performed by sliding the structuring element over the image. If the origin of the structuring element coincides with a white pixel, there is no change and we move to the next pixel; if the origin of the structuring element coincides with a black pixel in the image, all the pixels covered by the structuring element are made black. This type of process is useful in joining broken parts in an image. An example of the dilation process is shown below in figures 3.6(a), 3.6(b) and 3.6(c). In this process all the holes are filled, the boundary is expanded, and all the black pixels in the original image are retained.

Figure 3.6: (a) Original image (b) Kernel or structuring element, 1 = origin (c) Resulting image after dilation

Figure 3.6(a) represents the original image in our application; when this image undergoes the dilation process with the structuring element in figure 3.6(b), the resulting image is 3.6(c). After the dilation process the boundary of the image is expanded, and we can observe the increased thickness in image (c).

3.4.2 Erosion

This process is similar to dilation, but where dilation turns pixels black, erosion turns pixels white [15]. As in dilation, we use the same kind of structuring element of size 3x3 and slide it over the entire image. If the origin of the structuring element coincides with a white pixel in the image, there is no change and we slide over to the next pixel; if the origin of the structuring element coincides with a black pixel in the image and at least one of the black pixels in the structuring element falls over a white pixel in the image, then we change the black pixel in the image

from black to white. All the pixels near the boundary are discarded, depending upon the size of the kernel. The thickness of the foreground object decreases, so the white region decreases in the image. This technique removes small white noise and detaches weakly connected objects in an image. A simple example explains the erosion process in detail below:

Figure 3.7: (a) Original image (b) Kernel or structuring element, 1 = origin (c) Resulting image after erosion

In figure 3.7(c), the only remaining pixels are those where the origin of the structuring element in figure 3.7(b) could be placed with the entire structuring element contained in the original image (a). Since the structuring element is 3x3, part of the ring is eroded away, resulting in image (c).

3.4.3 Thresholding

In order to increase the picture contrast, the intensity levels of all the image pixels are adjusted to span the entire available intensity range (0, 255).

This is an adaptive image processing step where the pixel intensities are mapped in such a way that all input intensities that fall below the average intensity, and all intensities higher than the average, are expanded to span the entire image intensity range. Thresholding is a simple image segmentation method that converts a gray scale image into a binary image, where the two levels are assigned to pixels that are below or above the specified threshold value [16]. The different types of thresholding operations in OpenCV are given below:

    cv2.THRESH_BINARY
    cv2.THRESH_BINARY_INV
    cv2.THRESH_TRUNC
    cv2.THRESH_TOZERO
    cv2.THRESH_TOZERO_INV

The threshold function is called as:

    (T, threshImage) = cv2.threshold(src, thresh, maxval, type)

The first parameter is the input gray image on which the operation is to be applied. The second parameter is the threshold value, used to classify the pixel intensities in the gray image. The third parameter is the max value used if any given pixel in the image passes the threshold test. The fourth parameter is the threshold type. In the first method we simply segment the image to be black with a white background, as shown in figure (b). In the second method the colors are inverted, as shown in figure (c). The third method leaves the pixel intensities as they are if the source pixel value is not greater than the supplied value, as in figure (d). In the fourth method the source pixel is set to zero if the source pixel is not greater than the value supplied, as shown in figure 3.8.
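A short sketch tying these operations together (the 3x3 cross kernel and the threshold of 180 match the appendix code; the input file name is illustrative):

    import cv2

    gray = cv2.imread("ring.png", 0)   # 0 = load as grayscale (illustrative file name)
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))

    # Binary threshold: pixels above 180 become 255 (white), the rest become 0
    T, binary = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)

    eroded = cv2.erode(binary, kernel)    # shrinks the ring, removing small white noise
    dilated = cv2.dilate(binary, kernel)  # expands the ring, closing small gaps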

Figure 3.8: (a) Binary (b) Binary Inverse (c) Trunc (d) Trunc Inverse (e) To Zero (f) To Zero Inverse

3.5 Visualization ToolKit

The Visualization ToolKit (VTK) is an open-source, cross-platform software system for 3D computer graphics, image processing, and visualization. It is a cross-platform toolkit for scientific data processing, visualization, and data analysis, and it offers reproducible visualization and data analysis pipelines for a range of scientific data. It implements a number of visualization modalities that include volume rendering and 2D rendering capabilities. These capabilities are primarily wrapped in Python to offer advanced data processing, visualization, and analysis to the user directly or through libraries built on top of it [17]. 3D reconstruction of images is an important research direction in nondestructive visualization. 3D reconstruction from 2D images rebuilds the 3D entity for the user to observe and interact with. By reconstructing the rendered images in 3D, the spatial location, size and complete information of a defect are given to the user, helping to improve the efficiency and accuracy of the diagnosis. The Visualization ToolKit has decent rendering performance and is good for rapid prototyping of 3D visualization tools, but is not suitable for rendering lots of

dynamic content. A simple example of the visualization pipeline is shown in figure 3.9 below. To visualize our data in VTK, the pipeline is set up as shown in the flow chart below.

Figure 3.9: VTK Rendering

Figure 3.10: VTK Flow Chart

The Visualization ToolKit provides various source classes that are used to construct simple geometric objects like spheres, cubes, cones, cylinders, etc. A volume is created with:

    volume = vtk.vtkVolume()

vtkSphereSource, vtkCubeSource, vtkConeSource and vtkVolumeSource are a few examples of sources in the Visualization ToolKit. Readers read volumetric data from an image file [18]. A filter takes data as input, modifies it, and returns the modified data; the geometric object is then generated from the filtered data. The mapper maps the data to graphics primitives that can be displayed by the renderer. We use vtkVolumeRayCastMapper; it works with single-component vtkImageData with a scalar type of unsigned char or unsigned short. It performs its calculations in floating point, and it provides an easy way to subclass the specific ray cast function to perform operations along the ray. The actor represents the geometry and properties like scale, orientation, textures, various rendering properties, etc., of the object in a rendering scene:

    vtk.vtkActor()

Rendering is the process of converting 3D graphics primitives, a specification for materials, and a camera view into a 2D image that can be displayed on the screen:

    vtk.vtkRenderer()

The Visualization ToolKit uses OpenGL for rendering in the background, and uses the vtkRenderer class to control the rendering process for actors and scenes:

    vtk.vtkRenderWindow()

The vtkRenderWindow class creates a window for the renderers to draw into, as shown in figure 3.11 below. The interactor class vtkRenderWindowInteractor provides a platform-independent window that helps the user interact with the 3D rendered image:

    vtk.vtkRenderWindowInteractor()

A minimal end-to-end pipeline sketch follows the figure.

Figure 3.11: VTK Rendering Window
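As a compact illustration of the source → mapper → actor → renderer chain described above (a sketch using a cone source; all class names are standard VTK, the window size is illustrative):

    import vtk

    cone = vtk.vtkConeSource()                 # source: generates polygonal cone geometry
    mapper = vtk.vtkPolyDataMapper()           # mapper: converts geometry to graphics primitives
    mapper.SetInputConnection(cone.GetOutputPort())

    actor = vtk.vtkActor()                     # actor: geometry plus rendering properties
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)

    window = vtk.vtkRenderWindow()             # window the renderer draws into
    window.AddRenderer(renderer)
    window.SetSize(400, 400)                   # illustrative size

    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)
    interactor.Initialize()
    window.Render()
    interactor.Start()                         # hand control to the interactor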

4. Structured Light System Development for Sensing

4.1 Structured Light

Structured light is the process of projecting a known pattern onto a scene. The deformation of the light on striking the surface allows vision systems to detect the surface conditions of an object. Figure 4.1 shows the structured laser source that we used in our work. Structured light is a technique widely used to measure 3D surfaces, obtained by projecting a sequence of light patterns. The main advantage of this type of technology is that the projected features are easily distinguished by the camera. When the patterned light is projected onto a surface and the surface contains any defects, the pattern gets deformed, and this can be recorded using a vision system. To avoid ambiguities while matching the projected features in the image, the pattern is coded. Structured light is used in dense range sensing, industrial inspection, object recognition, 3D map building, reverse engineering, fast prototyping, etc.

Figure 4.1: Structured Laser Source

The pattern of light is projected by means of laser light emitters or digital projectors, as shown in figure 4.1, which form different shapes of patterns on the surface. Digital projectors have focusing problems, which limits the depth range of the measuring area. So we use a laser source in our work to detect defects, because of its uniform intensity. An example of the reconstruction of the

surface of the pipe, performed by means of laser projection of single light patterns, is presented in figure 4.2 below.

Figure 4.2: (a) Laser Projection Inside Pipe (b) Reconstructed Image

Laser projectors are thin and can be placed inside compact devices. Due to their uniformity, laser projectors are especially useful for structured light applications, including industrial inspection, alignment and machine vision. Structured light lasers used in inspection minimize process variation by drawing attention to parts that do not conform to specifications. They can pick out vegetables with blemishes on food processor lines, or ensure that the right colored capsule goes in the correct bottle on drug packaging lines. Another laser application is alignment. In computer assembly, a laser system can help an operator determine whether a computer chip is perfectly positioned on a circuit board. Machine vision is a combination of structured lighting, a detector, and a computer to precisely gather and analyze data. For example, it is used on robots as a 3-D guiding system to place or insert a part on a car, such as a windshield wiper or a door.

Visual range measurement is a task usually performed by stereo imaging systems. Depth is calculated by triangulation using two or more images of the same scene. The previously calibrated cameras are used for crossing the rays coming from the same

point in the observed scene. However, the search for correspondences between images is generally a difficult task, even when taking epipolar constraints into account. A solution to this problem is offered by using the information from a structured light pattern projected into the scene. Usually this is achieved by a combination of a pattern projector and a camera.

4.2 Sensing Module

Sensors are devices that are frequently used to detect and respond to electrical or optical signals. Sensors convert a physical parameter into a signal which can be measured electrically. The sensing module that we are using in this work is a fish-eye lens camera, as shown in figure 4.3. Fish-eye lenses are imaging systems with a very short focal length, which gives a hemispherical field of view, as shown in figure 4.4.

Figure 4.3: (a) and (b) The Fish Eye Camera Module

The obtained images benefit from a good resolution in the center but have poor resolution in the marginal region, as shown in figure 4.4. In human vision, a central zone of the retina, called the fovea centralis, provides high quality vision while the peripheral region deals with less detailed images. Therefore the vision acquired by a fish-eye lens is somewhat similar to human vision from the point of view of resolution distribution. In addition, fish-eye lenses introduce radial distortion, which is difficult to remove.

Figure 4.4: Poor Resolution at Marginal Region

Despite these drawbacks, the large field of view offered by fish-eye lenses made this type of element an attractive choice for our work.

4.3 Computational Machines

Our optical system was implemented on two popular open hardware boards and on Mac OS X; the first two are community designed and supported. Documentation is freely available and the boards themselves are produced by non-profit entities; Mac OS X was used for testing the concept.

We considered boards that could run the entire Linux operating system for ease of setup and flexibility in software installation. This immediately discounted the archetypal embedded family of boards, since these are not general-purpose boards, being built around low cost processors with efficient power utilization.

Our first choice was the Raspberry Pi Model B+, with 512 MB RAM, two USB ports and an Ethernet port. This quickly proved to be underpowered, and we then migrated to the faster Raspberry Pi 2 Model B, with more processing power for 3D imaging.

4.3.1 Computational Machine Specifications: Apple MacBook Intel Core i5

To implement this methodology we need a machine for processing and display; initially we tested our concept on Mac OS X. The software package managing the

inspection system runs on a 2.6 GHz Intel Core i5 with a UNIX operating system, an Intel Iris 1536 MB graphics card and 8 GB RAM. It has USB ports to connect external devices. It executes the following functions:
1. Capture a sequence of frames inside the pipe.
2. Perform morphological operations on each frame.
3. 3D-render the processed frames.

The logic, written in Python, controls the capturing and rendering process on the system. The results obtained were very accurate using this type of machine, but due to its size and power constraints this system is not feasible for our application. We therefore researched alternative embedded platforms for a real-time implementation of the system, which consume less power and perform the same as the previous system.

4.3.2 Computational Machine Specifications: Raspberry Pi ARM7

The Raspberry Pi model B+ is the fourth iteration of the original Raspberry Pi. It was released for sale in July 2014, and was replaced in February 2015 by the new Raspberry Pi 2 model B.

Figure 4.5: Raspberry Pi Board

The Raspberry Pi model B+ has a single-core ARM1176 processor running at 700 MHz with 512 megabytes of memory, and also has four USB ports, one 10/100 Megabit/s Ethernet port and one micro-SD card slot for storage. Compared to the earlier models, the Raspberry Pi model B+ has

more USB ports, and the old SD card slot has been replaced by a micro-SD card slot with a push-push locking mechanism. It also has lower power consumption compared to the earlier models [19].

The Raspberry Pi 2 model B was released in February 2015 as the latest Raspberry Pi model with updated hardware. It has a quad-core ARM Cortex-A7 CPU running at 900 MHz with 1024 megabytes of onboard memory, four USB ports, one 10/100 Megabit/s Ethernet port, and one micro-SD card slot for storage [20]. The Raspberry Pi 2 model B is faster and has twice the amount of memory compared to its predecessor, the Raspberry Pi model B+. It now has a quad-core processor, which is speculated to make the Raspberry Pi 2 model B up to six times faster than the previous models.

Arch Linux ARM is an ARM-based Linux distribution ported from the x86-based Linux distribution Arch Linux. The Arch Linux philosophy is that users should be in total control of the operating system, which allows users to implement it in any way they like. Therefore Arch Linux can be used for simple tasks as well as more advanced scenarios. Arch Linux ARM is based directly on Arch Linux and they share almost all the code, which makes Arch Linux ARM a very fast, Unix-like and flexible Linux distribution. Arch Linux ARM has adopted the rolling-release update function from the x86 version. This means that small iterations are made available to the users as soon as they are ready, instead of releasing larger updates every few months [20].

The crack detection program is laid out to run on top of an operating system, so installing one on the Raspberry Pi is the first thing to do. The preferred system for this task is NOOBS, as there is a distribution available which is optimized in particular for the Raspberry Pi. It can be downloaded from http://www.raspberrypi.org/downloads/noobs

For this thesis the NOOBS v1.7.0 zip distribution was used. The operating system always runs from an SD card, which was found to be reasonable during the development of the Raspberry Pi. SD cards deliver high capacity, are cheap and fast, easily writable and easily changeable in case of damage. The size of the card used should amount to at least 4 GB, as it will contain the operating system (about 2 GB) and should still provide some additional space. We used a 32 GB SD card, since we were dealing with high resolution images. The file system used for the first setup during the integration of the Raspberry Pi into the WSI system was ext4. It is recommended to continue working with ext4, or at least another journaling file system, as journaling provides higher security in case of a power failure or a system crash. After unpacking the downloaded NOOBS image, it can be copied onto the SD card. If connected to a monitor, booting the Raspberry Pi with its new operating system for the first time should display a configuration menu. In case of not using an extra monitor for the Raspberry Pi, the configuration menu can be accessed with the help of the command raspi-config as root. One should at least expand the file system on the SD card to be able to use all of its space.

On top of the operating system we install the OpenCV libraries and the Visualization ToolKit with the Python interface. The OpenCV libraries provide a set of instructions to perform morphological operations. The Visualization ToolKit installed on this machine provides the rendering system for analyzing the cracks.

Since the Raspberry Pi has a lot of resources available on the system, it provides support for a vast variety of libraries. But all the resources available on the system provide only limited processing capability for the current real-time implementation. The onboard memory is very limited (about 1 GB of RAM), due to which we experience limited image processing capabilities.

5. Results

5.1 First Generation Results

Figure 5.1(a) illustrates the prototype that has been used for scanning, and figure 5.1(b) shows the captured frame inside the test object. The camera in this prototype is connected to the laser source and shifted a few centimeters back. This shifting is needed because the view angle of the camera is small, so some shifting is required to make the camera capture the rest of the laser ring. By moving the prototype with constant speed, the camera collects the data, which is saved on the computer.

Figure 5.1: (a) First Generation Prototype (b) The Captured Frame

We considered 3-inch diameter PVC pipes for testing our prototype. Some defects were then made in the pipe, as illustrated in figure 5.2(a). Through-hole defects with different diameters were introduced. In the white pipe specimen, the smallest hole in the pipe has a diameter of 0.0787 inches and the biggest hole is 0.314 inches. Also, there are additional holes with 0.157 and 0.197 inch diameters. Moreover, some screws were installed to mimic different defect types. In the black pipe specimen, some damage was introduced as different kinds of defects.

Figure 5.2(b) shows the cracks that run in different directions in the pipe; the width of the crack is 0.039 inches.

Figure 5.2: (a) The holes on the surface as defects (b) The linear cracks in the specimen

We configured our prototype on Mac OS X, rendered 3D images using the Visualization ToolKit, and displayed the reconstructed image on screen. As shown in figure 5.3, the laser rings are not complete, because the camera captures a part of the laser source; as a result, some of the image is blocked. This type of missing data is called shadowing. The shadow area blocks about 25 percent of the whole scene, so this kind of missing data is one of the limitations that should be considered and solved in the next sections. However, the laser ring takes the inner surface shape of the pipe. The changes in the laser light shape will be considered as indications of the defects that we want to see in the reconstructed images.

Figure 5.3: (a) and (b) Reconstructed images using the first generation prototype

We observed that the Mac takes 0.05 seconds for processing one frame; for 10 frames it takes 0.29 seconds, and for 100 frames it takes 0.08 seconds, as given in figure 5.4(a) below. We plotted a graph of the number of frames versus processing time on Mac OS X, as shown in figure 5.4(b) below.

Figure 5.4: (a) Results on Mac (b) Number of frames vs. Processing Time

Using a large view angle camera is one of the solutions that makes the reconstructed image better. The difference between this camera and the previous one is the view angle: in the old one the view angle is about 45 degrees, but in the new one it is 180 degrees, so the whole scene can be detected by putting the camera and the laser source on the same baseline, as shown in figure 5.5. We considered this setup for our further experimentation and generated results on Mac OS X.

Figure 5.5: Prototype Setup with Large View Angle Camera

First, we scanned the entire pipe and recorded the video of the scan. This prerecorded video was used for processing and reconstruction. We found that it takes 0.002 seconds to process one frame with a resolution of 480x640, and for 200 frames it takes 0.0027 seconds to process and display. The reconstruction results for 200 frames are shown in figure 5.6.

Figure 5.6: Complete Ring Captured

The new camera is a fish-eye camera. Convex lenses have been installed on this camera to make the view angle bigger. In the previous prototype the camera was attached to the laser source, but in the new prototype the camera and the laser source are disconnected and sit on the same baseline. Because the camera has a 180 degree view angle, all of the ring is captured without missing any part. The prototype is then passed into the specimen to collect the data. However, the scanning is done by human hand, so some misalignment must be expected. After that, the

same 3D reconstruction process is repeated. Taking a few frames from the recorded video, the laser light inside the pipe appears as shown in figure 5.6: the ring is complete and the laser light does not have any deformation. This indicates that the areas covered by this light are still without any defects. The reconstruction results for 200 frames are shown in figure 5.7.

Figure 5.7: Reconstruction results from 200 Frames

5.2 Second Generation Results

In the previous section we implemented our prototype and generated the results using Mac OS X. Later, we tested our prototype on a low-power, low-cost device, the Raspberry Pi, which processes each frame in 0.35 seconds; as the number of frames increases, the processing time increases, as shown in figure 5.8(a) and (b) below.

Figure 5.8: (a) Results on Pi (b) Number of frames vs. Processing Time

The reconstruction results using the Raspberry Pi are shown below in figure 5.9(a) and (b). As the number of images for processing increases, the performance of the Raspberry Pi decreases, and the memory available on the Pi is not sufficient.

Figure 5.9: (a) and (b) The Reconstruction Results on the Raspberry Pi

We plotted a graph to compare the performance of both machines, as shown in figure 5.10 below. We observed that for 50 frames OS X takes only 5 seconds while the Raspberry Pi takes 10 seconds, and for 100 frames OS X takes 8.5 seconds while the Raspberry Pi takes 20 seconds, as described in the graph below.

Figure 5.10: Processing time comparison between Raspberry Pi and Mac OS X

To avoid running out of memory on the Raspberry Pi, we split the scanning into equal-sized slices and stored them in individual files. Figure 5.11 below shows the slices for one scan. When we split the data, we avoided the memory issue with the Raspberry Pi. We calculated the scan distance of each scan sample, and we also calculated the distance of the defect from the starting point. Since we move the prototype by hand, we assumed the scanning speed to be constant and calibrated that our sensing module captures 10 frames for every 1 cm of distance. We then captured 210 frames, i.e., we moved 21 cm from the starting point, and split the data into equal-sized slices of 7 cm each, as in figure 5.11 below.

Figure 5.11: Equal Sized Slices, 28 cm long: (a) 0-7 cm (b) 7-14 cm (c) 14-21 cm

The other experiment was to detect a crack and its position relative to the starting point. We used our test specimen, shown in figure 5.12 below, for this experiment and performed several trials. We marked our cracks as shown in figure 5.12. We measured with a measuring scale that the first crack is at a distance of 18 cm, the second at a distance of 20 cm, the third at 23 cm, the fourth at 25 cm and the fifth at 26 cm from the starting position.

We plotted a graph, as shown in figure 5.13, between the actual positions of the cracks and the distances we calibrated from our experiments.

Figure 5.12: Object Under Test for Distance Calculation

Figure 5.13: Comparison between the actual distance and measured distance

From the graph we observed that the cracks at distances of 18 cm, 20 cm, 23 cm, 25 cm and 26 cm were accurately matching the original measurements, but a few points were not

overlapping; this was due to the movement of the prototype. We move our prototype manually, and due to misalignments in the setup there may be errors in the calibration.

6. Limitations

The Raspberry Pi processes each frame in 0.35 seconds for a resolution of 640x480, and as the resolution of the frame increases, the processing time increases. The processing times for images of different resolutions are shown in figure 6.1 below; the x-axis shows the resolution of the frame and the y-axis the processing time. It shows that a 2000x2000 frame took 0.95 seconds to process, a 1024x768 frame took 0.55 seconds, an 800x600 frame took 0.4 seconds, and a 640x480 frame took 0.35 seconds.

Figure 6.1: Computational time vs. Resolution (Raspberry Pi)

So it is good to keep the resolution as low as possible for this type of experiment. It might be better not to move up to a higher resolution camera, to deal less with other issues such as the need for more storage and processing power; but keeping the resolution lower may definitely miss some details.

There are a few misalignments in our prototype. The first is the camera and laser misalignment inside the pipe. The pipe is 3 inches in diameter, which is very limited in space, and we have to align the camera and laser such that the laser projects a

perfect circle on the internal surface of the pipe and the camera captures the full ring. The diameter of the laser source was 1.5 inches and the base of the sensing module was 1 inch x 1 inch. If the laser source is placed at the center of the pipe, as shown in figure 6.2 below, the camera should be placed at an angle such that it does not touch the walls of the pipe when it is moved. But placing the camera, with base dimensions of 1 inch x 1 inch, that way is not possible, and it will not capture the entire ring. So the laser source is shifted a little away from the center on the baseline, and the camera is placed on the baseline in such a way that it does not touch the walls of the pipe, as in figure 5.4 in chapter 5.

Figure 6.2: System Design

This displacement causes the laser projection to be misaligned on one side of the walls of the pipe, as shown in figure 6.2, resulting in wrong assumptions in the calibrations. Secondly, we move our prototype by hand, which may result in many errors and miscalculations. Since we calculate the distance, as in chapter 5, based on the number of images, if the movement of the prototype is not constant, or if it is stopped for a while, it may duplicate the captured images. We implemented the motion sensing principle to solve this issue, but since the internal surface features of the pipe are constant, these algorithms are not productive in solving this issue.

Figure 6.3: System Design

Thirdly, the Raspberry Pi has a lot of resources available on it, but the available resources were still not sufficient. Additional experiments were conducted on the Raspberry Pi to analyze this issue. We plotted graphs of CPU load and memory usage to characterize the performance of the Raspberry Pi as the number of frames increases. From figure 6.4 we can observe that as the number of frames captured increases, the load on the CPU increases.

Figure 6.4: CPU Load vs. Number of Frames

The Raspberry Pi is very limited in onboard memory,

with only 1 GB of RAM, so managing this memory is a huge concern in this work. As the number of frames increases, memory gradually decreases, as shown in figure 6.5, and at one particular point the Raspberry Pi runs out of memory and crashes.

Figure 6.5: Memory Usage vs. Number of Frames

To avoid this on the Raspberry Pi, we reduced the load on the device by only capturing the frames, processing them, and transferring them to the endpoint over wireless communication.

The other method we implemented was to reduce the number of frames captured by the sensing module. We implemented a motion estimation algorithm and capture only those frames which contain defects in the internal structure of the pipe. We used this method to estimate the distance of the crack from the starting point. We are still working on how this method can be implemented in our system to find the position of the defects inside the pipe.
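A condensed sketch of the frame-differencing check behind this motion estimation (the full routine appears in Appendix A.3; the blur size and threshold values here are illustrative):

    import cv2

    cam = cv2.VideoCapture(0)
    ret, f1 = cam.read()
    ret, f2 = cam.read()
    g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY)
    diff = cv2.blur(cv2.absdiff(g1, g2), (10, 10))        # blur enhances motion regions
    ret, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    moved = cv2.countNonZero(mask) > 0                    # any change between the two frames?
    cam.release()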

7. Conclusion and Future Work

In this thesis work we designed and developed a single-ring structured light prototype and examined whether a Raspberry Pi could be used for this in real time. We developed algorithms for 3D rendering and for finding the scan distance, tested our prototype with a 3-inch diameter PVC pipe, and showed that the single-ring structured light can be used to render the defects with 80 percent accuracy. Although the Raspberry Pi has a lot of resources on board, the memory was not sufficient for 3D rendering.

As a closing statement, we believe the price is one of the strongest arguments for this kind of implementation. The low cost of the Raspberry Pi means that it can be more affordable compared to what would otherwise be an expensive enterprise-grade solution. Our solution opens up possibilities for users in increasing their pipeline security. Looking into the future, we believe that Raspberry Pi devices have many possibilities to explore, and the last word on their role in the nondestructive testing industry has yet to be said.

This suggests that it is now feasible for advanced nondestructive evaluation applications to run on board using a multi-core embedded computer. One can prototype a new application quickly using open source libraries, but some manual optimization is required to gain peak performance. Since multi-core systems are quickly becoming the norm, implementing multi-threading should provide significant speedups for most vision processing tasks. The algorithms developed may have to be modified to reach maximum performance, and many image processing tasks are perfect for graphics processing units, as the computations for each pixel can be performed independently of all others. For further development, an automated version of this prototype mounted on a robot could be designed. The robot may be navigated and controlled along the pipe automatically while simultaneously capturing the images. These images would be processed on board and transmitted over a wireless network, where they can be viewed in 3D. Since

our aim is to develop the system for sensing pipes with less than a 3-inch diameter, this system may not be suitable under such conditions. Although 3D data from a single ring can be reconstructed with the proposed technique, motion may cause registration errors, and certain deformations cannot be explored; therefore, working on a multi-ring pattern structured light would be a future topic of interest.

REFERENCES

[1] Dr. Duane Priddy. PVC/CPVC Common Failures.
[2] Raphael Grasset (HIT Lab NZ), Brian Thorne and Richard Green.
[3] Agbakwuru Jasper. Oil/Gas Pipeline Leak Inspection and Repair in Underwater Poor Visibility Conditions: Challenges and Perspectives.
[4] Barry Nicholson (Baker Hughes). In-line inspections guide offshore field management decisions.
[5] Li Xiaochun, Yang Zhaofeng, Ding Qingxin, Tian Fei. A Crack Recognition Way thru Images for Natural Gas Pipelines.
[6] CEITEC. Evaluation of SSET: The Sewer Scanner and Evaluation Technology.
[7] W. Guo, L. Soibelman, J. H. Garrett Jr. Automated defect detection for sewer pipeline inspection and condition assessment.
[8] Mohammadreza Motamedi, Farrokh Faramarzi, Olga Duran. New concept for corrosion inspection of urban pipeline networks by digital image processing.
[9] Charles J. Hellier. Handbook of Nondestructive Evaluation.
[10] NDT Resource Center. https://www.nde-ed.org/EducationResources
[11] J. P. Monchalin. Optical and Laser NDT: Rising Star.
[12] S. Y. Chen, Y. F. Li, and Jianwei Zhang. Realtime Structured Light Vision with the Principle of Unique Color Codes.
[13] Jason Geng. Structured-light 3D surface imaging: a tutorial.
[14] S. Chris Colbert. The NumPy Array: A Structure for Efficient Numerical Computation.
[15] Morphology. http://opencv-python-tutroals.readthedocs.org/en/latest/index.html
[16] Thresholding. http://opencvpython.blogspot.com/2013/05/thresholding.html
[17] VTK. http://www.vtk.org/
[18] VTK Resources. http://www.vtk.org/VTK/resources/software.html

[19] Raspberry Pi Foundation. What is a Raspberry Pi? https://www.raspberrypi.org/help/what-is-a-raspberry-pi/
[20] Raspberry Pi 2 Model B. Raspberry Pi Foundation. https://www.raspberrypi.org/products/raspberry-pi-2-model-b/

Appendix A. Algorithms For Capturing and 3D Rendering

A.1 Algorithm for Distance Estimation

    import cv2
    from PIL import Image
    import numpy as np
    import time
    import os
    import cPickle
    import sys
    from matplotlib import pyplot as plt
    from datetime import datetime

    # Initialize globals
    global prev
    global count
    global total_count
    global dis_tot
    stack = []
    dis_tot = 0
    prev = 0
    total_count = 0

    # Skeleton of image
    def img_skel(img):
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        size = np.size(img)

        skel = np.zeros(img.shape, np.uint8)
        ret, img = cv2.threshold(img, 180, 255, 0)
        element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
        done = False
        while not done:
            eroded = cv2.erode(img, element)
            temp = cv2.dilate(eroded, element)
            temp = cv2.subtract(img, temp)
            skel = cv2.bitwise_or(skel, temp)
            img = eroded.copy()
            zeros = size - cv2.countNonZero(img)
            if zeros == size:
                done = True
        pix = np.asarray(skel.copy())
        arr_pix = Image.fromarray(pix)
        return arr_pix

    # Distance measurement: 10 frames per cm, plus a 7 cm offset per slice
    def dist_total(count):
        total_distance = count / 10
        total_distance = total_distance + 7
        return total_distance

    # VTK rendering
    def show_vtk(l, f_width, f_height, prev, dis_tot):
        k = 0
        d = len(l) * 5  # with our sample, each image is repeated 5 times to get a better view

        w = f_width
        h = f_height
        l = sorted(l)
        stack = np.zeros((h, d, w), dtype=np.uint8)
        for im in l:
            temp = np.array(im, dtype=int)
            for i in range(5):
                stack[:, k + i, :] = temp
            k += 5
        path = "/home/pi/Desktop/Result_files"
        filename = os.path.join(path, datetime.now().strftime("%H%M%S") + "-" +
                                str(prev) + "-" + str(dis_tot) + ".txt")
        cPickle.dump(stack, open(filename, "w"))

    # For Raspberry Pi 2 using Python
    print "sys.version:"
    print sys.version + "\n"
    try:
        cam = cv2.VideoCapture(0)
        cam.set(3, 640)   # property 3 = frame width
        cam.set(4, 480)   # property 4 = frame height
        count = 0
        while True:
            start = time.time()
            ret, frame = cam.read()
            cv2.imshow('frame', frame)
            w = frame.shape[1]
            h = frame.shape[0]

            temp = img_skel(frame)
            stack.append(temp)
            if count > 5:
                total_count = total_count + 1
                dis_tot = dist_total(count)
                temp = dis_tot
                dis_tot = dis_tot + prev
                count = 0
                show_vtk(stack, w, h, prev, dis_tot)
                print "slice " + str(total_count) + " from: " + str(prev) + "cm" + "-" + str(dis_tot) + "cm"
                prev = prev + temp
                stack = []
            count = count + 1
            print count
    except KeyboardInterrupt:
        end = time.time() - start
        print "Total Captured Time:", end
        print "Total number of slices", total_count
        print "Total Distance covered in centimeters", dis_tot
        cam.release()
        cv2.destroyAllWindows()
        print "\n"
        print "Exit by Keyboard Interrupt Generated\n"
    finally:
        del stack

A.2 Algorithm for 3D Rendering

    import vtk
    import cv2
    import os
    import cPickle
    import numpy as np
    import sys
    from PIL import Image

    def vtk_renderer(stack):
        dataImporter = vtk.vtkImageImport()
        data_string = stack.tostring()
        dataImporter.CopyImportVoidPointer(data_string, len(data_string))
        dataImporter.SetDataScalarTypeToUnsignedChar()
        dataImporter.SetNumberOfScalarComponents(1)
        w, d, h = stack.shape
        dataImporter.SetDataExtent(0, h - 1, 0, d - 1, 0, w - 1)
        dataImporter.SetWholeExtent(0, h - 1, 0, d - 1, 0, w - 1)
        alphaChannelFunc = vtk.vtkPiecewiseFunction()
        colorFunc = vtk.vtkColorTransferFunction()
        for i in range(256):   # 0..255 intensity range assumed (loop bound lost in extraction)
            alphaChannelFunc.AddPoint(i, 0.2)
            colorFunc.AddRGBPoint(i, i / 255.0, i / 255.0, i / 255.0)
        alphaChannelFunc.AddPoint(0, 0.0)
        colorFunc.AddRGBPoint(0, 0.0, 0.0, 1.0)
        volumeProperty = vtk.vtkVolumeProperty()
        volumeProperty.SetColor(colorFunc)
        volumeProperty.ShadeOn()

        volumeProperty.SetScalarOpacity(alphaChannelFunc)
        compositeFunction = vtk.vtkVolumeRayCastCompositeFunction()
        volumeMapper = vtk.vtkVolumeRayCastMapper()
        volumeMapper.SetVolumeRayCastFunction(compositeFunction)
        volumeMapper.SetInputConnection(dataImporter.GetOutputPort())
        volume = vtk.vtkVolume()
        volume.SetMapper(volumeMapper)
        volume.SetProperty(volumeProperty)
        renderer = vtk.vtkRenderer()
        renderWin = vtk.vtkRenderWindow()
        renderWin.AddRenderer(renderer)
        renderInteractor = vtk.vtkRenderWindowInteractor()
        renderInteractor.SetRenderWindow(renderWin)
        renderer.AddVolume(volume)
        renderer.SetBackground(0.1, 0.2, 0.3)   # background color (values partly lost in extraction)
        transform = vtk.vtkTransform()
        transform.Translate(0.0, 1.0, 0.0)      # translation (first value lost in extraction)
        renderWin.SetSize(800, 600)             # window size (width lost in extraction; 800 assumed)
        def exitCheck(obj, event):
            if obj.GetEventPending() != 0:
                obj.SetAbortRender(1)
        renderWin.AddObserver("AbortCheckEvent", exitCheck)
        renderInteractor.Initialize()
        renderWin.Render()
        renderInteractor.Start()

    def vtk_main():
        infile_name = sys.argv[1]
        data = cPickle.load(open(infile_name, "r"))

        return data

    # Main starts here
    if __name__ == "__main__":
        stack = vtk_main()
        vtk_renderer(stack)

A.3 Algorithm for Crack Detection

    import vtk
    import cv2
    import glob
    from PIL import Image
    import numpy as np
    import time
    import os
    import matplotlib.pyplot as plt
    import pylab

    org_list = []
    l = []
    dist_list = []

    '''
    Distance Calculation
    '''
    def dist_cal_edited(count):
        # assuming 10 frames for one centimeter
        distance = count / 10
        distance = distance + 7

        if distance not in dist_list:
            dist_list.append(distance)
        return dist_list

    def dist_total(count):
        total_distance = count / 10
        total_distance = total_distance + 7
        return total_distance

    def plot_graph(dist):
        plt.plot(dist, 'r-^', label='Distance Measured')
        plt.plot([18, 20, 23, 33, 36], 'g--o', label='Actual Distance')
        plt.ylabel('Distance')
        plt.xlabel('Cracks Detected')
        plt.ylim(0, 100)   # lower bound assumed (value lost in extraction)
        plt.xlim(0, 10)    # lower bound assumed (value lost in extraction)
        plt.legend(loc='best')
        plt.title("Comparison Between Actual Distance & Measured Values")
        plt.show()

    '''
    Skeleton of image
    '''
    def img_skel(img):
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        size = np.size(img)
        skel = np.zeros(img.shape, np.uint8)
        ret, img = cv2.threshold(img, 180, 255, 0)
        element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))

        done = False
        while not done:
            eroded = cv2.erode(img, element)
            temp = cv2.dilate(eroded, element)
            temp = cv2.subtract(img, temp)
            skel = cv2.bitwise_or(skel, temp)
            img = eroded.copy()
            zeros = size - cv2.countNonZero(img)
            if zeros == size:
                done = True
        pix = np.asarray(skel.copy())
        arr_pix = Image.fromarray(pix)
        '''
        cv2.imshow("skel", temp)
        cv2.waitKey(0)
        cv2.destroyAllWindows()
        '''
        return arr_pix

    '''
    Starts Here
    '''
    try:
        start_pos_x = 0
        end_pos_x = 0
        cx = 0
        cy = 0
        cam = cv2.VideoCapture(0)

        cam.set(3, 640)   # property 3 = frame width
        cam.set(4, 480)   # property 4 = frame height
        count = 0
        num = 0
        BLUR_SIZE = 10               # blur kernel size (value lost in extraction; assumed)
        THRESHOLD_SENSITIVITY = 25   # motion threshold (value lost in extraction; assumed)
        while True:
            print "Waiting for Motion Detection...."
            motion_found = False
            ret, img1 = cam.read()
            cv2.imshow('img', img1)
            grayimage1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
            ret, img2 = cam.read()
            count = count + 2
            img_width = img2.shape[1]
            img_height = img2.shape[0]
            cv2.imshow('img', img2)
            grayimage2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
            # Get differences between the two greyed images
            differenceimage = cv2.absdiff(grayimage1, grayimage2)
            # Blur difference image to enhance motion vectors
            differenceimage = cv2.blur(differenceimage, (BLUR_SIZE, BLUR_SIZE))
            # Get threshold of blurred difference image based on THRESHOLD_SENSITIVITY variable
            retval, thresholdimage = cv2.threshold(differenceimage, THRESHOLD_SENSITIVITY, 255, cv2.THRESH_BINARY)
            # Get all the contours found in the threshold image
            contours, hierarchy = cv2.findContours(thresholdimage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

            total_contours = len(contours)
            org_list.append(img1)
            org_list.append(img2)
            for c in contours:
                # get area of next contour
                found_area = cv2.contourArea(c)
                x, y, w, h = cv2.boundingRect(c)
                cx = x + w / 2
                cy = y + h / 2
                track_start_time = time.time()
                if found_area > 100:
                    num = num + 1
                    print "Crack " + str(num) + " at: " + str(dist_cal_edited(count))
    except KeyboardInterrupt:
        cam.release()
        cv2.destroyAllWindows()
        if not len(org_list) == 0:
            for frame in org_list:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                im_skel = img_skel(frame)
                l.append(im_skel)
            print dist_list
            print "Total distance covered in centimeters:", dist_total(count)
            plot_graph(dist_list)
            show_vtk(l, img_width, img_height)   # show_vtk as defined in A.1
        else:
            print "No Cracks Detected.... Motion Not Found"

    finally:
        del dist_list
        del l
        del org_list