Citation
Digital photography analysis

Material Information

Title:
Digital photography analysis : analytical framework for measuring the effects of saturation on photo response non-uniformity
Uncontrolled:
Analytical framework for measuring the effects of saturation on photo response non-uniformity
Creator:
Henry, Kris ( author )
Language:
English
Physical Description:
1 electronic file (48 pages)

Subjects

Subjects / Keywords:
Signal processing -- Digital techniques ( lcsh )
Signal processing -- Digital techniques ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Review:
Over the years, and through varied research, it has been found that there are many ways to analyze a digital photograph to determine the source camera of the original capture. There are many factors to consider when analyzing photography, such as the device used, the environment of the capture, the software used to process the image and any alterations or editing which may have been done. One very important technique of camera source identification is to analyze photo response non-uniformity (PRNU). It has been found that every camera, or more specifically every camera's sensor, reacts differently in various conditions; the photo response non-uniformity acts as a fingerprint for a camera. In this paper, we will explore the various techniques used to determine the source of a photo. We will also explore how the unique PRNU fingerprint responds to various situations, including environments of high saturation, artificial light and natural light. Chapter 4 will provide the framework for analyzing such images through multiple case studies using different devices. This study will provide a basis and an explanation of how multiple levels of saturation can affect PRNU through the camera's sensor during capture.
Thesis:
Thesis (M.S.)-University of Colorado Denver.
Bibliography:
Includes bibliographic references
System Details:
System requirements: Adobe Reader.
Statement of Responsibility:
by Kris Henry.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
953836235 ( OCLC )
ocn953836235
Classification:
LD1190.A70 2016m H46 ( lcc )

Full Text
DIGITAL PHOTOGRAPHY ANALYSIS: ANALYTICAL FRAMEWORK FOR
MEASURING THE EFFECTS OF SATURATION ON PHOTO RESPONSE
NON- UNIFORMITY
by
KRIS HENRY
B.S., University of Central Florida, 2003
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado in partial fulfillment
of the requirements for the degree of
Master of Science
Recording Arts
2016


2016
KRIS HENRY
ALL RIGHTS RESERVED


This thesis for the Master of Science degree by
Kris Henry
has been approved for the
Recording Arts Program
by
Catalin Grigoras
Jeff M. Smith
Scott Burgess
April 29, 2016


Henry, Kris (M.S., Recording Arts)
Digital Photography Analysis: Analytical Framework for Measuring the Effects of Saturation
on Photo Response Non-Uniformity
Thesis directed by Associate Professor Catalin Grigoras
ABSTRACT
Over the years, and through varied research, it has been found that there are many ways to analyze a digital photograph to determine the source camera of the original capture. There are many factors to consider when analyzing photography, such as the device used, the environment of the capture, the software used to process the image and any alterations or editing which may have been done. One very important technique of camera source identification is to analyze photo response non-uniformity (PRNU). It has been found that every camera, or more specifically every camera's sensor, reacts differently in various conditions; the photo response non-uniformity acts as a fingerprint for a camera. In this paper, we will explore the various techniques used to determine the source of a photo. We will also explore how the unique PRNU fingerprint responds to various situations, including environments of high saturation, artificial light and natural light. Chapter 4 will provide the framework for analyzing such images through multiple case studies using different devices. This study will provide a basis and an explanation of how multiple levels of saturation can affect PRNU through the camera's sensor during capture.
The form and content of this abstract are approved. I recommend its publication.
Approved: Catalin Grigoras


TABLE OF CONTENTS
CHAPTER
I. INTRODUCTION......................................................1
History of Camera Sensors..........................................1
Digital Photography................................................3
Authentication.....................................................5
II. PHOTO RESPONSE NON-UNIFORMITY.........................................8
Identification.....................................................9
Measurements......................................................10
III. ARTIFACTS...........................................................15
Defective Pixels..................................................16
Compression.......................................................17
Alterations.......................................................19
Saturation........................................................20
IV. FRAMEWORK FOR MEASURING SATURATION EFFECTS...........................21
Camera Studies....................................................22
Final Results.....................................................29
V. CONCLUSION...........................................................35
REFERENCES...............................................................37


LIST OF FIGURES
Figure
1 Camera Module of Mobile Phones...................................................2
2 Digital Image Acquisition Pipeline...............................................4
3 Example of Random Noise..........................................................8
4 Example of Fixed Pattern Noise...................................................8
5 Example of PRNU Pattern from a Kodak V550 Camera (magnified).....................9
6 Sensor Output with Light Input as a Function of Different Exposure Times........12
7 Denoising Filters Used to Obtain Residual Image of Subject or Test Image........14
8 Diagram of Large Component Extraction Algorithm.................................14
9 Nikon S710 Diagonal Artifact....................................................16
10 Casio P-Maps with Artifacts at Various Focal Lengths in Lens Distortion........16
11 MATLAB Code for Frame Averaging for Residual Pattern...........................22
12 Kodak Low-Color Image..........................................................23
13 Kodak High-Color Image.........................................................23
14 Kodak PRNU Reference Image.....................................................24
15 Pixel Saturation at 10%........................................................25
16 Pixel Saturation at 34%........................................................25
17 Pixel Saturation at 50%........................................................25
18 Pixel Saturation at 70%........................................................25
19 Nokia Lumia 635 Plot Low-Color vs. High-Color Saturation.......................26
20 LG G3 Plot Low-Color vs. High-Color Saturation.................................27


21 Kodak EasyShare Plot Low-Color vs. High-Color Saturation..................28
22 Samsung Galaxy S3 Plot Low-Color vs. High-Color Saturation................31
23 LG G4 Plot Low-Color vs. High-Color Saturation............................32
24 Slate 8 Tablet Plot Low-Color vs. High-Color Saturation....................32
25 Alcatel Tablet Plot Low-Color vs. High-Color Saturation....................33
26 GoPro Hero3 Plot Low-Color vs. High-Color Saturation......................33
27 Canon Powershot G2 Plot Low-Color vs. High-Color Saturation...............34
28 Nexus6 Plot Low-Color vs. High-Color Saturation...........................34


LIST OF TABLES
Table
1 Devices and specifications.........................................................22
2 Nokia Lumia 635 Low-Color vs. High-Color Saturation................................26
3 LG G3 Vigor Low-Color vs. High-Color Saturation....................................27
4 Kodak Easy Share VI003 Low-Color vs. High-Color Saturation.........................28
5 Low-Color Saturation Correlation Coefficients per Camera Model.....................30
6 High-Color Saturation Correlation Coefficients per Camera Model....................30


CHAPTER I
INTRODUCTION
This thesis studies the background of camera sensors, how they react to the environment, specifically light sources, and above all, how a camera's fingerprint reacts to saturation during capture. There are many ways to authenticate an image; however, few studies have been conducted on the effects of saturation on photo response non-uniformity (PRNU), or a camera's sensor fingerprint. Every camera is different; this holds even for identical makes and models. It is important to understand how PRNU relates to image authentication and how it can be a useful tool in determining an image's source. Unfortunately, as with any forensic science discovery, there are also anti-forensics to consider. As our community moves forward in digital imaging research, others may use that research for ill practice. Determining the effects on authentication measurement techniques will allow an understanding of how one might employ anti-forensics in this area. This paper will focus mainly on the effects of saturation on PRNU through original camera studies, using both digital still and mobile phone cameras. It will also shed light on whether, and how, one can intentionally produce false readings for personal gain.
History of Camera Sensors
An image has been defined as a variation of light intensity, reflected as a function of position on a plane; to capture an image, however, means to convert that information into signals which are then stored on a device. In electronic photography, there are two primary methods of storage: analog and digital. In analog cameras, image signals from the camera's sensor are converted and stored as video signals, whereas in digital cameras they are converted and stored as digital signals [1]. Before information can be converted into a signal


and stored on a device, the camera itself must have the proper hardware to do so. The focus of this study will be digital cameras, mainly the cameras in mobile devices. There are many components to even the smallest of digital cameras, such as lens barrels, multiple lenses, a lens mount and the information chip, as seen in Figure 1 [2]. Located on the information chip is the camera's sensor, the heart of the camera.

Figure 1 Camera Module of Mobile Phones
To understand a camera's sensor, one must first comprehend how information reaches it. A camera lens is responsible for focusing light onto the sensor; the image sensor converts this light into electrical signals. As an image is captured, the light passes through the lens and falls onto the sensor, which consists of small photo-detectors, also referred to as pixels. There are two different types of image sensors: the Charge-Coupled Device (CCD) and the Complementary Metal-Oxide Semiconductor (CMOS). Regardless of type, sensors cannot distinguish between various light wavelengths; in other words, they cannot identify individual colors. It is the responsibility of the filter in front of the sensor to


assign color to the corresponding pixels of a capture [2]. There are a few differences between the CCD and the CMOS sensor. In October 1969, George Smith and Willard Boyle invented the charge-coupled device (CCD). It was widely used in digital photography because it produced very high-quality images. However, in the 1970s the CMOS sensor was invented, offering higher transfer speeds than the analog chip known as the CCD. It is here that digital photography began to present more options [3].
Digital Photography
As previously mentioned, the heart of a digital camera is its sensor. There is a very important difference between the two sensor types, CCD and CMOS: how charges are passed through the pixels. In the older CCD model, each pixel's charge is transferred through output nodes to be converted into voltage; from there, the charge is buffered and sent off-chip as an analog signal. In other words, the work of converting light into a charge is outsourced. The output's uniformity is high, which results in very high-quality photographs, and all of the pixels can be devoted to light capture, which aids in this higher quality. This contributed to the CCD model's popularity, which lasted until the CMOS model entered widespread production in the 1990s. The CMOS became popular because it required lower power consumption and lower fabrication costs. In a CMOS sensor, each pixel has its own charge-to-voltage conversion and includes amplifiers, noise-correction and digitization circuits, so the output of the chip consists of digital bits. Because of this, the uniformity, or quality, is lower; however, the readout is parallel and allows for higher transfer speeds [3].


There is a common thread between the CCD and CMOS sensors: the way color is read during capture. As mentioned before, the sensor cannot see color on its own; it relies on a filter in front of the sensor to assign color to each pixel. This filter is referred to as the Bayer pattern color filter array (CFA). The Bayer pattern places two green pixels, one red pixel and one blue pixel in each two-by-two square of four pixels. By this process, each pixel registers one of the three primary colors, and any missing color values can be gathered from its neighboring pixels [4]. The process starts as a scene is captured: light passes through the lens of the camera, through the filters and then to the CFA. Once color is assigned to the pixels of the sensor, the RAW image is stored before the processing and compression stages occur within the camera [5]. This digital photography flow can be seen in Figure 2.
Scene → Lens → Filters → CFA → Sensor → Post-Processing → Storage
Figure 2 Digital Image Acquisition Pipeline
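The Bayer sampling and neighbor-based color recovery described above can be sketched in code. This is a Python/NumPy illustration rather than anything from the thesis itself; the RGGB layout and the simple four-neighbor average for missing green values are assumptions chosen for demonstration.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image through an assumed RGGB Bayer pattern: each
    two-by-two block keeps one red, two green and one blue value."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even row, even col
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even row, odd col
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd row, even col
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd row, odd col
    return mosaic

def fill_green(mosaic):
    """Naive demosaicking for the green channel only: at red/blue sites,
    average the four horizontally/vertically adjacent green samples."""
    padded = np.pad(mosaic, 1, mode="edge")
    green = mosaic.copy()
    h, w = mosaic.shape
    for y in range(h):
        for x in range(w):
            if (y + x) % 2 == 0:  # R or B site in RGGB: green is missing
                green[y, x] = (padded[y, x + 1] + padded[y + 2, x + 1] +
                               padded[y + 1, x] + padded[y + 1, x + 2]) / 4
    return green
```

A real in-camera pipeline interpolates all three channels with more sophisticated filters; this sketch only shows the principle of recovering a missing value from neighboring pixels.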
Another common thread between the CCD and CMOS sensors is what is called sensor noise. Both produce sensor noise; however, how each sensor responds to and corrects this noise varies between the two sensor types. For example, in very saturated, or highly lit, environments, overexposure of individual pixels can occur; this is referred to as blooming. The CMOS sensor presents better correction in these situations, as it handles signal conversion for each pixel individually, whereas the CCD sensor handles signal conversion as a whole and is dependent on conversion off-sensor [6]. In a CCD sensor, there


are four different types of noise sources: Read Noise, Dark Current, Photon Noise and Pixel Response Non-Uniformity. Read Noise (RN) is caused by thermally induced motions of electrons in the output amplifier. This noise can limit the performance of a CCD sensor, but can be reduced with lengthier readout times; simply stated, the faster the readout, the more noise is present. Dark Current is noise caused by thermally generated electrons in the CCD sensor, and can be eliminated by cooling the CCD. Photon Noise arises when the sensor detects photons in an unpredictable fashion. Photons are distributed to the sensor pixel by pixel, so Photon Noise occurs when the pixels acquire unequal amounts of photons across the entire sensor; it is an uneven distribution of photons, also referred to as shot noise. Pixel Response Non-Uniformity, simply put, is a defect in the silicon of the sensor from the manufacturer. This is why every camera sensor possesses a different fixed pattern noise. To correct this defect and remove the noise, flat fielding, or frame averaging, can be used [7]. Much like the CCD sensor, the CMOS sensor also has noise obstacles. Notable issues with the CMOS sensor are high-level dark current shot noise and reset noise. Shot noise has already been addressed with the CCD sensor; the CMOS sensor, however, also possesses reset noise, which is produced by thermal noise causing fluctuations in the reset voltage level for any given pixel [8]. The noises produced by sensors create more characteristics to be studied in each individual camera; these noises will be explained in more detail later.
Authentication
There are many ways to determine an image's source camera through image authentication. Such information can be found in an image's file structure, in an image's source properties and through the image's sensor pattern. A lot of research has been conducted


around the analysis of these techniques, including using Exiftool to review file format/EXIF data and camera properties, using HxD, JPEGSnoop, X-Ways or MATLAB to search metadata keywords, and even researching how a camera distinctively compresses its images. These techniques and tools, although beneficial in providing file data information, are not to be considered authentication tools, but rather tools which aid in obtaining further information about an image's source. Some of these sources, however, such as hex data and metadata keywords, can be manually altered. A camera's sensor is the true fingerprint and is difficult to alter. A camera's sensor is imperfect from the start, straight from the manufacturer; no two sensors, nor their patterns, are identical. Each camera produces its own unique noise. This is commonly coined camera ballistics: much like scientists can determine which firearm a bullet was fired from, scientists can also determine which camera was used for a photo capture. There are many different types of camera noise to consider: temporal noise, photon noise, dark current noise, readout noise, quantization noise, spatial noise, fixed pattern noise and photo response non-uniformity (gain noise). Temporal noise is a combination of all noises which can change a pixel's value. In CCDs, the charge is shifted so many times during readout that temporal noise is considered the most dominant noise source for those sensors. Photon noise, coined shot noise as mentioned earlier, comes from an uneven distribution of photons into the pixels. Dark current noise is created by electrons that evolve through thermal processes of the pixel; this noise can be reduced by cooling the sensor. Readout noise occurs when the charge, or electrons, is converted into voltage; it is directly related to readout speed. As previously mentioned, the faster the readout speed, the more noise is apparent. Quantization noise occurs during A/D conversion, when voltage is converted into a digital value. Spatial noise occurs when pixels exposed to a homogeneous light react


differently due to varying sensitivity levels. It is most dominant in CMOS sensors, as the pixels are read out through different circuits. Fixed pattern noise, also called dark signal non-uniformity (DSNU), is the difference between the lowest and highest measured values for all active pixels in the array. Photo response non-uniformity (PRNU) has often been referred to as the difference between what a camera sensor's ideal response to light should be and what the true response of the pixels is [9].


CHAPTER II
PHOTO RESPONSE NON-UNIFORMITY
As previously stated, the PRNU is considered the fingerprint of a camera's sensor. Two main categories of noise make up a camera's sensor imperfections and signature: temporal noise, or random noise, and fixed pattern noise (FPN). Random noise is when the location and time of occurrence of pixel differences are unpredictable. Fixed noise, on the other hand, occurs at locations determined by an underlying structure and will appear constant in every photo taken by that particular device. Fixed pattern noise consists of dark signal non-uniformity (DSNU) and photo response non-uniformity (PRNU). DSNU is an offset between pixels in the dark; it is measured in the absence of light and can be corrected by subtracting a dark frame. PRNU is just the opposite: it is a variation between pixels under a certain amount of illumination, and is corrected by offset and gain for each pixel. PRNU is a signal-dependent noise created by variation amongst pixels in their sensitivity to light. Figures 3-5 show examples of random noise, fixed pattern noise and a typical PRNU pattern [10] [11].
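The two corrections mentioned above, dark-frame subtraction for DSNU and per-pixel gain correction for PRNU, can be sketched together. This is a hedged Python/NumPy illustration, not the thesis's method; the mean-normalization of the gain map is an assumption chosen so a uniform scene stays at its original level.

```python
import numpy as np

def correct_fixed_pattern_noise(raw, dark_frame, flat_frame):
    """Classic two-step fixed-pattern-noise correction:
    DSNU is removed by subtracting a dark frame (per-pixel offset);
    PRNU is removed by dividing by a normalized flat field (per-pixel gain)."""
    dark_corrected = raw - dark_frame
    gain = flat_frame - dark_frame      # flat field with its offset removed
    gain = gain / gain.mean()           # normalize so the mean gain is 1
    return dark_corrected / gain
```

Given a dark frame and a flat-field frame captured under uniform illumination, applying this to a raw capture recovers the scene up to residual temporal noise.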


Figure 5 Example of PRNU Pattern from a Kodak V550 Camera (magnified)
Identification
Photo response non-uniformity can be divided into two separate components: individual detector uniformity and array-wide uniformity. The variation in responsivity between adjacent pixels is considered high-frequency PRNU, in which the FPN pattern appears more evident in the brighter areas of an image than in the darker areas. Identifying tolerances on defective or hot pixels is usually done using a high-frequency PRNU pattern. Low-frequency PRNU is a good measure to use when evaluating the variations in responsiveness from one side of the photo array to the other; it is also referred to as photo-response shading. Generally speaking, when manufacturers refer to PRNU, they are referring to the low-frequency rather than the high-frequency measure. The low-frequency measure displays the difference in response levels between the most and the least sensitive pixels across the sensor array under uniform illumination conditions. The degree of non-uniformity in PRNU is related to a few factors: the amplitude of the non-uniform pixels, the pixels' polarity, the pixels' location, the pixels' total count, the distance between non-uniform pixels and the column amplitude. When measuring PRNU, it is important to note what causes a weak or strong signal pattern for better comparison results. For example, red and infrared (IR) light produce a stronger PRNU pattern than the blue and green wavelengths captured. This is due to the deeper penetration lengths at the end of the spectrum (the redder end). At the redder end


of the spectrum, photons encounter more defect sites and material variations. It is important to understand these obstacles prior to selecting a model to use for PRNU fingerprint measurement. It is also crucial to remember the purpose of such measurement and the suspect images involved [12].
There are many forensic uses for measuring a sensor's PRNU pattern, such as camera identification, device linking, recovery of processing history and detection of digital forgeries. These tasks aid criminal investigations and can link a suspect to a particular camera and/or contraband image. PRNU is typically reliable for making these determinations: although PRNU is stochastic, or random in nature, it is very stable over its life span. PRNU has multiple credible properties. It has dimensionality: it provides large information content. It has universality: every sensor exhibits its own PRNU pattern. It has generality: the fingerprint is present in every capture regardless of camera settings or content environment. It has stability: the pattern can withstand time under various environmental conditions such as temperature and humidity. Lastly, PRNU has robustness: it can survive lossy compression, filtering, gamma correction and other forms of processing within the camera or through external software [13].
Measurements
There are a few different methods of measuring a camera's PRNU sensor pattern. The most widely accepted model consists of running a reference pattern from a particular camera against the image in question. To start, a reference pattern must be obtained by capturing 30-50 flat-field images with the device. A flat field is a solid color, preferably light, displaying illumination without heavy saturation. Once these frames are captured, they must be averaged for the best possible estimate of the sensor pattern. Using MATLAB, code can be run


to obtain the correlation coefficient between the reference pattern and the subject image to determine their linear relationship. A correlation coefficient (CC) is a measure of the strength of the straight-line, or linear, relationship between two variables, in this case the reference image and the subject image. The CC will have a value between +1 and -1. To interpret the correlation, it must be understood what the values indicate. A value of 0 indicates no linear relationship between reference image and subject image. A value of +1 indicates a perfect positive linear relationship: both variables increase in value through an exact linear rule. A value of -1 indicates a perfect negative relationship: one variable increases in value while the other decreases. Values between 0 and 0.3 (0 and -0.3) indicate a weak positive (or negative) relationship through a shaky linear rule. Values between 0.3 and 0.7 (-0.3 and -0.7) indicate a moderate positive (or negative) relationship through a fuzzy-firm linear rule. Values between 0.7 and 1.0 (-0.7 and -1.0) indicate a strong positive (or negative) linear relationship through a firm linear rule. The value of r squared is the percent of variation shared between the two variables being examined. All of these values can support the linear determination if the relationship is already known; if the relationship is unknown or nonlinear, the CC will be useless and questionable [14].
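A minimal sketch of this model, frame-averaged reference estimation followed by a Pearson correlation coefficient, might look as follows. Python/NumPy is used here purely for illustration in place of the thesis's MATLAB code, and the denoising filter is left as a user-supplied function since no particular filter is fixed at this point in the text.

```python
import numpy as np

def prnu_reference(flat_fields, denoise):
    """Average the noise residuals of many flat-field captures (the text
    suggests 30-50) to estimate the sensor's PRNU reference pattern."""
    residuals = [img - denoise(img) for img in flat_fields]
    return np.mean(residuals, axis=0)

def correlation_coefficient(reference, residual):
    """Pearson correlation coefficient, in [-1, +1], between the reference
    pattern and a subject image's noise residual."""
    a = reference - reference.mean()
    b = residual - residual.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

A CC near +1 then suggests the subject image's residual shares the camera's fingerprint, subject to the interpretation bands described above.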
After covering the basic model and correlation coefficient values, other models will
now be discussed. The second model for PRNU measurement consists of measuring the CC
just the same as mentioned in the first model; however, in this model, exposure time of the
reference images is taken into account. In other words, instead of taking flat field images one
after another to average together for one solid PRNU reference pattern, the flat field images
will be taken at various exposure or integration times. This allows the sensor to cool between
captures and reduces thermal noise and dark current in each of the 20-50 reference images. So


as a result, each reference photo is a strong pattern reference prior to the frame-averaging process. Figure 6 shows the sensor output with light input as a function of the exposure time; note the sensor starts to saturate at 400 ms [15].
Figure 6 Sensor output with light input as a function of different exposure times
The next model is referred to as color-decoupled PRNU (CD-PRNU). It is believed that the color interpolation process of photo capture creates noise which affects the readout of the PRNU pattern. When a scene passes through the lens and into the color filter array (CFA), the camera assigns one color per pixel; this is part of the color interpolation process within the camera. Artificial colors obtained through this process are part of neither the scene itself nor the camera hardware. Color-decoupled PRNU is a method which proposes to decompose each color channel into 4 sub-images, then extract the PRNU noise from each sub-image. This eliminates the additional, artificial noise created by on-board processing. Once the sub-images are obtained, they are compared against the subject image from the same camera. It was noted during the experiment of this method that CD-PRNU correlation coefficient figures are slightly higher than those of the traditional PRNU method. This infers that the


CD-PRNU provides more positive results in strong linear relationships between reference images and subject images [16].
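The four-sub-image decomposition that CD-PRNU proposes can be sketched as follows. This is a Python/NumPy illustration; splitting by pixel parity, so that each sub-image contains only pixels from one CFA sampling phase, is an assumption based on the two-by-two Bayer layout described earlier.

```python
import numpy as np

def decompose_subimages(channel):
    """Split one color channel into 4 sub-images by row/column parity,
    so each sub-image holds pixels of a single CFA sampling phase."""
    return [channel[0::2, 0::2], channel[0::2, 1::2],
            channel[1::2, 0::2], channel[1::2, 1::2]]
```

PRNU residuals would then be extracted per sub-image, avoiding the interpolated (artificial) values that sit between physically captured samples.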
The last method to measure PRNU utilizes the traditional frame-averaging method, but further adopts a model that compares only the larger components of the two signals. It is suggested this method is more accurate for determining the linear relationship between a reference image and a subject image because it increases the correct detection probability and decreases the computational complexity relative to using the whole PRNU pattern. Through the algorithm of this method, it was found that highly saturated images, or even saturated areas of an image, carry no PRNU information, whereas dark locations carry only a weak PRNU signal. The idea is to obtain a PRNU pattern from illumination with little dark current and no high saturation. In the first, most often used, model of PRNU extraction previously mentioned, PRNU is extracted by subtracting the denoised version from the original image, and the correlation coefficient is obtained by directly comparing the reference image to the subject image. In this algorithm, it is proposed that the PRNU pattern be extracted from both the reference images and the subject image, that both undergo removal of color interpolation, and that the reference residual images then be averaged and compared to the subject residual image. In other words, the reference images have their PRNU patterns extracted individually prior to frame averaging and are then compared to the pattern of the subject image itself. Figure 7 displays the results of PRNU extraction from a subject image, and Figure 8 shows the model algorithm and how to utilize the PRNU pattern once obtained. This method of PRNU extraction and comparison is preferred by some because traditional PRNU extraction poses many issues, such as the addition of shaping noise, background noise left behind because the extraction is not perfectly content-adaptive, small-magnitude high-frequency noise and, as mentioned in the previous CD-PRNU model, the fact that cameras can only capture one color per pixel [17]. These are all valid factors which affect the readout of the PRNU pattern, and they are only a few.
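Keeping only the larger components of a signal before comparison might be sketched as below. This is a Python/NumPy illustration; the `keep` fraction is a hypothetical parameter, since the source does not specify here how "larger components" are thresholded.

```python
import numpy as np

def large_components(signal, keep=0.1):
    """Zero out all but the largest-magnitude `keep` fraction of samples,
    so a later correlation uses only the strongest PRNU components."""
    flat = signal.ravel().copy()
    k = max(1, int(keep * flat.size))
    threshold = np.sort(np.abs(flat))[-k]   # magnitude of the k-th largest
    flat[np.abs(flat) < threshold] = 0.0
    return flat.reshape(signal.shape)
```

The correlation coefficient would then be computed between the masked reference pattern and the masked subject residual, reducing both computation and the influence of weak, noisy components.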
Figure 7 Denoising filters used to obtain residual image of subject or test image
[Figure 8 diagram: reference images are denoised, their residual images have color interpolation removed and are averaged into a reference pattern noise; the test image is denoised to produce a residual image whose color interpolation is removed, yielding a test pattern noise; the large components of the two signals are then compared for a yes/no decision.]
Figure 8 Diagram of large component extraction algorithm


CHAPTER III
ARTIFACTS
Regardless of the model algorithm used to extract PRNU and compare it to a subject image, there are always artifacts to consider. These artifacts can produce false readouts and incorrect correlation coefficients. Objects found in the PRNU pattern could be due to multiple factors, such as the color interpolation process, excess background noise, artifacts due to heating of the sensor and quick exposure times, additional shaping noise, defective pixels, alterations or edits, and heavy saturation. Other artifacts are specific to select cameras. Special care must be taken to acknowledge these artifacts to prevent false readings and misinterpretation. Gloe, Pfennig and Kirchner reported a case study from the Dresden Image Database which revealed similar artifacts in certain camera models. The cameras explored were the Nikon CoolPix S710, the FujiFilm J50 and the Casio EX-Z150. It was found that the Nikon CoolPix S710 presented a diagonal line artifact in all its captures, the FujiFilm J50 images exhibited a slight horizontal shift, and the Casio EX-Z150 displayed irregular geometric distortions. On one hand, these artifacts can be beneficial in further device identification; on the other hand, they can prove detrimental to a case if the examiner is uncertain or unaware of these model-specific pattern issues. One of the major challenges of camera identification by use of PRNU is the suppression of non-unique artifacts: artifacts which are specific to a camera model or make, yet may be very similar to those in the PRNU pattern of a different device altogether. Figures 9-10 show examples of known-model artifacts in PRNU [18].


Figure 9 Nikon S710 diagonal artifact (cross-correlation vs. horizontal shift, with a 256 x 256 contrast-enhanced crop of the reference pattern)

Figure 10 Casio p-maps with artifacts at various focal lengths in lens distortion
Defective Pixels
As already established, a camera's sensor will always have silicon defects from the
manufacturer. This is inevitable. These defects, among others, display across the array of the
PRNU fingerprint in different ways. One such imperfection is the defective or dead pixel. A
dead pixel is any pixel with intensity below a specified percentage of the mean. A defective
pixel is any pixel that deviates from the mean light-field intensity by more than a specified
percentage of the mean. The defective and dead pixels mentioned above are those in light-field
space; if a defective pixel exists in dark-field space, it is considered a hot pixel. Typically,
CMOS sensors have an on-chip algorithm to correct defective or dead pixels, or this can be
done during the camera's processing stage. If multiple dead or defective pixels exist,
the camera's onboard processing may not be able to correct them all. In these cases, provided
there are limited defects, this can prove fruitful in PRNU pattern analysis [19]. Although
uncorrected defective pixels aid camera identification best, corrected defective pixels can still
give insight into their original state even after post-processing. This is done by analyzing pixel
values individually. A corrected pixel, when captured with uniform illumination, will appear
either much lower or much higher in value than its surrounding neighbors. This is a good
indication that it was once a defective pixel which was corrected during post-processing, and
it can still aid camera identification [20]. To test or check the value of a pixel, MathWorks
provides a function called impixel in the MATLAB program. This function returns the pixel
values of RGB, grayscale and binary images, which aids in comparing a pixel's value to its
neighbors and thus in flagging defective pixels within a PRNU fingerprint [21]. Defective
pixels are only one artifact which affects the PRNU fingerprint.
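The neighbor-comparison idea behind such pixel inspection can be illustrated with a short sketch. The threshold value and function name below are assumptions chosen for illustration; they are not part of MATLAB's impixel or of any camera's on-chip defect logic.

```python
import numpy as np

def flag_defective(flat_field, threshold=0.5):
    # Flag pixels whose value under uniform illumination deviates from
    # the mean flat-field intensity by more than `threshold` (expressed
    # as a fraction of that mean). Illustrative only.
    img = flat_field.astype(np.float64)
    mean = img.mean()
    deviation = np.abs(img - mean) / mean
    return deviation > threshold   # boolean defect map
```

A nearly dead pixel in an otherwise uniform frame stands out immediately in the returned map, which is exactly the kind of anomaly that survives, in attenuated form, even after on-chip correction.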
Compression
There are many artifacts which can affect the PRNU readout. Another is
compression, of which there are two main types: lossy and lossless. Lossy compression refers
to data compression, or shrinkage in size, in which some information is lost, though the
discarded information is largely unnecessary and the data remains mostly intact. Lossless
compression is shrinkage without any loss of data or information; all the data remains intact.
An example of lossless compression is a raw image file compressed to a portable network
graphics (PNG) file. An example of lossy compression is a raw image file compressed to a
joint photographic experts group (JPEG) file: the image is still intact, but some information is
lost and the file may appear a bit grainy or pixelated. In the lossy scheme, a JPEG encoder
converts the colors in an image into a suitable color space and processes the color components
independently from one another.
Compression is performed in three basic steps. The first step is the discrete
cosine transform (DCT), in which an image is divided into 8x8 or 16x16 non-overlapping
blocks. Each block is shifted from unsigned to signed integers, the DCT transformation occurs
and the signal is converted into elementary frequency components; most of the visually
significant information in the image is concentrated in a few DCT coefficients [22]. The
second step is quantization, defined as the division of each DCT coefficient by the
corresponding quantizer step size, followed by rounding to the nearest integer. It is in this step
that the most information is lost. The third and final step is entropy coding: the quantized DCT
coefficients are losslessly coded and written into a bitstream. It is here that Huffman tables
are formed. These tables come from the Huffman algorithm and are viewed as a variable-length
code table which presents a source symbol for the lossy compression [23]. Lossy compression
achieves the higher compression ratio, of about 10:1, with some information loss. Lossless
compression is used to compress raw images into smaller files without information or data
loss (i.e. PNG), at a ratio of about 2:1 [24]. Having analyzed how an image is compressed, we
must explore how this affects the PRNU pattern of an image.
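The three steps above can be sketched for a single 8x8 block. This is a schematic illustration only: the flat quantization table is a placeholder rather than a standard JPEG table, and entropy coding is omitted since it is lossless. The point it shows is that quantization is where the information loss happens.

```python
import numpy as np

def dct_matrix(N=8):
    # Orthonormal DCT-II basis matrix (rows are basis vectors)
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C *= np.sqrt(2.0 / N)
    C[0, :] = np.sqrt(1.0 / N)
    return C

def jpeg_block_roundtrip(block, q_table):
    # Step 1: level shift (unsigned -> signed) and 2-D DCT of the block
    C = dct_matrix(8)
    shifted = block.astype(np.float64) - 128.0
    coeffs = C @ shifted @ C.T
    # Step 2: quantization, the lossy step (divide and round)
    quantized = np.round(coeffs / q_table)
    # Step 3 (entropy coding) is lossless and omitted here.
    # Decoder side: dequantize, inverse DCT, undo the level shift
    restored = C.T @ (quantized * q_table) @ C + 128.0
    return quantized, restored
```

A flat block compresses to a single DC coefficient, while a detailed block spreads energy across many coefficients; a coarser quantization table zeroes more of them, which is the source of the blocky JPEG look mentioned below.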
When an image is compressed, specifically through lossy compression, it creates the
block artifact commonly seen in JPEG images. Although the PRNU fingerprint is robust and
can survive certain levels of compression, heavy lossy compression compromises the PRNU
pattern and impairs camera identification. There is also the issue of recompression: the image
processing operation of decompressing an image, possibly changing the uncompressed image,
and then compressing it again. If the same quantization tables as the initial compression are
used, further degradation will not occur; however, if different quantization tables are used,
additional degradation will occur and the PRNU fingerprint may be lost. Photo-editing
programs will often introduce quantization tables in the resaved JPEG that differ from those
in the camera's JPEG, which alters the PRNU pattern [25]. To summarize, lossless
compression does not alter the PRNU fingerprint enough to cause much error in camera
identification; however, lossy compression and alterations through editing software will
degrade the pattern and may prevent a positive identification.
Alterations
The effects of compression on the PRNU fingerprint have already been discussed.
Studies have also been conducted on the effects of editing on PRNU, specifically pertaining
to image forgeries and the ability to still identify a camera source through such alterations.
Video and analog cameras often have distinctive scratches on the film and negatives, which
makes it easy to identify the origin of an image even with alterations and overexposure; in the
age of digital photography and software editing, however, it has become more challenging.
There are many different filters used in software editing programs. These filters attack pixel
values by assigning them new values based on neighboring pixels. When a filter is used on a
color image, the three values (red, green, blue) are determined separately. In the study by
Bouman, Khanna and Delp, five filters were tested on an original image, then compared to an
average to test for the PRNU pattern and the effects of each filter on the image. The filters
used were blurring, weighted blurring, histogram equalization, sharpening and pseudo-random
noise. The closer a correlation value is to 1, the more similar the image is to the camera's
noise pattern and the more confident the camera identification. From this study, it was
found that all the source cameras could be matched to the subject images regardless of filter
or editing process; however, some filters degrade the pattern more than others. Of the five
filters tested, from most damaging to least damaging to the PRNU, they were histogram
equalization, blurring, pseudo-random noise, sharpening and weighted blurring. This was just
one notable test of filter effects on PRNU within the scientific community [26].
Saturation
Much as defective pixels, compression and editing affect a camera's PRNU readout,
so does image saturation. First, we must differentiate between certain terms. Luminance
refers to the intensity of light emitted from a surface in a given direction. Saturation refers to
the state or process which contains the maximum amount of chroma, or purity; it is the highest
intensity of hue, free of admixture with white. There have been few studies of the effects of
highly saturated scenes on the PRNU fingerprint. Before we can compare low-color versus
high-color saturated photos against reference images from their camera source, we must
explore dynamic range. Dynamic range is the number of exposure stops between the lightest
white and the darkest black in a digital camera. It is tricky to determine the darkest usable
black, as the darker tones produce more noise pattern, while the lighter areas of a scene show
less visible noise pattern [27]. We have thus established that illumination is necessary for a
good PRNU readout; however, a scene that is too bright or too dark will disguise the pattern,
making it more difficult to read and compare to the camera's reference images. This leads us
to the saturation experiment.
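For concreteness, per-pixel colour saturation can be computed from RGB values with the standard HSV-style formula (max - min) / max; the function below is an illustrative sketch, not code from the study.

```python
import numpy as np

def hsv_saturation(rgb):
    # Per-pixel saturation: (max - min) / max over the R, G, B channels.
    # Fully saturated pure hues give 1.0; neutral greys and black give 0.0.
    rgb = rgb.astype(np.float64)
    cmax = rgb.max(axis=-1)
    cmin = rgb.min(axis=-1)
    return np.where(cmax > 0, (cmax - cmin) / np.maximum(cmax, 1e-12), 0.0)
```

Pure red scores 1.0 while a mid grey scores 0.0, which is the distinction between the high-color and low-color scenes used in the experiment that follows.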


CHAPTER IV
FRAMEWORK FOR MEASURING SATURATION EFFECTS ON PRNU
In this study, I looked at ten different camera models, shown in Table 1. The table
shows camera make and model, type of sensor, image resolution, ISO and image format. A
camera's ISO number represents its sensitivity to light; in other words, the lower the ISO
number, the less random noise is present and the easier the pattern noise is to detect [28]. I
took 30-40 flat field photos and averaged them into one solid reference PRNU fingerprint per
camera model. From there, I analyzed ten low-color saturated and ten high-color saturated
image captures per camera to determine whether and how saturation affects the PRNU
fingerprint. I calculated the correlation coefficient figures and examined which camera model
best preserved the pattern fingerprint in highly saturated environments. I noted the camera
make and model, the settings and the image capture conditions. First, I took 30-40 flat field
photos of a solid neutral color and applied the MATLAB code found in Figure 11 [29]. I then
took ten low-color photos with each camera, followed by another ten high-color photos with
each camera, some outside with natural light and some inside with artificial light. Once the
reference photos were averaged into one solid residual pattern and the subject photos were
taken, I compared them to each other to determine correlation coefficient figures in each
environment. After those numbers were calculated, I was able to determine if and how
saturation affects the PRNU readout.


Table 1 Devices and Specifications
DEVICE SENSOR RESOLUTION ISO FORMAT
Samsung Galaxy S3 CMOS 3264x2448 80 JPEG
LG G4 CMOS 5312x2988 50 JPEG
Nokia Lumia 635 No light/proximity sensor 2592x1456 100 JPEG
LG G3 Vigor CMOS 3264x2448 100 JPEG
Slate 8 Tablet CMOS 2560x1920 50 JPEG
Alcatel One Touch Tablet No light/proximity sensor 2560x1920 100 JPEG
GoPro Hero3 CMOS 2592x1944 100 JPEG
Kodak EasyShare V1003 CCD 3648x2736 80 JPEG
Canon Powershot G2 CCD 2272x1704 100 JPEG
Motorola Nexus6 CMOS 4160x3120 40 JPEG
% read + average all JPGs in a folder
clear all;
dir1 = uigetdir;        % select the JPG folder
cd(dir1);               % cd to dir1
D = dir('*.JPG');       % detect JPG files
[a,b] = size(D);        % a = number of JPG files
M1 = [D(1).name];       % first file
M1i = imread(M1);       % read image
I = im2double(M1i);     % convert to double
for k = 2:a
    M2 = [D(k).name];
    Mi = imread(M2);
    Mi = im2double(Mi);
    I = I + Mi;         % add new image to the previous
end
clear M1i
I = I/a;                % divide by the frame count to average
imwrite(I, name1);      % save the averaged image (name1: output filename)
disp('Average computed and saved.')
Figure 11 MATLAB code for frame averaging for residual pattern
Camera Studies
For the study, I took ten separate camera devices, calculated an averaged residual
photo and compared it to ten low-color photos and ten high-color photos from the same
devices. I then took an average of the correlation coefficient figures from each camera for the
low-color and the high-color photo study. For the purposes of this paper, details will be given
for three of the ten cameras studied. Picked at random, the cameras highlighted here are the
Nokia Lumia 635, the LG G3 Vigor and the Kodak EasyShare V1003. The Nokia and the LG
are both cellular devices; the Kodak is solely a digital camera. Figures 12-14 show samples of
low-color saturated and high-color saturated photos from the Kodak used in the study, along
with the Kodak PRNU reference image derived from flat field frame averaging. All of the
low-color photos taken with each camera were of doors, walls, furniture and anything else of
low saturation or very neutral colors. The high-color photos were mostly taken outside at high
noon, displaying bright blue skies, greenery and flowers. It must be noted these photos were
taken in daylight, but not with direct sunlight, sunbeams or sunbeam reflection. This ensured
the collection showed the deepest colors of the spectrum without heavy luminance or
reflection of strong sunlight. It should also be noted these photos consisted of many different
environments; however, they mostly consisted of heavy blues and greens. As mentioned
previously, PRNU patterns are most dominant in red or IR scenes.
Figure 12 Kodak Low-Color Image

Figure 13 Kodak High-Color Image


Figure 14 Kodak PRNU Reference Image
The images above, and similar images for each camera, were used in this experiment.
Using code written in MATLAB, I ran the residual image, or PRNU reference, against the
low-color photos and then against the high-color photos. To explain further, the code promotes
a clipping effect: a white box is displayed on the test image to show where pixel saturation is
being clipped for analysis. The clipping is expressed as a percentage, as shown in Figures
15-18, and from that analysis the correlation can be drawn. When viewing the results, it will
be found that some percentages of pixel saturation, or measurement via clipping, stop at 55%
as opposed to 70% or higher in other images. This is because each camera capture presents
different resolutions and dimensions. The clipping process starts as a perfect square of data
taken from the top left of the photo at 0% or 1%. This square increases by a certain percentage
with each level of pixel saturation clipped until the clipping reaches the largest perfect square
based on the size of the image and the increment of the increase. In an ideal experiment,
resolution and picture dimensions would be exactly the same from one camera model to
another; however, this cannot always be achieved, so the actual percentage of pixel saturation
varies.
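The growing-square clipping described above can be sketched as follows. The step count and linear growth rule are assumptions made for illustration, since the exact increments of the analysis code are not reproduced here.

```python
import numpy as np

def clipping_squares(height, width, steps=10):
    # Return (side, percent_of_image_area) pairs for a square anchored
    # at the top-left corner that grows toward the largest perfect
    # square fitting the image. The step count is an assumption.
    max_side = min(height, width)              # largest square that fits
    sides = np.linspace(max_side / steps, max_side, steps).astype(int)
    area = height * width
    return [(s, 100.0 * s * s / area) for s in sides]
```

For a 100 x 200 image the largest square covers only 50% of the total area, which illustrates why the maximum reported pixel-saturation percentage differs between cameras with different resolutions and aspect ratios.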


Figure 15 Pixel Saturation at 10%

Figure 16 Pixel Saturation at 34%

Figure 17 Pixel Saturation at 50%

Figure 18 Pixel Saturation at 70%
It should be noted this will not affect the final results when comparing correlation between one
camera model and another, or between low-color and high-color saturated photos. At each
level of pixel saturation, a correlation coefficient between -1 and 1 is assigned to determine
the linear relationship between subject images and the PRNU reference. These numbers
signify how strong the match is between a subject image and a camera possibly used to capture
that image. Tables 2-4 and Figures 19-21 show the Nokia, LG G3 and Kodak camera
correlations to their own PRNU in both low-color and high-color saturated environments. It
should be noted the correlation figures in these tables are of one low-color and one high-color
photo from each camera.
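The figures in Tables 2-4 are correlation coefficients of this kind; a minimal Pearson-style computation between a test residual and a reference pattern looks like the sketch below (array names are illustrative).

```python
import numpy as np

def correlation(residual, reference):
    # Pearson correlation coefficient between a test image's noise
    # residual and the camera's PRNU reference pattern, in [-1, 1].
    r = residual.astype(np.float64).ravel()
    p = reference.astype(np.float64).ravel()
    r -= r.mean()
    p -= p.mean()
    denom = np.sqrt((r * r).sum() * (p * p).sum())
    return float((r * p).sum() / denom)
```

Identical inputs give 1, sign-flipped inputs give -1, and unrelated noise hovers near 0, which is why the weak positives reported below still carry evidential meaning.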


Table 2 Nokia Lumia 635 Low vs. High-Color Saturation
Percentage of Pixel Saturation Low-Color Saturation High-Color Saturation
0% 0.30851 0.025203
1% 0.30624 0.024962
3% 0.30462 0.024815
5% 0.30269 0.024525
6% 0.29985 0.024159
8% 0.29708 0.023876
10% 0.29382 0.023423
12% 0.28972 0.022952
15% 0.28583 0.02268
17% 0.28048 0.021974
20% 0.27545 0.022316
24% 0.26923 0.021258
27% 0.26366 0.021192
31% 0.25609 0.020398
35% 0.24897 0.019251
40% 0.2405 0.018241
45% 0.23138 0.015694
50% 0.22134 0.01467
55% 0.21073 0.014389
Figure 19 Nokia Lumia 635 Plot Low-Color vs. High-Color Saturation


Table 3 LG G3 Vigor Low vs. High-Color Saturation
Percentage of Pixel Saturation Low-Color Saturation High-Color Saturation
0% 0.0010026 -0.00039006
1% 0.00098032 -0.00039935
2% 0.00089497 -0.00037365
4% 0.00093415 -0.00054247
5% 0.0010093 -0.00050572
8% 0.0010536 -0.00050318
10% 0.0011603 -0.00056076
13% 0.0010981 -0.00057124
16% 0.0010595 -0.00053795
20% 0.0010098 -0.00054959
24% 0.0010974 -0.00073298
28% 0.0012627 -0.00051283
33% 0.0011283 -0.00042726
38% 0.00089573 -0.00045035
44% 0.0011883 -0.00036856
50% 0.0015186 -0.0004383
56% 0.0018156 -0.00037294
63% 0.0011286 -0.00010892
70% 0.0011585 0.00054099
77% 0.0011454 0.00013703
Figure 20 LG G3 Plot Low-Color vs. High-Color Saturation


Table 4 Kodak EasyShare V1003 Low vs. High-Color Saturation
Percentage of Pixel Saturation Low-Color Saturation High-Color Saturation
0% 0.21285 0.085872
1% 0.21016 0.084553
2% 0.20763 0.084215
4% 0.20472 0.083154
5% 0.20148 0.081153
8% 0.198 0.079486
10% 0.19403 0.077688
13% 0.19 0.0759
16% 0.18494 0.074382
20% 0.18004 0.072829
24% 0.17472 0.069823
29% 0.16966 0.06686
33% 0.1632 0.062687
39% 0.15664 0.061329
44% 0.1494 0.058998
50% 0.14232 0.058393
57% 0.13366 0.056248
63% 0.12448 0.053903
70% 0.1126 0.0515
Figure 21 Kodak EasyShare Plot Low-Color vs. High-Color Saturation


Based on the correlation figures from the Nokia Lumia 635, the low-color saturated
photo shows a weak positive relationship through a shaky linear trend, with numbers ranging
between 0.2 and 0.3. Although the saturated photo shows lower correlation to the camera's
actual PRNU reference image, ranging between 0.01 and 0.02, it still falls under the same
category: weak positive. The correlation figures for the LG G3 show a large difference
between low-color and high-color environments, great enough to fall into different categories.
The low-color photo, with a range of 0.001 to 0.0011, falls under a weak positive correlation,
while the high-color photo, with a range of -0.0003 to 0.0001, falls under a weak negative
correlation. The correlation figures for the Kodak EasyShare are similar to those of the Nokia
camera. The low-color photo produced better results than the high-color photo; however, all
correlation coefficients are between 0 and 0.3, which indicates a weak positive linear
relationship to the PRNU pattern for that camera.
Final Results
After reviewing all the relationships between subject images and PRNU patterns, I
compared the correlation coefficient figures to make a determination between the different
camera models. Tables 5-6 show the breakdown of each camera and the results of the low-
color and high-color saturation photos. Table 5 shows the lowest and highest correlations of
all ten low-color saturation photos taken for each camera model. Table 6 displays the lowest
and highest correlations of all ten high-color saturation photos taken for each camera model.


Table 5 Low-Color Saturation Correlation Coefficients per Camera Model
Device Low-Color Saturation (Lowest CC) Low-Color Saturation (Highest CC)
Samsung S3 0.039618 0.095084
LG G4 0.00194 0.10441
Nokia Lumia 635 0.012831 0.30851
LG G3 Vigor 0.000211 0.0010026
Slate 8 Tablet 0.13728 0.38318
Alcatel Tablet 0.012415 0.10434
GoPro Hero3 0.039513 0.16715
Kodak EasyShare 0.073982 0.21216
Canon Powershot 0.0063403 0.026674
Motorola Nexus6 0.088532 0.16229
Table 6 High-Color Saturation Correlation Coefficients per Camera Model
Device High-Color Saturation (Lowest CC) High-Color Saturation (Highest CC)
Samsung S3 0.02052 0.038594
LG G4 0.023825 0.055266
Nokia Lumia 635 0.014389 0.049567
LG G3 Vigor -0.000162 0.0030136
Slate 8 Tablet 0.092558 0.32575
Alcatel Tablet 0.032627 0.074759
GoPro Hero3 0.018619 0.055404
Kodak EasyShare 0.050754 0.17164
Canon Powershot 0.0049013 0.029744
Motorola Nexus6 0.03563 0.16579
It was found that the Nokia Lumia 635, the Alcatel Tablet, the Slate 8 Tablet, the GoPro
Hero3 and the Nexus6 produced much better results when the PRNU pattern was compared
to a low-color image versus a high-color image. High-color saturation clearly made a
difference in camera identification with those particular devices. It was also found that in
most models tested, the higher the pixel saturation level, the closer the low-color and high-color
correlations were to each other. Based on the correlation of each photo, whether low-color or
high-color saturation, it can be determined that all figures fall roughly between 0 and 0.3+,
indicating either a weak positive, weak negative or, in some cases, moderate positive
correlation. Furthermore, it was found that the ISO level and whether photos were taken
indoors or outdoors were of little importance; however, the sensor type did play a significant
role in the study. Two of the ten cameras contained CCD sensors: the Kodak and the Canon
Powershot. The PRNU fingerprints of these cameras were not grossly affected by high-color
saturation compared to the CMOS cameras researched. Figures 22-28 further illustrate low-
color vs. high-color saturation correlation coefficients in the remaining seven devices.
Figure 22 Samsung Galaxy S3 Plot of Low-Color vs. High-Color Saturation


Figure 23 LG G4 Plot of Low-Color vs. High-Color Saturation
Figure 24 Slate 8 Tablet Plot of Low-Color vs. High-Color Saturation


Figure 25 Alcatel Tablet Plot of Low-Color vs. High-Color Saturation
Figure 26 GoPro Hero3 Plot of Low-Color vs. High-Color Saturation


Figure 27 Canon Powershot G2 Plot of Low-Color vs. High-Color Saturation
Figure 28 Nexus6 Plot of Low-Color vs. High-Color Saturation


CHAPTER V
CONCLUSION
Through this study, it has been noted that a camera's sensor, the ISO (the sensor's
sensitivity to light) and the scene environment all play a role in determining a camera's PRNU
pattern-to-subject-image correlation. This study was conducted to determine whether
saturation affects the PRNU pattern readout and, if so, whether it could be considered a form
of anti-forensics. After the research concluded, it was determined that high-color saturated
images do affect a camera's PRNU fingerprint, especially in certain tablets and camera models;
however, they do not affect it enough to create a false positive, nor do they totally prevent
positive camera identification. They do, however, disguise the fingerprint enough to lower the
positive correlation figures. High-color saturation does not degrade the PRNU fingerprint
enough to enable anti-forensics, but these results will aid researchers and examiners in being
mindful of the environment of an image capture. Factors which should be considered when
explaining camera identification in relation to the PRNU fingerprint are color saturation
levels, camera models and their sensors, whether the scene of a capture is high in infrared or
red hues, how the flat field reference images were captured and how frame averaging was
conducted. If an examiner understands all the factors which cause weak positives, it will allow
him to further explain the linear relationships in his report.

Based on the findings of this experiment and the research conducted, it can be said the
scientific community could benefit from additional research surrounding frame averaging
with respect to exposure time of reference images, and from testing the PRNU fingerprint
against different camera battery levels and environmental temperatures. There could be more
analysis of dark signal non-uniformity (DSNU) regarding the thermal component, which
depends on the temperature and exposure times of capture. There could be more emphasis on
decomposed PRNU (DPRNU) experiments and how well the PRNU fingerprint holds up when
the artificial component is separated from the physical component to allow PRNU collection
without the interference of interpolation noise. These are just a few areas which could prove
very valuable to the community for further authentication purposes.


REFERENCES
[1] Nakamura, Junichi. Image Sensors and Signal Processing for Digital Still Cameras. Boca
Raton, FL: Taylor & Francis, 2005. Web. 29 Feb. 2016.
gital+photography#v=onepage&q=digital%20photography&f=false>.
[2] Ali, Tanweer. Protecting Your Visual Identity in the Digital Age. Amsterdam: Innovation
Labs, 2015. Web. 29 Feb. 2016. visual-identity-in-the-digital-age/>.
[3] Waterloo, Ontario: Teledyne DALSA, 2016. CCD Vs. CMOS. Web. 29 Feb. 2016.
.
[4] Esser, Felix. The Phoblographer, 2013. An Introduction To And Brief History Of Digital
Imaging Sensor Technologies. Web. 29 Feb. 2016.
digital-imaging-sensor-technologies/#.VushulUrIdV>.
[5] Knight, Simon, Simon Moschou, and Matthew Sorell. Analysis of Sensor Photo Response
Non-Uniformity in RAW Images. Adelaide, Australia: ICST Institute For Computer Sciences,
2009. Web. 29 Feb. 2016.
.
[6] Mukherjee, Rajib. CCD Vs. CMOS: A Comparative Analysis of the Two Most Popular
Digital Sensor Technologies. Udemy, Inc, 2016. Udemy Blog. Web. 29 Feb. 2016.
.
[7] Darden, Jamel. Lecture 4: Introduction to CCDs. 2006. SlidePlayer. Web. 29 Feb. 2016.
.
[8] Calizo, Irene G. Reset Noise in CMOS Image Sensors. San Jose, CA: San Jose State
University, 2005. SJSUScholarWorks. Web. 29 Feb. 2016.
.
[9] Sensor Noise. United Kingdom: Stemmer Imaging, 2016. Web. 29 Feb. 2016.
.


[10] Hilgarth, Alexander. Noise in Image Sensors. Berlin: Technische Universität Berlin,
2016. Web. 29 Feb. 2016. berlin.de/fileadmin/fgl44/Courses/10WS/pdci/talks/noise_sensors.pdf>.
[11] Muammar, Hani. Source Camera Identification Using Image Sensor PRNU
Pattern. London: Communications And Signal Processing Research Group, 2014. Imperial
College Communications And Signal Processing Research Group. Web. 29 Feb. 2016.
.
[12] Burke, Michael W. Image Acquisition. London: Chapman & Hall, 1996. Web. 29 Feb.
2016.
+to+measure+pmu& source=bl&ots=H8 Af2W 4hI6& sig=ER-HuHRCj T 4 Y p_KDXP7ytT -
MjR4&hl=en&sa=X&ved=0ahUKEwi6pPSwlJXKAhUGFR4KHQtiDMI4ChDoAQg7MAQ
#v=onepage&q=how%20to%20measure%20prnu&f=false>.
[13] Fridrich, Jessica. Digital Image Forensics Using Sensor Noise. Web. 29 Feb. 2016.
.
[14] Ratner, Ph.D., Bruce. The Correlation Coefficient: Definition. North Woodmere, NY:
DM Stat-1 Consulting, 2016. DMSTAT-1 Articles: The Only Online Newsletter About
Quantitative Methods In Direct Marketing. Web. 29 Feb. 2016.
.
[15] Theuwissen, Albert. How to Measure: Fixed-Pattern Noise in Light or PRNU
(1). Belgium: Harvest Imaging, 2012. Harvest Imaging. Web. 29 Feb. 2016.
.
[16] Li, Chang-Tsun, and Yue Li. Colour-Decoupled Photo Response Non-Uniformity for
Digital Image Forensics. IEEE, 2012. The Warwick Research Archive Portal (WRAP). Web.
29 Feb. 2016.
.


[17] Hu, Yongjian, Binghua Yu, and Chao Jian. Source Camera Identification Using Large
Components of Sensor Pattern Noise. IEEE Xplore, 2010. Research Gate. Web. 29 Feb.
2016.
_Identification_Using_Large_Components_of_Sensor_Pattem_Noise/links/0912f4fea2b894cc8d000000.pdf>.
[18] Gloe, Thomas, Stefan Pfennig, and Matthias Kirchner. ACM Multimedia and Security
Workshop Unexpected Artifacts in PRNU-Based Camera Identification: A 'Dresden Image
Database' Case-Study. Coventry, UK: Dartmouth, 2012. Dartmouth College: Digital
Forensic Database. Web. 29 Feb. 2016.
.
[19] ISL-3200 Advanced Analysis Guide: Digital Imaging Sensor Interface and Test
Solution. San Francisco, CA: Jova Solutions, 2009. Web. 1 Mar. 2016.
ed%20Analysis%20Guide.pdf>.
[20] Van Houten, Wiger, and Zeno Geradts. Using Sensor Noise to Identify Low Resolution
Compressed Videos from YouTube. Netherlands: Netherlands Forensic Institute, Digital
Technology & Biometrics. Web. 1 Mar. 2016.
.
[21] Impixel. MathWorks, 2006. MathWorks Documentation. Web. 1 Mar. 2016.
.
[22] Discrete Cosine Transform. MathWorks, 2006. MathWorks Documentation. Web. 1
Mar. 2016. transform.html?refresh=true>.
[23] REWIND: Reverse Engineering of Audio-Visual Content Data. Imperial, 2011. State-of-
the-Art On Multimedia Footprint Detection. Web. 1 Mar. 2016.
REWINDD31 final.pdf>.
[24] Jerian, Martino. AMPED Authenticate Effective Photo Forensics. Trieste, Italy: AMPED
SRL, Web. 1 Mar. 2016.
.


[25] Rosenfeld, Kurt, Taha Sencar, and Nasir Memon. A Study of the Robustness of PRNU-
based Camera Identification. Web. 3 Mar. 2016.
.
[26] Bouman, K. L., N. Khanna, and E. J. Delp. Digital Image Forensics Through the Use of
Noise Reference Patterns. West Lafayette, IN: Purdue University, School Of Electrical And
Computer Engineering. Web. 3 Mar. 2016.
.
[27] Edberg, Timothy. Measuring Digital Dynamic Range. Phototech Magazine,
2007. Phototech Magazine. Web. 3 Mar. 2016. dynamic-range/>.
[28] Martinec, Emil. Noise, Dynamic Range and Bit Depth in Digital SLRs. Chicago, IL:
University Of Chicago, 2008. University Of Chicago. Web. 3 Mar. 2016.
.
[29] Grigoras, Ph.D., Catalin. MATLAB for Forensic Video and Image Analysis. Denver, CO:
National Center For Media Forensics, 2008. University Of Colorado Denver: National Center
For Media Forensics. Web. 1 Jan. 2016.


Full Text

PAGE 1

DIGITAL PHOTOGRAPHY ANALYSIS: ANALYTICAL FRAMEWORK FOR MEASURING THE EFFECTS OF SATURATION ON PHOTO RESPONSE NON UN I FORMITY by KRIS HENRY B.S., University of Central Florida, 2003 A thesis submitted to the Fa culty of the Graduate School of the Un iversity of Colorado in partial fulfillment of the requirements for the degree of Master of Science Recording Arts 2016

PAGE 2

ii 2016 KRIS HENRY ALL RIGHTS RESERVED

PAGE 3

iii This t hesis for the Master of Science degree by Kris Henry has been approved for the Recording Arts Program by Catalin Grigoras Jeff M. Smith Scott Burgess April 29, 2016

PAGE 4

iv Henry, Kris (M.S., Recording Arts ) Digital Photography Analysis: Analytical Framework for Measuring the Effects of Saturation o n Photo Respo nse N on Uniformity Thesis directed by Associate P rofessor Catalin Grigoras ABSTRACT Over the years and through various research it has been found there are many ways to analyze digital photography to determine its source camera for the original captur e. There are many factors to conside r when analyzing photography, such as t he device used the environment of the capture, the softwar e used to process the image and any alterations or editing which may have been done One very important technique of cam era source identification is to analyze photo response non uniformity (PRNU). It has been found every camera, or more specifically every camera sensor reacts differently in various conditions The photo response non uniformity acts as a fingerprint fo r a camera. In this paper, we will explore the various techniques used to determine the source of a photo We will also explore how the unique PRNU fingerprint responds to various situations, including environments of high saturation, artificial light and natural light. Chapter 4 will provide the framework for analyzing such images through multiple case studies using different devices. This study will provide a basis and explanation of how multiple levels of saturation can affect PRNU through s sensor during capture. The form and content of this abstract are approved. I recommend its publication. Approved: Catalin Grigoras


TABLE OF CONTENTS

CHAPTER

I. INTRODUCTION
     History of Camera Sensors
     Digital Photography
     Authentication

II. PHOTO RESPONSE NON-UNIFORMITY
     Identification
     Measurements

III. ARTIFACTS
     Defective Pixels
     Compression
     Alterations
     Saturation

IV. FRAMEWORK FOR MEASURING SATURATION EFFECTS ON PRNU
     Camera Studies

V. CONCLUSION

REFERENCES


LIST OF FIGURES

Figure
1. Camera Module of Mobile Phones
2. Digital Image Acquisition Pipeline
3. Example of Random Noise
4. Example of Fixed Pattern Noise
5. Example of PRNU Pattern from a Kodak V550 Camera (magnified)
6. Sensor Output with Light Input as a Function of Different Exposure Times
7. Denoising Filters Used to Obtain Residual Image of Subject or Test Image
8. Diagram of Large Component Extraction Algorithm
9. Nikon S710 Diagonal Artifact
10. Casio P Maps with Artifacts at Various Focal Lengths in Lens Distortion
11. MATLAB Code for Frame Averaging for Residual Pattern
12. Kodak Low Color
13. Kodak High Color
15. Pixel Saturation at 10%
16. Pixel Saturation at 34%
17. Pixel Saturation at 50%
18. Pixel Saturation at 70%
19. Nokia Lumia 635 Plot Low Color vs High Color Saturation
20. LG G3 Plot Low Color vs High Color Saturation


21. Kodak EasyShare Plot Low Color vs High Color Saturation
22. Samsung Galaxy S3 Plot Low Color vs High Color Saturation
23. LG G4 Plot Low Color vs High Color Saturation
24. Slate 8 Tablet Plot Low Color vs High Color Saturation
25. Alcatel Tablet Plot Low Color vs High Color Saturation
26. GoPro Hero3 Plot Low Color vs High Color Saturation
27. Canon Powershot G2 Plot Low Color vs High Color Saturation
28. Nexus6 Plot Low Color vs High Color Saturation


LIST OF TABLES

Table
1. Devices and Specifications
2. Nokia Lumia 635 Low Color vs High Color Saturation
3. LG G3 Vigor Low Color vs High Color Saturation
4. Kodak EasyShare V1003 Low Color vs High Color Saturation
5. Low Color Saturation Correlation Coefficients
6. High Color Saturation Correlation Coefficients


CHAPTER I
INTRODUCTION

This thesis involves studying the background of camera sensors, how they react to the environment, specifically pertaining to light sources, and above all, how a camera's sensor reacts to saturation in capture. There are many ways to authenticate an image; however, few studies have been conducted on the effects of saturation on photo response non-uniformity (PRNU), or a camera's sensor fingerprint. Every camera is different; this also pertains to identical makes and models. It is important to understand how PRNU relates to image authentication. Unfortunately, as with any forensic science discovery, there are also anti-forensics to consider. As our community moves forward in digital imaging research, there are others who may be using the research for ill practice. Determining the effects on authentication measurement techniques will allow an understanding of how one might employ anti-forensics in this area. This paper will focus mainly on the effects of saturation on PRNU through original camera studies, both from digital still and mobile phone cameras. It is here where light will be shed as to if and how one can intentionally produce false readings for personal gain.

History of Camera Sensors

It has been defined that an image is a variation of light intensity of a reflection as a function of position on a plane; to capture an image, however, means to convert the information to signals which are then stored on a device. In electronic photography, there are two primary methods of storage: analog and digital. In analog cameras, image signals from the sensor are converted and stored as video signals; whereas, in digital cameras, they are converted and stored as digital signals [1]. Before information can be converted into a signal


and stored on a device, the camera itself must have the proper hardware to do so. The focus of this study will be digital cameras, mainly cameras in mobile devices. There are many components to even the smallest of digital cameras, such as lens barrels, multiple lenses, a lens mount and the information chip, as seen in Figure 1 [2]. Located on the information chip is the sensor, or the heart of the camera.

Figure 1 Camera Module of Mobile Phones

To understand the sensor, one must first comprehend how information reaches it. A camera lens is responsible for focusing light onto the sensor. The image sensor converts this light into electrical signals. As an image is captured, the light passes through the lens and falls onto the sensor, which consists of small photo detectors. These detectors are also referred to as pixels. There are two different types of image sensors: Charged Coupled Device (CCD) and Complementary Metal Oxide Semiconductor (CMOS). Regardless of type, sensors cannot distinguish between various light wavelengths. In other words, the sensors cannot identify individual colors. It is the responsibility of the filter in front of the sensor to


assign color to the corresponding pixels of a capture [2]. There are a few differences between the CCD and the CMOS sensor. It was October of 1969 when George Smith and Willard Boyle invented the charge coupled device (CCD). This was widely used in digital photography in analog cameras because it produced very high quality images. Later, however, the CMOS sensor was invented and offered higher transfer speeds than the analog chip known as the CCD. It is here where digital photography presented more options [3].

Digital Photography

As previously mentioned, the heart of a digital camera is its sensor. There is a very important difference in the two sensors known as CCD and CMOS. The difference lies in how charges are passed through the pixels through output nodes and converted into voltage. In a CCD, the charge is buffered and sent off chip as an analog signal; in other words, the work of converting the charge into a signal is outsourced off the sensor. Because all of the pixels can be devoted to light capture, the CCD produces high quality photographs. This contributed to the CCD's dominance until the CMOS model could be put into widespread use; the CMOS required lower power consumption and lowered fabrication costs. The CMOS sensor works by each pixel having its own charge to voltage conversion and includes amplifiers, noise correction and digitization circuits, so the output of the chip consists of digital bits. Because of this, the uniformity, or the quality, is lower; however, the readout is parallel and allows for higher transfer speeds [3].


There is a common thread between both the CCD and CMOS sensors, and that is the way color is read during capture. As mentioned before, the sensor cannot see color on its own. It relies on a filter in front of the sensor to assign color to each pixel. This filter is referred to as the Bayer Pattern color filter array (CFA). The Bayer Pattern works by placing two green pixels, one red pixel, and one blue pixel in each square of two by two pixels. By this process, each pixel can register one of the three primary colors and any missing color values can be gathered from its neighboring pixels [4]. So the process starts as a scene is captured: it passes through the lens of a camera, through the filters and then to the CFA. Once color is assigned to the pixels of the sensor, the RAW image is then stored before processing and compression stages occur within the camera [5]. This digital photography flow can be seen in Figure 2.

Figure 2 Digital Image Acquisition Pipeline

Another common thread between the CCD and CMOS sensors is what is called sensor noise. Both produce sensor noise; however, how each sensor responds to and corrects this noise varies between the two sensor types. For example, in very saturated, or highly lit, environments, overexposure of individual pixels can occur. This is referred to as blooming. The CMOS sensor presents better noise correction in these situations as it handles signal conversion for each pixel individually; whereas, the CCD sensor handles signal conversion as a whole and is dependent on conversion off sensor [6]. In a CCD sensor, there


are four different types of noise sources: Read Noise, Dark Current, Photon Noise and Pixel Response Non-Uniformity. Read Noise (RN) is noise caused by thermally induced motions of electrons in the output amplifier. This noise can limit the performance of a CCD sensor, but can be reduced with lengthier readout times. Simply stated, the faster the readout, the more noise is present. Dark Current is a noise which is caused by thermally generated electrons in the CCD sensor, but can be reduced by cooling the CCD. Photon Noise is when the sensor detects photons in an unpredictable fashion. To explain, photons are distributed to the sensor, pixel by pixel. So basically, Photon Noise is when the pixels acquire unequal amounts of photons across the entire sensor. It is an uneven distribution of photons. It is also referred to as shot noise. Pixel Response Non-Uniformity, simply put, is a defect in the silicon of the sensor by the manufacturer. This is why every camera sensor possesses a different fixed pattern noise. To correct this defect and remove the noise, flat fielding, or frame averaging, can be used as a technique [7]. Much like the CCD sensor, the CMOS sensor also has noise obstacles. Notable issues with the CMOS sensor are high level dark current, shot noise and reset noise. Shot noise has already been addressed with the CCD sensor; however, the CMOS sensor also possesses reset noise. It is produced from thermal noise causing fluctuations in voltage in the reset level for any given pixel [8]. The noises produced from sensors create more characteristics to be studied in each individual camera. These noises will be explained in more detail later.

Authentication

There are many ways to determine a source camera through image authentication. Such information can be found in an image's source properties and through its metadata. A lot of research has been conducted


around analysis of these techniques, to include using Exiftool to review file format/EXIF data and camera properties, using HxD, JPEGSnoop, X-Ways or MATLAB to run metadata keywords, and even researching how a camera distinctively compresses its images. These techniques and tools, although beneficial in providing file data information, are not to be considered authentication tools, but rather tools which aid in obtaining further information about an image's source. Some of these methods, however, such as analyzing hex data, are limited. Every camera's sensor is imperfect from the start, straight from the manufacturer. No sensor, nor its pattern, is identical. Each camera produces its own unique noise. This is much like ballistics: just as scientists can determine from which firearm a bullet was fired, they can also determine which camera was used for a photo capture. There are many different types of camera noises to consider: temporal noise, photon noise, dark current noise, readout noise, quantization noise, spatial noise, fixed pattern noise and photo response non-uniformity (gain noise). Temporal noise is a combination of all noises which can change from frame to frame; it is considered the most dominant noise source for such sensors. Photon noise is caused by the uneven distribution of photons into the pixels. Dark current noise is created by electrons that evolve through thermal processes of the pixel. This noise can be reduced by cooling of the sensor. Readout noise occurs when the charge, or electrons, is converted into voltage. This noise is directly related to readout speed. As previously mentioned, the faster the readout speed, the more noise is apparent. Quantization noise occurs when A/D conversion takes place, or when voltage is converted into a digital value. Spatial noise occurs when the pixels are exposed to a homogeneous light and react


differently due to varying sensitivity levels. It is most dominant in CMOS sensors, as the pixels are read out through different circuits. Fixed pattern noise, also called dark signal non-uniformity (DSNU), is the difference between the lowest and highest measured values for all active pixels in the array. Photo response non-uniformity (PRNU) has often been referred to as the difference between what the pixel response should be and what the true response of the pixels is [9].
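The noise taxonomy above is often summarized in a simple multiplicative sensor model: each pixel's output is its true signal scaled by a fixed per-pixel gain (the PRNU) plus the random noises. The sketch below simulates that model with illustrative values of my own choosing, not taken from any real sensor, to show why frame averaging exposes the fixed pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64

# Fixed per-pixel PRNU gain factors (the sensor "fingerprint"):
# small deviations around 1.0 that stay constant across captures.
prnu = 1.0 + 0.02 * rng.standard_normal((h, w))

def capture(scene, dark_current=2.0, read_noise=1.5):
    """Simulate one capture: multiplicative PRNU plus additive random noises."""
    shot = rng.poisson(scene).astype(float) - scene      # photon (shot) noise
    dark = dark_current + read_noise * rng.standard_normal((h, w))
    return prnu * scene + shot + dark

scene = np.full((h, w), 100.0)           # uniform (flat field) illumination
frames = [capture(scene) for _ in range(40)]
avg = np.mean(frames, axis=0)            # frame averaging suppresses random noise

# The averaged flat field tracks the fixed PRNU pattern far better than
# any single frame does, because the random noises average toward zero.
residual = avg - avg.mean()
pattern = scene * (prnu - prnu.mean())
cc_avg = np.corrcoef(residual.ravel(), pattern.ravel())[0, 1]
```

With these illustrative numbers, the correlation between the averaged residual and the true PRNU pattern is far higher than for any single frame, which is exactly the motivation for the flat fielding technique mentioned above.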


CHAPTER II
PHOTO RESPONSE NON-UNIFORMITY

There are two main categories of noise which fall under PRNU, or a sensor's imperfections and signature. Those categories are temporal noise, or random noise, and fixed pattern noise (FPN). Random noise is when the location and time of occurrence of pixel differences is unpredictable. Fixed noise, on the other hand, is when occurrence is based on location due to an underlying structure. Fixed noise will appear constant in every photo taken from that particular device. Fixed pattern noise consists of dark signal non-uniformity (DSNU) and photo response non-uniformity (PRNU). DSNU is considered an offset between pixels in the dark. It is measured in the absence of light and can be corrected by subtracting a dark frame. PRNU is just the opposite. It is seen as a variation between pixels under a certain amount of illumination. It is corrected by offset and gain for each pixel. PRNU is a signal dependent noise and is created due to a variation amongst pixels in their sensitivity to light. Figures 3-5 show examples of random noise, fixed pattern noise and a typical pattern of PRNU [10] [11].

Figure 3 Example of Random Noise

Figure 4 Example of Fixed Pattern Noise


Figure 5 Example of PRNU Pattern from a Kodak V550 Camera (magnified)

Identification

Photo response non-uniformity can be divided into two separate components: individual detector uniformity and array-wide uniformity. The variation in responsivity between adjacent pixels is considered high frequency PRNU. High frequency PRNU is when the FPN pattern appears more evident in the brighter areas of an image than the darker areas. The variation across the array as a whole is considered the low frequency PRNU pattern. Low frequency PRNU is a good measure to use when evaluating the variations in responsiveness from one side of the photo array to the other. It is also referred to as photo response shading. Generally speaking, when manufacturers refer to PRNU, they are referring to the low frequency rather than the high frequency measure. The low frequency measure displays the difference in response levels between the most and the least sensitive pixels across the sensor array under uniform illumination conditions. The degree of non-uniformity in PRNU is related to a few factors: amplitude of the non-uniform pixels and the column amplitude. When measuring PRNU, it is important to note what causes a weak or strong signal pattern for better comparison results. For example, red and infrared (IR) light produce a stronger PRNU pattern than the blue and green wavelengths captured. This is due to the deeper penetration lengths at the end of the spectrum (the redder end). At the redder end


of the spectrum, photons encounter more defect sites and material variations. It is important to understand these obstacles prior to selecting a model to use for PRNU fingerprint measurement. It is also crucial to remember the purpose of such measurement and the suspect images involved [12]. There are many uses for PRNU: forensic identification, device linking, recovery of processing history and detection of digital forgeries. These tasks aid in criminal investigations and can link a suspect to a particular camera and/or contraband image. PRNU is typically reliable for making these determinations and, although PRNU is stochastic, or random in nature, its life span is very stable. PRNU has multiple credible properties. It has dimensionality: it provides large information content. It has universality: all sensors will exhibit their own PRNU pattern. It has generality: the fingerprint is present in every capture regardless of camera settings or content environment. It has stability: the pattern can withstand time under various environmental conditions such as temperature and humidity. Lastly, PRNU has robustness: it can survive lossy compression, filtering, gamma correction and other forms of processing within the camera or through outsourced software [13].

Measurements

There are a few different methods of measuring a sensor's PRNU. The most widely accepted model consists of running a reference pattern from a particular camera against the image in question. To start, a reference pattern must be obtained by capturing 30-50 flat field images with the device. A flat field is a solid color, preferably light, displaying illumination without heavy saturation. Once these frames are captured, an average must be taken for the best possible estimate of the sensor pattern. Using MATLAB, a code can be run


to obtain the correlation coefficient between the reference pattern and the subject image to determine their linear relationship. A correlation coefficient (CC) is a measure of the strength of the straight line, or linear, relationship between two variables, in this case the reference image and the subject image. The CC will have a value ranging between +1 and -1. To interpret the correlation, it must be understood what the values indicate. A value of 0 indicates no linear relationship between reference image and subject image. A value of +1 indicates a perfect positive linear relationship: both variables increase in value through an exact linear rule. A value of -1 indicates a perfect negative relationship: one variable increases in value while the other decreases in value. Values between 0 and 0.3 (0 and -0.3) indicate a weak positive (or negative) relationship through a shaky linear rule. Values between 0.3 and 0.7 (-0.3 and -0.7) indicate a moderate positive (or negative) relationship through a fuzzy-firm linear rule. Values between 0.7 and 1.0 (-0.7 and -1.0) indicate a strong positive (or negative) linear relationship through a firm linear rule between the two variables being examined. All of these values can make the linear determination if the relationship is already known. If the relationship is unknown or nonlinear, the CC will be useless and questionable [14]. After covering the basic model and correlation coefficient values, other models will now be discussed. The second model for PRNU measurement consists of measuring the CC just the same as mentioned in the first model; however, in this model, the exposure time of the reference images is taken into account. In other words, instead of taking flat field images one after another to average together for one solid PRNU reference pattern, the flat field images will be taken at various exposure or integration times.
This allows the sensor to cool between captures and reduces thermal noise and dark current in each of the 20-50 reference images. So


as a result, each reference photo is a strong pattern reference prior to the frame averaging process. Figure 6 shows the sensor output with light input as a function of the exposure time. Note the sensor starts to saturate at 400 ms [15].

Figure 6 Sensor output with light input as a function of different exposure times

The next model is referred to as color decoupled PRNU (CD-PRNU). It is believed the color interpolation process of photo capture creates noise which affects the readout of the PRNU pattern. When a scene passes through the lens and into the color filter array (CFA), the camera assigns one color per pixel. This is part of the color interpolation process within the camera. Artificial colors are obtained through this process which are not a part of the scene itself nor the camera hardware. Color decoupled PRNU is a method which proposes to decompose each color channel into four sub-images, then extract the PRNU noise from each sub-image. This will eliminate the additional, artificial noise created by on-board processing. Once the sub-images are obtained, they are compared against the subject image from the same camera. It was noted during the experiment of this method that CD-PRNU correlation coefficient figures are slightly higher than those of the traditional PRNU method. This is to infer the


CD-PRNU provides more positive results in strong linear relationships between reference images and subject images [16]. The last method to measure PRNU is to utilize the traditional PRNU frame averaging method, but also further adopt a model which entails comparing only the larger components of the two signals. It is suggested this method is more accurate for determining the linear relationship between a reference image and a subject image because it increases the correct detection possibility and decreases the computational complexity of using the whole PRNU pattern as opposed to just the larger components it carries. Through the algorithm of this method, it was found that highly saturated images, or even areas of an image, carry no PRNU information; whereas, a dark location carries a weak PRNU signal. The idea is to obtain a PRNU pattern from illumination, with little dark current and no high saturation. In the first, most often used, model of PRNU extraction previously mentioned, it is necessary to extract PRNU by subtracting the denoised version from the original image. So to obtain the correlation coefficient, the reference image is directly compared to the subject image for the PRNU pattern. In this algorithm, it is proposed that the PRNU pattern should be extracted from both the reference images and the subject image, both undergoing removal of color interpolation; the reference residual images are then averaged and compared to the subject residual image. In other words, the reference images have their PRNU pattern extracted individually prior to frame averaging, then compared to the pattern, itself, of the subject image.
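The channel decomposition at the heart of the CD-PRNU model described above — splitting a color channel into four sub-images by CFA position so that physically captured values are separated from interpolated ones — can be sketched as follows (a minimal illustration, not the original authors' implementation):

```python
import numpy as np

def decompose_channel(channel):
    """Split one color channel into its four CFA-position sub-images.

    Pixels at positions (even, even), (even, odd), (odd, even) and
    (odd, odd) come from different points in the interpolation process,
    so CD-PRNU treats each 2x2 phase as a separate sub-image and
    extracts noise from each one independently.
    """
    return [channel[r::2, c::2] for r in (0, 1) for c in (0, 1)]

# Tiny worked example: a 4x4 channel decomposes into four 2x2 sub-images.
channel = np.arange(16, dtype=float).reshape(4, 4)
subs = decompose_channel(channel)
# Noise would then be extracted per sub-image and the results recombined.
```

Each sub-image is half the size of the channel in each dimension, which is why the method also reduces the amount of interpolated data mixed into any one extracted pattern.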
Figure 7 displays the results of PRNU extraction from a subject image and Figure 8 shows the model algorithm and how to utilize the PRNU pattern once obtained. This method of PRNU extraction and comparison is preferred by some, as traditional PRNU extraction poses many issues, such as the addition of shaping noise, background noise left behind as the extraction is not perfectly content


adaptive, evidence of small magnitude high frequency noise and, as mentioned in the previous CD-PRNU model, the fact that cameras can only capture one color per pixel [17]. These are all valid factors which affect the readout of the PRNU pattern. Unfortunately, they are only a few of many.

Figure 7 Denoising filters used to obtain residual image of subject or test image

Figure 8 Diagram of large component extraction algorithm
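Whichever extraction model is chosen, the core comparison stays the same: build a residual for each image by subtracting a denoised version, average the reference residuals into a fingerprint, and correlate that fingerprint with the questioned image's residual. The sketch below illustrates that pipeline, using a crude box blur as a stand-in for the wavelet denoising filters shown in Figure 7, and synthetic images with made-up numbers:

```python
import numpy as np

def denoise(img, k=3):
    """Crude k-by-k box-blur denoiser (a stand-in for wavelet filters)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dr in range(k):
        for dc in range(k):
            out += p[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out / (k * k)

def residual(img):
    """Noise residual: original minus its denoised version."""
    return img - denoise(img)

def fingerprint(reference_images):
    """Average the residuals of the flat-field reference frames."""
    return np.mean([residual(im) for im in reference_images], axis=0)

def correlate(fp, questioned):
    """Correlation coefficient between fingerprint and questioned residual."""
    return np.corrcoef(fp.ravel(), residual(questioned).ravel())[0, 1]

# Synthetic demonstration: a camera with a fixed PRNU gain pattern.
rng = np.random.default_rng(1)
prnu = 0.05 * rng.standard_normal((32, 32))
refs = [100 * (1 + prnu) + rng.standard_normal((32, 32)) for _ in range(30)]
same = 120 * (1 + prnu) + rng.standard_normal((32, 32))    # same "camera"
other = 120.0 + rng.standard_normal((32, 32))              # different source
fp = fingerprint(refs)
cc_same, cc_other = correlate(fp, same), correlate(fp, other)
```

On this toy data, the image sharing the fingerprint correlates strongly with the reference pattern while the unrelated image correlates near zero, mirroring the decision logic described in the measurement models above.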


CHAPTER III
ARTIFACTS

Regardless of the model algorithm used to extract PRNU and compare it to a subject image, there are always artifacts to consider. These artifacts can provide false readouts and incorrect correlation coefficients. These objects found in the PRNU pattern could be due to multiple factors, such as the color interpolation process, excess background noise, artifacts due to heating of the sensor and quick exposure time, additional shaping noise, defective pixels, alterations or edits and heavy saturation. Some other artifacts include those specific to select cameras. Special care must be given to acknowledge these artifacts to prevent false readings and misinterpretation. Gloe, Pfennig and Kirchner reported a case study from the Dresden Image Database which revealed similar artifacts found in certain camera models. The cameras explored were the Nikon CoolPix S710, the FujiFilm J50 and the Casio EX-Z150. It was found the Nikon CoolPix S710 presented a diagonal line artifact in all its captures. The FujiFilm J50 images exhibited a slight horizontal shift and the Casio EX-Z150 displayed irregular geometric distortions. On one hand, these artifacts can be beneficial in further device identification; however, on the other hand, these artifacts can prove detrimental to a case if the examiner is uncertain or unaware of these model specific pattern issues. One of the major challenges of camera identification by use of PRNU is the suppression of non-unique artifacts. These are artifacts which are specific to a camera model or make. These artifacts, however, may be very similar to those in a PRNU pattern of a different device altogether. Figures 9-10 show examples of known model artifacts in PRNU [18].


Figure 9 Nikon S710 diagonal artifact

Figure 10 Casio p maps with artifacts at various focal lengths in lens distortion

Defective Pixels

As already established, every sensor is imperfect from the start, straight from the manufacturer. This is inevitable. These defects, among others, display across the array of the PRNU fingerprint in different ways. One display of imperfection might show as defective or dead pixels. Dead pixels are any pixel with intensity below a specified percentage of mean. Defective pixels are any pixel that deviates from the mean light field intensity by more than a specified percentage of mean. These defective and dead pixels are those in the sensor array itself. Typically, CMOS sensors have an on-chip algorithm to correct defective or dead pixels, or this could be done through the


camera's post-processing. If there are limited defectives, this can prove fruitful in the PRNU pattern analysis [19]. Although uncorrected defective pixels aid in camera identification the best, corrected defective pixels can still give insight as to their original state even post processing. This is done by analyzing the pixel values individually. A corrected pixel, when captured with uniform illumination, will appear either much lower or much higher in value than its surrounding pixel neighbors. This is a good indication this was once a defective pixel which has been corrected during post processing, and it can still aid in camera identification [20]. To test or check the pixel values, a function such as MATLAB's impixel can be used. Such code can provide the pixel values of RGB images, grayscale images and binary images. This aids in the comparison of pixel values to neighboring pixels, which provides an indication of defective pixels within a PRNU fingerprint [21]. Defective pixels are only one artifact which affects the PRNU fingerprint.

Compression

There are many artifacts which can affect the PRNU readout. Another artifact is compression. There are two main types of compression: lossy and lossless compression. Lossy compression refers to data compression, or shrinkage in size, in which information is lost, but it is unnecessary information. The data is still mostly intact. Lossless compression is shrinkage without any data or information loss. All the data is still intact. An example of this is when a raw image file is compressed to a portable network graphics (PNG) file. A PNG compression is an example of lossless compression. An example of lossy compression is when a raw image file is compressed to a joint photographic experts group (JPG) file. The image is still intact, but some information is lost and the file may appear a bit grainy or pixelated. In


the lossy scheme, a JPEG converts the color in images into a suitable color space and processes these color components independently from one another. Compression is performed in three basic steps. The first step is called the discrete cosine transform (DCT). In this technique, an image is divided into 8x8 or 16x16 non-overlapping blocks. From there, each block is shifted from an unsigned integer to a signed integer. The DCT transformation occurs and the signal is converted into elementary frequency components. The visually significant information of the image is concentrated in a few coefficients of the DCT [22]. The second step is quantization. This step is defined as a division of each DCT coefficient by the corresponding quantizer step size, followed by rounding to the nearest integer. It is in this step where the most information is lost. The third and final step is entropy coding. The DCT quantized coefficients are lossless coded and written into a bitstream. It is here where Huffman tables are formed. These tables come from the Huffman algorithm and are viewed as a variable length code table, which presents a source symbol for the lossy compression [23]. Lossy compression offers the higher compression rate, of about 10x, with some information loss. Lossless compression is used to compress raw images into smaller images without information or data loss (i.e. PNG). This compression is at a rate of about 2x [24]. After analyzing how an image is compressed, we must explore how this affects the PRNU pattern of an image. When an image is compressed, specifically through lossy compression, it creates the block artifact commonly seen in JPEG images. Although the PRNU fingerprint is robust and can survive certain levels of compression, heavy lossy compression compromises the PRNU pattern and causes an impairment in camera identification.
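The three compression steps described above can be illustrated for a single 8x8 block. In the sketch below, the quantization table is the widely published example luminance table from the JPEG standard, the input block is an arbitrary stand-in gradient, and entropy coding is omitted:

```python
import numpy as np

# Example luminance quantization table from the JPEG standard (Annex K).
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

# Orthonormal 8-point DCT-II basis matrix.
n = 8
D = np.array([[np.cos((2 * x + 1) * u * np.pi / (2 * n)) for x in range(n)]
              for u in range(n)])
D[0] *= np.sqrt(1 / n)
D[1:] *= np.sqrt(2 / n)

block = np.arange(64, dtype=float).reshape(8, 8)   # stand-in 8-bit pixel block
shifted = block - 128                              # unsigned -> signed shift
coeffs = D @ shifted @ D.T                         # step 1: 2-D DCT
quantized = np.round(coeffs / Q)                   # step 2: most info lost here
restored = D.T @ (quantized * Q) @ D + 128         # decoder's approximation
```

Because rounding discards the fractional part of every scaled coefficient, `restored` only approximates `block`; repeating the cycle with a different table (as photo editors often do) compounds the loss, which is exactly the recompression concern discussed next.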
There is also an issue of recompression. This is the image processing operation of decompressing an image, possibly


changing the uncompressed image, and then compressing the image again. If using the same quantization tables as the initial compression, further degradation will not occur. However, if using different quantization tables from the first compression, additional degradation will occur and the PRNU fingerprint may be lost. Photo editing programs will often introduce different quantization tables. This will alter the PRNU pattern [25]. So to summarize, lossless compression does not alter the PRNU fingerprint enough to cause much error in camera identification; however, lossy compression and alterations through editing software programs will degrade the pattern and may prevent a positive identification.

Alterations

The effects of compression on the PRNU fingerprint have already been discussed. Studies have also been conducted about editing effects on PRNU, specifically pertaining to image forgeries and the ability to still identify a camera source through such alterations. Video and analog cameras often have distinctive scratches on the film and negatives. This makes it easy to identify the origin of an image even with alterations and overexposure; however, in the age of digital photography and software editing, identification has become more challenging. There are many different filters used in software editing programs. These filters are used to attack pixel values by assigning them new values based on neighboring pixels. When a filter is used on a color image, the three values (red, blue, green) are determined separately. In the study by Bouman, Khanna, and Delp, five filters were tested on an original image, then compared to an average to test for the PRNU pattern and the effects of each filter on the image. The filters used were blurring, weighted blurring, histogram equalization, sharpening and pseudo random noise. It is said that the closer a correlation value is to 1,


the stronger the match to the noise pattern and the more positive a camera identification can be made. From this study, it was found that all the source cameras could be matched to the subject images, regardless of filter or editing process; however, some filters degrade the pattern more than others. Of the five filters tested, the order from most damaging to least damaging to the PRNU was histogram equalization, blurring, pseudo random noise, sharpening and weighted blurring. This was just one notable test of filter effects on PRNU within the scientific community [26].

Saturation

Much like defective pixels, compression and editing, image saturation also affects the PRNU fingerprint. First, we must differentiate between certain terms. The term luminance refers to the intensity of light emitted from a surface in a given direction. The term saturation refers to the state or process which contains the maximum amount of chroma, or purity. It is of the highest intensity of hue and free of admixture of white. There have been few studies of the effects of highly saturated scenes on the PRNU fingerprint. Before we can compare low color versus high color saturated photos against reference images from their camera source, we must explore dynamic range. The dynamic range is the number of exposure stops between the lightest white and the darkest black in a digital camera. It is tricky to determine the darkest useable black in a digital camera as the darker tones produce more noise pattern. In the lighter areas of a scene, there will be less noise pattern visible [27]. So we have established illumination is necessary for a good readout of PRNU; however, a scene that is too bright or too dark will disguise the pattern, making it more difficult to read and compare to the reference. This leads to the saturation experiment.


CHAPTER IV

FRAMEWORK FOR MEASURING SATURATION EFFECTS ON PRNU

In this study, I looked at ten different camera models, shown in Table 1. The table shows camera make and model, type of sensor, image resolution, ISO and image format. The lower the ISO number, the less random noise exists and the easier the pattern noise will be to detect [28]. I took 30 to 40 flat field photos and averaged them for one solid reference PRNU fingerprint per camera model. From there, I analyzed ten low color saturated and ten high color saturated image captures per camera to determine exactly if and how saturation affects the PRNU fingerprint. I calculated the correlation coefficient figures and examined which camera model best preserved the pattern fingerprint in highly saturated environments. I notated the camera make and model, the settings and the image capture conditions. First, I took 30 to 40 flat field photos of a solid neutral color and applied the MATLAB code found in Figure 11 [29]. I then took ten low color photos with each camera. From there, I took another ten high color photos with each camera, some outside with natural light and some inside with artificial light. Once the reference photos were averaged for one solid residual pattern and the subject photos were taken, I compared them to each other to determine correlation coefficient figures in each environment. After those numbers were calculated, I was able to determine if and how saturation affects the PRNU readout.


Table 1. Devices and Specifications

DEVICE                      SENSOR                      RESOLUTION   ISO   FORMAT
Samsung Galaxy S3           CMOS                        3264x2448    80    JPEG
LG G4                       CMOS                        5312x2988    50    JPEG
Nokia Lumia 635             No light/proximity sensor   2592x1456    100   JPEG
LG G3 Vigor                 CMOS                        3264x2448    100   JPEG
Slate 8 Tablet              CMOS                        2560x1920    50    JPEG
Alcatel One Touch Tablet    No light/proximity sensor   2560x1920    100   JPEG
GoPro Hero3                 CMOS                        2592x1944    100   JPEG
Kodak EasyShare V1003       CCD                         3648x2736    80    JPEG
Canon Powershot G2          CCD                         2272x1704    100   JPEG
Motorola Nexus6             CMOS                        4160x3120    40    JPEG

% read + average all JPGs in a folder
clear all;
% -------------------- Detect files --------------------
dir1 = uigetdir;            % select the JPG folder
cd(dir1);                   % change to dir1
D = dir('*.JPG');           % list the JPG files
[a, b] = size(D);           % a = number of JPG files
M1 = D(1).name;             % first file
M1i = imread(M1);           % read image
I = im2double(M1i);         % convert to double
for k = 2:a
    M2 = D(k).name;
    Mi = imread(M2);
    Mi = im2double(Mi);
    I = I + Mi;             % add new image to the running sum
end
clear M1i
I = I / a;                  % divide by the frame count to obtain the average
imwrite(I, 'average.jpg');  % save the averaged image (output filename chosen here)
disp('Average computed and saved.')

Figure 11. MATLAB code for frame averaging for residual pattern

Camera Studies

For the study, I took ten separate camera devices, calculated an averaged residual photo and compared it to ten low color photos and ten high color photos from the same devices. I then took an average of the correlation coefficient figures from each camera for the low color and the high color photo study. For the purpose of this paper, details


will be given for three of the ten cameras studied. Picked at random, the cameras which will be highlighted here are the Nokia Lumia 635, the LG G3 Vigor and the Kodak EasyShare V1003. The Nokia and the LG are both cellular devices. The Kodak is solely a digital camera. Figures 12 through 14 show samples of low color saturated and high color saturated photos from the Kodak used in the study. Also shown is the Kodak PRNU reference image derived from flat field frame averaging. All of the low color photos taken with each camera were of doors, walls, furniture and anything else of low saturation or very neutral colors. The high color photos were mostly taken outside at high noon daylight, displaying bright blue skies, greenery and flowers. It must be noted these photos were taken in the daylight, but not with direct sunlight, sunbeams or sunbeam reflection. This ensured the proper collection was made to show the deepest colors of the spectrum without heavy luminance or reflection of high sunlight. It should also be noted these photos consisted of many different environments; however, they mostly consisted of heavy blues and greens. As mentioned previously, PRNU patterns are most dominant in red or IR scenes.

Figure 12. Kodak Low Color Image

Figure 13. Kodak High Color Image


Figure 14. Kodak PRNU Reference Image

The images above, and similar images for each camera, were used in this experiment. Using written code in MATLAB, I ran the residual image, or PRNU reference, against the low color photos and then ran the residual image against the high color photos. To explain this further, the code promotes a clipping effect. In this process, a white box is displayed on the test image to show where pixel saturation is being clipped for analysis. It is clipped in a percentage format, as shown in Figures 15 through 18. From that analysis, the correlation can be drawn. When viewing the results, it will be found that some percentages of pixel saturation, or measurement via clipping, stop at 55% as opposed to 70% or higher in other images. This is due to each camera capture presenting different resolutions and dimensions. The clipping process starts as a perfect square of data taken from the top left of the photo at 0% or 1%. This square increases a certain percentage with each level of pixel saturation clipped, until the clipping achieves the largest perfect square based on the size of the image and the increment of the increase. In an ideal experiment, resolution and picture dimensions are exactly the same from one camera model to another; however, this cannot always be achieved. Due to that factor of the experiment, the actual percentage of pixel saturation varies.
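The square-growth behavior just described can be sketched to show why different resolutions top out at different final percentages. This is an illustrative reconstruction, not the thesis' actual code; the fixed side increment of 256 pixels is a hypothetical choice.

```python
def clipping_levels(width, height, step=256):
    """Grow a square from the top-left corner in fixed side increments.

    Returns (side, percent_of_image_area) for each level; the largest usable
    square is bounded by the shorter image dimension.
    """
    max_side = min(width, height)          # largest perfect square that fits
    levels = []
    side = step
    while side <= max_side:
        pct = 100.0 * side * side / (width * height)
        levels.append((side, round(pct, 1)))
        side += step
    return levels

# Two resolutions from Table 1 reach different final area percentages:
print(clipping_levels(2592, 1456)[-1])   # final level for the Nokia Lumia 635
print(clipping_levels(3264, 2448)[-1])   # final level for the LG G3 Vigor
```

Because the largest square is capped by the shorter dimension, a wide-aspect capture such as the Nokia's 2592x1456 stops at a lower share of its total area than the LG's 3264x2448, matching the observation that some cameras stop near 55% while others reach 70% or higher.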


Figure 15. Pixel Saturation at 10%

Figure 16. Pixel Saturation at 34%

Figure 17. Pixel Saturation at 50%

Figure 18. Pixel Saturation at 70%

It should be noted this will not affect the final results in comparing correlation between one camera model and another and between low color and high color saturated photos. At each level of pixel saturation, a correlation coefficient between -1 and 1 is assigned to determine the linear relationship between subject images and the PRNU reference. These numbers signify how strong the match is between a subject image and a camera possibly used to capture that image. Tables 2 through 4 and Figures 19 through 21 show the Nokia, LG G3 and Kodak camera correlations to their own PRNU in both low color and high color saturated environments. It should be noted the correlation figures in these tables are of one low color and one high color photo from each camera.
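The correlation coefficient used throughout is the standard Pearson r, which lies between -1 and 1. A minimal pure-Python sketch over flattened residual values (the toy numbers below are illustrative, not the thesis data; the thesis' own implementation is in MATLAB):

```python
from math import sqrt

def pearson_r(x, y):
    """Linear (Pearson) correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

reference = [0.2, -0.1, 0.4, 0.0, -0.3]       # toy PRNU reference residual
residual  = [0.18, -0.12, 0.35, 0.02, -0.28]  # toy subject-image residual
print(round(pearson_r(reference, residual), 3))  # 0.997 -- close to 1: strong match
```

A value near 1 indicates the subject residual rises and falls with the reference pattern, which is the basis for declaring a camera match.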


Table 2. Nokia Lumia 635 Low vs. High Color Saturation

Percentage of Pixel Saturation   Low Color Saturation   High Color Saturation
0%                               0.30851                0.025203
1%                               0.30624                0.024962
3%                               0.30462                0.024815
5%                               0.30269                0.024525
6%                               0.29985                0.024159
8%                               0.29708                0.023876
10%                              0.29382                0.023423
12%                              0.28972                0.022952
15%                              0.28583                0.02268
17%                              0.28048                0.021974
20%                              0.27545                0.022316
24%                              0.26923                0.021258
27%                              0.26366                0.021192
31%                              0.25609                0.020398
35%                              0.24897                0.019251
40%                              0.2405                 0.018241
45%                              0.23138                0.015694
50%                              0.22134                0.01467
55%                              0.21073                0.014389

Figure 19. Nokia Lumia 635 Plot of Low Color vs. High Color Saturation (correlation coefficient vs. percentage of pixel saturation)


Table 3. LG G3 Vigor Low vs. High Color Saturation

Percentage of Pixel Saturation   Low Color Saturation   High Color Saturation
0%                               0.0010026              0.00039006
1%                               0.00098032             0.00039935
2%                               0.00089497             0.00037365
4%                               0.00093415             0.00054247
5%                               0.0010093              0.00050572
8%                               0.0010536              0.00050318
10%                              0.0011603              0.00056076
13%                              0.0010981              0.00057124
16%                              0.0010595              0.00053795
20%                              0.0010098              0.00054959
24%                              0.0010974              0.00073298
28%                              0.0012627              0.00051283
33%                              0.0011283              0.00042726
38%                              0.00089573             0.00045035
44%                              0.0011883              0.00036856
50%                              0.0015186              0.0004383
56%                              0.0018156              0.00037294
63%                              0.0011286              0.00010892
70%                              0.0011585              0.00054099
77%                              0.0011454              0.00013703

Figure 20. LG G3 Plot of Low Color vs. High Color Saturation (correlation coefficient vs. percentage of pixel saturation)


Table 4. Kodak EasyShare V1003 Low vs. High Color Saturation

Percentage of Pixel Saturation   Low Color Saturation   High Color Saturation
0%                               0.21285                0.085872
1%                               0.21016                0.084553
2%                               0.20763                0.084215
4%                               0.20472                0.083154
5%                               0.20148                0.081153
8%                               0.198                  0.079486
10%                              0.19403                0.077688
13%                              0.19                   0.0759
16%                              0.18494                0.074382
20%                              0.18004                0.072829
24%                              0.17472                0.069823
29%                              0.16966                0.06686
33%                              0.1632                 0.062687
39%                              0.15664                0.061329
44%                              0.1494                 0.058998
50%                              0.14232                0.058393
57%                              0.13366                0.056248
63%                              0.12448                0.053903
70%                              0.1126                 0.0515

Figure 21. Kodak EasyShare Plot of Low Color vs. High Color Saturation (correlation coefficient vs. percentage of pixel saturation)


Based on the correlation figures from the Nokia Lumia 635, it can be said that the low color saturated photo has a weak positive relationship through a shaky linear rule, as the numbers range between 0.2 and 0.3. Although the high color saturated photo shows lower correlation, it falls into the same category: weak positive. The correlation figures for the LG G3 show a large difference between low color and high color environments. These differences are great enough to fall into different categories. The low color photo, with a range of 0.001 and 0.0011, falls under a weak positive, while the high color photo, with a range of -0.0003 and -0.0001, falls under a weak negative correlation. The correlation figures for the Kodak EasyShare show results similar to those of the Nokia camera. The low color photo produced better results than the high color photo; however, all correlation coefficients are between 0 and 0.3, which indicates a weak positive linear relationship to the PRNU pattern for that camera.

Final Results

After reviewing all the relationships between subject images and PRNU patterns, I compared the correlation coefficient figures to make a determination between the different camera models. Tables 5 and 6 show the breakdown of each camera and the results of the low color and high color saturation photos. Table 5 shows the lowest correlation and the highest correlation of all ten low color saturation photos taken for each camera model. Table 6 displays the lowest and highest correlation of all ten high color saturation photos taken for each camera model.
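The descriptive categories used above (weak positive, weak negative, moderate positive) can be made explicit. The thresholds below, |r| below 0.3 as weak and below 0.7 as moderate, follow a common convention and are my assumption, not figures stated in the thesis:

```python
def describe_correlation(r):
    """Map a Pearson coefficient to a descriptive strength/sign category.

    Thresholds (0.3, 0.7) are a conventional, assumed banding.
    """
    strength = "weak" if abs(r) < 0.3 else "moderate" if abs(r) < 0.7 else "strong"
    sign = "positive" if r >= 0 else "negative"
    return f"{strength} {sign}"

print(describe_correlation(0.22134))   # Nokia low color at 50% clipping
print(describe_correlation(-0.0002))   # LG G3 high color range
```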


Table 5. Low Color Saturation Correlation Coefficients per Camera Model

Device             Low Color Saturation (Lowest CC)   Low Color Saturation (Highest CC)
Samsung S3         0.039618                           0.095084
LG G4              0.00194                            0.10441
Nokia Lumia 635    0.012831                           0.30851
LG G3 Vigor        0.000211                           0.0010026
Slate 8 Tablet     0.13728                            0.38318
Alcatel Tablet     0.012415                           0.10434
GoPro Hero3        0.039513                           0.16715
Kodak EasyShare    0.073982                           0.21216
Canon Powershot    0.0063403                          0.026674
Motorola Nexus6    0.088532                           0.16229

Table 6. High Color Saturation Correlation Coefficients per Camera Model

Device             High Color Saturation (Lowest CC)   High Color Saturation (Highest CC)
Samsung S3         0.02052                             0.038594
LG G4              0.023825                            0.055266
Nokia Lumia 635    0.014389                            0.049567
LG G3 Vigor        0.000162                            0.0030136
Slate 8 Tablet     0.092558                            0.32575
Alcatel Tablet     0.032627                            0.074759
GoPro Hero3        0.018619                            0.055404
Kodak EasyShare    0.050754                            0.17164
Canon Powershot    0.0049013                           0.029744
Motorola Nexus6    0.03563                             0.16579

It was found that the Nokia Lumia 635, the Alcatel Tablet, the Slate 8 Tablet, the GoPro Hero3 and the Nexus6 produced much better results when the PRNU pattern was compared to a low color image versus a high color image. High color saturation clearly made a difference in camera identification with those particular devices. It was also found, in most models tested, that the higher the pixel saturation level, the closer in numbers the low color and high color correlations were when compared to each other. Based on the correlation of each photo, whether low color or high color saturation, it can be determined that all figures fall between


0 and 0.3+, which indicates either a weak positive, weak negative or, in some cases, moderate positive correlation. Furthermore, it was found the ISO level and whether photos were taken indoors or outdoors were of little importance; however, the sensor type did play a significant role in the study. Two of the ten cameras contained CCD sensors: the Kodak and the Canon Powershot. The PRNU fingerprints of these cameras were not grossly affected by high color saturation compared to the CMOS cameras researched. Figures 22 through 28 further illustrate low color vs. high color saturation correlation coefficients in the remaining seven devices.

Figure 22. Samsung Galaxy S3 Plot of Low Color vs. High Color Saturation (correlation coefficient vs. percentage of pixel saturation)


Figure 23. LG G4 Plot of Low Color vs. High Color Saturation (correlation coefficient vs. percentage of pixel saturation)

Figure 24. Slate 8 Tablet Plot of Low Color vs. High Color Saturation (correlation coefficient vs. percentage of pixel saturation)


Figure 25. Alcatel Tablet Plot of Low Color vs. High Color Saturation (correlation coefficient vs. percentage of pixel saturation)

Figure 26. GoPro Hero3 Plot of Low Color vs. High Color Saturation (correlation coefficient vs. percentage of pixel saturation)


Figure 27. Canon Powershot G2 Plot of Low Color vs. High Color Saturation (correlation coefficient vs. percentage of pixel saturation)

Figure 28. Nexus6 Plot of Low Color vs. High Color Saturation (correlation coefficient vs. percentage of pixel saturation)
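The summary step behind Tables 5 and 6, taking the lowest and highest correlation coefficient over each device's ten subject photos, can be sketched as follows (the device names and per-photo values here are toy placeholders, not the thesis data):

```python
def summarize_extremes(cc_by_device):
    """Return (lowest, highest) correlation coefficient per device."""
    return {dev: (min(vals), max(vals)) for dev, vals in cc_by_device.items()}

# Hypothetical per-photo coefficients for two devices:
low_color = {
    "Camera A": [0.11, 0.28, 0.19],
    "Camera B": [0.02, 0.05, 0.04],
}
print(summarize_extremes(low_color))  # {'Camera A': (0.11, 0.28), 'Camera B': (0.02, 0.05)}
```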


CHAPTER V

CONCLUSION

Through this study, it has been noted how saturation affects pattern to subject image correlation. This study was conducted to determine if saturation affects PRNU pattern readout and, if so, whether it could be considered a form of anti-forensics. After the research concluded, it was determined high color saturated images do affect the readout, especially in certain tablets and camera models; however, the effect is not enough to create a false positive, nor does it totally prevent positive camera identification. It does, however, disguise the fingerprint enough to create lower positives in correlation figures. High color saturation does not affect the PRNU fingerprint enough for anti-forensics to occur, but these results will aid researchers and examiners to be mindful of the environment of an image capture. Factors which should be considered when explaining camera identification in relation to the PRNU fingerprint are color saturation levels, camera models and their sensors, whether the scene of a capture is high in infrared or red hues, how the flat field reference images were captured and how frame averaging was conducted. If an examiner understands all the factors which cause weak positives, it will allow him to further explain the linear relationships in his report. Based on the findings of this experiment and the research conducted, it can be said the scientific community could benefit from additional research surrounding frame averaging with respect to exposure time of reference images and testing the PRNU fingerprint against different camera battery levels and environmental temperatures. There could be more analysis on dark signal non-uniformity (DSNU) regarding the thermal component, which depends on the


temperature and exposure times of capture. There could be more emphasis on decomposed PRNU (DPRNU) experiments and how well the PRNU fingerprint holds up when the artificial component is separated from the physical component, to allow PRNU collection without the interference of interpolation noise. These are just a few areas which could prove very valuable to the community for further authentication purposes.


REFERENCES

[1] Nakamura, Junichi. Image Sensors and Signal Processing for Digital Still Cameras. Boca Raton, FL: Taylor & Francis, 2005. Web. 29 Feb. 2016.

[2] Ali, Tanweer. Protecting Your Visual Identity in the Digital Age. Amsterdam: Innovation Labs, 2015. Web. 29 Feb. 2016.

[3] CCD vs. CMOS. Waterloo, Ontario: Teledyne DALSA, 2016. Web. 29 Feb. 2016.

[4] Esser, Felix. An Introduction to and Brief History of Digital Imaging Sensor Technologies. The Phoblographer, 2013. Web. 29 Feb. 2016.

[5] Knight, Simon, Simon Moschou, and Matthew Sorell. Analysis of Sensor Photo Response Non-Uniformity in RAW Images. Adelaide, Australia: ICST Institute for Computer Sciences, 2009. Web. 29 Feb. 2016.

[6] Mukherjee, Rajib. CCD vs. CMOS: A Comparative Analysis of the Two Most Popular Digital Sensor Technologies. Udemy Blog, Udemy, Inc., 2016. Web. 29 Feb. 2016. <https://blog.udemy.com/ccd-vs-cmos/>.

[7] Darden, Jamel. Lecture 4: Introduction to CCDs. SlidePlayer, 2006. Web. 29 Feb. 2016.

[8] Calizo, Irene G. Reset Noise in CMOS Image Sensors. San Jose, CA: San Jose State University, 2005. SJSU ScholarWorks. Web. 29 Feb. 2016.

[9] Sensor Noise. United Kingdom: Stemmer Imaging, 2016. Web. 29 Feb. 2016.


[10] Hilgarth, Alexander. Noise in Image Sensors. Berlin: Technische University Berlin, 2016. Web. 29 Feb. 2016.

[11] Muammar, Hani. Source Camera Identification Using Image Sensor PRNU Pattern. London: Imperial College Communications and Signal Processing Research Group, 2014. Web. 29 Feb. 2016.

[12] Burke, Michael W. Image Acquisition. London: Chapman & Hall, 1996. Web. 29 Feb. 2016.

[13] Fridrich, Jessica. Digital Image Forensics Using Sensor Noise. Web. 29 Feb. 2016.

[14] Ratner, Bruce, Ph.D. The Correlation Coefficient: Definition. North Woodmere, NY: DM STAT-1 Consulting, 2016. Web. 29 Feb. 2016.

[15] Theuwissen, Albert. How to Measure: Fixed Pattern Noise in Light or PRNU (1). Belgium: Harvest Imaging, 2012. Web. 29 Feb. 2016.

[16] Li, Chang-Tsun, and Yue Li. Colour Decoupled Photo Response Non-Uniformity for Digital Image Forensics. IEEE, 2012. The Warwick Research Archive Portal (WRAP). Web. 29 Feb. 2016.


[17] Hu, Yongjian, Binghua Yu, and Chao Jian. Source Camera Identification Using Large Components of Sensor Pattern Noise. IEEE Xplore, 2010. ResearchGate. Web. 29 Feb. 2016.

[18] Gloe, Thomas, Stefan Pfennig, and Matthias Kirchner. Unexpected Artifacts in PRNU-Based Camera Identification: A 'Dresden Image Database' Case Study. ACM Multimedia and Security Workshop, Coventry, UK, 2012. Dartmouth College: Digital Forensic Database. Web. 29 Feb. 2016.

[19] ISL 3200 Advanced Analysis Guide: Digital Imaging Sensor Interface and Test Solution. San Francisco, CA: Jova Solutions, 2009. Web. 1 Mar. 2016.

[20] Van Houten, Wiger, and Zeno Geradts. Using Sensor Noise to Identify Low Resolution Compressed Videos from YouTube. Netherlands: Netherlands Forensic Institute, Digital Technology & Biometrics. Web. 1 Mar. 2016.

[21] Impixel. MathWorks Documentation, MathWorks, 2006. Web. 1 Mar. 2016.

[22] Discrete Cosine Transform. MathWorks Documentation, MathWorks, 2006. Web. 1 Mar. 2016.

[23] REWIND: Reverse Engineering of Audio-Visual Content Data. State of the Art on Multimedia Footprint Detection. Imperial, 2011. Web. 1 Mar. 2016.

[24] Jerian, Martino. AMPED Authenticate: Effective Photo Forensics. Trieste, Italy: AMPED SRL. Web. 1 Mar. 2016.


[25] Rosenfeld, Kurt, Taha Sencar, and Nasir Memon. A Study of the Robustness of PRNU-Based Camera Identification. Web. 3 Mar. 2016.

[26] Bouman, K. L., N. Khanna, and E. J. Delp. Digital Image Forensics Through the Use of Noise Reference Patterns. West Lafayette, IN: Purdue University, School of Electrical and Computer Engineering. Web. 3 Mar. 2016.

[27] Edberg, Timothy. Measuring Digital Dynamic Range. Phototech Magazine, 2007. Web. 3 Mar. 2016.

[28] Martinec, Emil. Noise, Dynamic Range and Bit Depth in Digital SLRs. Chicago, IL: University of Chicago, 2008. Web. 3 Mar. 2016.

[29] Grigoras, Catalin, Ph.D. MATLAB for Forensic Video and Image Analysis. Denver, CO: University of Colorado Denver, National Center for Media Forensics, 2008. Web. 1 Jan. 2016.