Citation
Proposed framework for digital video authentication

Material Information

Title:
Proposed framework for digital video authentication
Creator:
Wales, Gregory Scott
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
Language:
English

Thesis/Dissertation Information

Degree:
Master's (Master of Science)
Degree Grantor:
University of Colorado Denver
Degree Divisions:
Department of Music and Entertainment Industry Studies, CU Denver
Degree Disciplines:
Recording arts
Committee Chair:
Grigoras, Catalin
Committee Members:
Smith, Jeffrey M.
Rogers, Marcus

Notes

Abstract:
One simply has to look at news outlets or social media to see that our society records video of events from births to deaths and everything in between. A trial court’s acceptance of videos supporting administrative hearings, civil litigation, and criminal cases rests on a foundation that the videos offered into evidence are authentic; however, technological advancements in video editing capabilities provide an easy method to edit digital videos. The proposed framework offers a structured approach to evaluating and incorporating methods, existing and new, that come from scientific research and publication. The thesis offers a quick overview of the digital video file creation chain (including factors that influence the final file), a general description of the digital video file container, and a description of camera sensor noises. The thesis addresses the overall development and proposed use of the framework, previous research on analysis methods and techniques, testing of those methods and techniques, and an overview of the testing results. The framework provides the forensic examiner a structured approach to subjecting the questioned video file to a series of smaller tests using previously published methods and techniques recognized by the forensic community. The proposed framework also includes a workflow optimization option for use by management in an effort to manage resources and personnel.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
Copyright Gregory Scott Wales. Permission granted to University of Colorado Denver to digitize and display this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.

Downloads

This item has the following downloads:


Full Text
PROPOSED FRAMEWORK FOR DIGITAL VIDEO AUTHENTICATION
by
GREGORY SCOTT WALES A.S., Community College of the Air Force, 1990 B.S., Champlain College, 2012 M.S., Champlain College, 2015
A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Master of Science Recording Arts
2019


©2019
GREGORY SCOTT WALES ALL RIGHTS RESERVED


This thesis for the Master of Science degree by Gregory Scott Wales has been approved for the Recording Arts Program by
Catalin Grigoras, Chair Jeffrey M. Smith Marcus Rogers
Date: May 18, 2019


Wales, Gregory Scott (M.S., Recording Arts Program)
Proposed Framework for Digital Video Authentication Thesis directed by Associate Professor Catalin Grigoras.
ABSTRACT
One simply has to look at news outlets or social media to see that our society records video of events from births to deaths and everything in between. A trial court’s acceptance of videos supporting administrative hearings, civil litigation, and criminal cases rests on a foundation that the videos offered into evidence are authentic; however, technological advancements in video editing capabilities provide an easy method to edit digital videos. The proposed framework offers a structured approach to evaluating and incorporating methods, existing and new, that come from scientific research and publication. The thesis offers a quick overview of the digital video file creation chain (including factors that influence the final file), a general description of the digital video file container, and a description of camera sensor noises. The thesis addresses the overall development and proposed use of the framework, previous research on analysis methods and techniques, testing of those methods and techniques, and an overview of the testing results. The framework provides the forensic examiner a structured approach to subjecting the questioned video file to a series of smaller tests using previously published methods and techniques recognized by the forensic community. The proposed framework also includes a workflow optimization option for use by management in an effort to manage resources and personnel.
The form and content of this abstract are approved. I recommend its publication.
Approved: Catalin Grigoras


DEDICATION
I dedicate this paper to God and family. I thank God for guidance, strength, power of mind, making me a lifelong learner, and for giving me so many blessings that frequently come out of adversity.
I also dedicate this paper to my wife Maricon and my son Scott. My wife has encouraged me for over 36 years of marriage with her firm belief in the value of education. Thank you for giving me the strength when I thought of giving up during times of difficulty, and continually providing moral, spiritual, and emotional support at all times. My son challenged me to be a positive role model who responds to hardships and misfortune by using these things as challenges to improve myself.


ACKNOWLEDGEMENTS
I would like to express my appreciation to Dr. Catalin Grigoras and Jeff Smith for their continued support in my study of multimedia forensics and their continued encouragement to contribute to our scientific community. I also want to express my gratitude to both for their continued support and efforts to help law enforcement when we need help in our investigations that involve multimedia. I wish to thank Dr. Marcus Rogers, the other member of my thesis committee, for challenging me in this project and especially during my thesis defense.
I want to express my gratitude to an unsung hero in my pursuit of the degree at UC Denver and especially completing my thesis research paper, Leah Haloin. You keep us on-track with our milestones and deliverables, but you also go the extra mile in your editing role to help us produce papers that meet the school’s standards. You obviously care about the graduate students.
I also want to thank Cole Whitecotton for your IT support when I visit the National Center for Media Forensics (NCMF). And I want to thank my cohort classmates and the other cohorts who provided invaluable assistance and contributions to this paper.
I want to thank Steven Futrowsky for your feedback on Chapters I and II. I also want to thank Gregory Scott Wales Jr. for your feedback on Chapter I. My thanks to Jesus Valenzuela and Gretchel Lomboy, Forensic Digital Imaging Section, Seattle Police Department, for providing a couple of videos for a case study included in this paper. Thank you to previous NCMF researchers and other researchers in the forensic community whose scientific research I used to contribute to my paper.


TABLE OF CONTENTS
CHAPTER
I. INTRODUCTION ........................................................... 1
Public Perception ................................................ 2
Forensic Science ................................................. 2
Forensic Video Enhancement or Manipulation ....................... 4
Video Manipulation Example ................................... 4
Forensic Audio Enhancement or Manipulation ....................... 6
Challenges ....................................................... 6
Challenge From Artificial Intelligence ....................... 7
Challenge From Different Types of Video Recorders ....... 9
Challenge From Mobile Devices ................................ 9
Challenge From Advances In Technology & New CODECs ...... 10
Scope ........................................................... 11
II. LEGAL CONSIDERATIONS OF AUTHENTICATION ................................ 14
Legal Aspects For Use In Proposed Framework ................. 14
Scientific Method .......................................... 15
Criteria Used To Assess Techniques ......................... 15
Expert Testimony Approach Using Proposed Framework .............. 18
III. LIGHT TO DIGITAL VIDEO FILE ........................................... 20
Video Creation Chain ............................................ 20


Audio Stream Creation Chain ............................... 21
Video Stream Creation Chain ............................... 21
Influences on Audio Stream During Creation ................ 21
Influences on Video Stream During Creation ................ 22
Combined Audio & Video Creation Chain With Influences ......... 23
Digital Multimedia File ............................................ 23
Sensor Noises ...................................................... 24
CMOS Sensor ................................................... 24
CCD Sensor .................................................... 26
Overview of Sensor Noise Types ................................ 27
IV. PROPOSED FRAMEWORK ................................................... 30
File Structure Analysis ............................................ 30
Workflow Optimization .............................................. 31
File Preparation For Audio & Video Stream Analysis ................. 32
Audio Authentication ............................................... 32
Video Authentication ............................................... 32
Device Identification / Verification ............................... 32
V. FRAMEWORK JUSTIFICATION - RESEARCH, TESTING, & RESULTS ............ 34
Framework Development & Use ........................................ 34


Analysis Questions ....................................... 34
Digital Multimedia File .................................. 35
Tools For Video Authentication Toolbox ................... 39
Research For Analysis Tools .................................. 40
File Structure Analysis Techniques ....................... 41
Audio Stream(s) & Video Stream(s) Bifurcated Approach . 44
Audio Authentication Analysis Tools ...................... 44
Video Authentication Analysis Tools ...................... 45
Testing of Methods & Proposed Framework ...................... 57
Adding New Method To Video Authentication Toolbox - Case Study 1 ...... 57
Clone Alteration Test Videos - Case Study 2 ........... 58
Axon Fleet 2 Camera Video - Case Study 3 ........... 64
Proposed Framework Overall Test Results ...................... 70
VI. CONCLUSION .......................................................... 71
REFERENCES ............................................................... 72
APPENDIX
A. ADDITIONAL LEGAL INFORMATION ................................... 79
B. SCIENTIFIC METHOD .............................................. 93
C. DIGITAL VIDEO AUTHENTICATION FRAMEWORK WORKFLOW ANALYSIS FORM ...... 94
D. CASE STUDY 1 ................................. 98
E. CASE STUDY 2 ................................. 134
F. CASE STUDY 3 ................................. 163


LIST OF TABLES
TABLE
1 CMOS Sensor Noise Types ....................................................... 25
2 CCD Sensor Noise Types ........................................................ 26
3 File Structure Analysis Authentication Method / Technique Relevance ........... 43
4 Audio Stream Authentication Methods / Techniques Relevance .................... 44
5 Clone / Duplicate Frame Detection Analysis Authentication Method
/ Technique Relevance ......................................................... 46
6 Directional Lighting Inconsistency Detection Analysis Authentication
Method / Technique Relevance .............................................. 47
7 Local (Spatio-Temporal) Tampering Detection Analysis Authentication
Method / Technique Relevance .............................................. 47
8 Interlaced & Deinterlaced Video Inconsistency Detection Analysis
Authentication Method / Technique Relevance ............................... 49
9 MPEG Double Compression Detection Analysis Authentication
Method / Technique Relevance .................................................. 50
10 Color Filter Array Analysis Authentication Method / Technique Relevance ... 52
11 Source Video Camera Identification Using PRNU Detection
Method For Authentication Relevance ........................................... 53
12 Source Camera Successful Identification Rates Using Different
Interpolations & Dimensions ................................................... 54


13 Source Camera Identification Rates ............................................. 55
14 G-PRNU & Image Resize For Camera Identification Detection
Method For Authentication Relevance ............................................ 55
15 Block Level Manipulation Detection Method For Authentication
Relevance ...................................................................... 56
16 2D Phase Congruency CC Detection Method For Authentication
Relevance ...................................................................... 56
17 Test Results For Method Validation ............................................ 100
18 Planned Test Scenarios ........................................................ 103
19 Original Audio Stream Hash Listing ............................................ 117
20 Original Video Stream Hash Listing ............................................ 120
21 Validation Test Comparison Of Original Versus Copied Audio Stream
Hashes ........................................................................ 125
22 Validation Test Comparison Of Original Versus Copied Video Stream
Hashes ........................................................................ 130
23 Test Results .................................................................. 134
24 Test Video Tampering Method ................................................... 136
25 Planned Test Scenarios ........................................................ 137


LIST OF FIGURES
FIGURE
1 Selected Frames From “Extreme Crosswind | Airliner Spins 360”
YouTube Video ........................................................ 5
2 Frame 1 of Deepfake YouTube Video ......................................... 8
3 Person 1 Body, Hair, Ears, & Face Combined With Person 2 Face
Used To Create Deepfake Video ........................................ 8
4 Final Video Creation Chain With Influence Factors ........................ 20
5 Example of Digital Multimedia File Using Book Analogy .................... 23
6 General Block Diagram of CMOS Sensor Noise Model ......................... 25
7 General Block Diagram of CCD Sensor Noise Model .......................... 26
8 Proposed Video Authentication Framework .................................. 30
9 Workflow Optimization Based On File Structure Analysis ................... 31
10 Framework Development General Areas of Analysis .......................... 36
11 Case Study 2 Test 1 Test Validation Results .............................. 60
12 Case Study 2 Test 2 Test Validation Results .............................. 61
13 Case Study 2 Test 3 Test Validation Results .............................. 62
14 Case Study 2 Test 4 Test Validation Results .............................. 63
15 Temporal Analysis of Y Plane Revealing Current Frame Versus Preceding
Frame Differences ........................................................ 65


16 Comparison of Frame 598, 599, & 600 Visual Content Using Temporal
Difference Filter ........................................................ 66
17 2D Phase Congruency With Correlation Coefficient of Adjacent Frames ....... 66
18 Temporal Analysis of Y Plane Revealing Current Frame Versus
Preceding Frame Differences .............................................. 67
19 Comparison of Frame 1074 & 1075 Visual Content ............................ 68
20 2D Phase Congruency With Correlation Coefficient of Adjacent Frames ....... 69
21 Diagram of Scientific Method Used In Proposed Framework ................... 93


LIST OF ABBREVIATIONS
Abbreviations Explanations
AAFS American Academy of Forensic Sciences
ADC Analog to Digital Converter
AES Audio Engineering Society
AI Artificial Intelligence
ASTM American Society for Testing and Materials
CC Correlation Coefficient
CCD Charge Coupled Device
CODECs Coder-Decoders
CFA Color Filter Array
CGI Computer-Generated Imagery
CMOS Complementary Metal-Oxide-Semiconductor
CYGM Cyan, Yellow, Green, & Magenta
DAP Digital Audio Processing
DC Direct Current
DCT Discrete Cosine Transform
DFRWS Digital Forensic Research Workshop
DFT Discrete Fourier Transform
DVP Digital Video Processing
ENF Electric Network Frequency


ESI Electronically Stored Information
FPN Fixed Pattern Noise
FRE Federal Rules of Evidence
HEIF High Efficiency Image File
HEVC High Efficiency Video Coding
ICC International Criminal Court
IR Infrared
JFS Journal of Forensic Sciences
JDI Journal of Digital Investigation
JPG / JPEG Joint Photographic Experts Group
MMS Multimedia Message Systems
NCMF National Center for Media Forensics
PCM Pulse-Code Modulation
PRNU Photo Response Non-Uniformity
G-PRNU Green Photo Response Non-Uniformity
RGBE Red Green Blue Emerald
SWGDE Scientific Working Group on Digital Evidence
VFX Visual Effects


CHAPTER I
INTRODUCTION
Our society has become obsessed with recording video of almost everything that happens in our lives. One simply has to look at news outlets or social media to see people video record events from births to deaths and everything in between. News outlets and social media websites encourage society to provide them with videos of newsworthy events for further dissemination through their respective venues.
Terrorist elements in our society record and post videos of violent acts involving beheadings and suicide bombings. Criminals record sexual assaults, child exploitation, kidnapping, aggravated assaults, armed robberies, arsons, and murders on videos. Some criminals seek their “fifteen minutes of fame” by posting the videos or broadcasting them live on the Internet for the world to see.
Witness testimony is frequently less than perfect; however, our society has learned to be better witnesses by using video recordings of a crime to help recall events and better illustrate what they observed. Law enforcement agencies have seen a significant increase over the last few years in the reporting of criminal activity along with a video recording of the crime to support witness testimony. Witnesses have provided videos of almost every type of crime.
Video recordings of contentious events have also increased in non-criminal matters. Videos supporting civil litigation have become in vogue. A party to, or witness in, litigation may have recorded an event that is at the heart of the litigation. Videos may be offered in personal injury, breach of contract, family law, property dispute, and landlord-tenant matters.
A trial court’s acceptance of videos supporting administrative hearings, civil litigation, and criminal cases is based on a foundation that the videos offered into evidence in court are


authentic; however, technological advances in video editing capabilities provide an easy method to edit digital videos.
Public Perception
The average person tends to believe much of what they see in movies and television shows regarding crime scene investigation and the ability of forensics to solve a crime within five minutes or less. Frequently, the science presented to the audience is inaccurate and the time to conduct forensic analysis is unrealistic. Many people believe these things are possible because they see them in video form. What many people do not understand is that videos may be edited or altered to manipulate the content of what the viewer observes, similar to still images that are “photoshopped” to manipulate the viewer’s perspective. The authenticity of a digital video recording offered into evidence can be problematic if the video is offered as an original and someone, judge or jury, must make a decision based on that video without authenticity being proven.
Forensic Science
Forensic science is generally defined as the use of scientific methodology to answer questions of relevance for the legal system while meeting legal requirements for the answers and the evidence to be accepted into a legal system. Our society needs impartial forensic scientists to analyze physical and digital evidence and help solve crimes and other wrongs.
Anyone who has watched crime dramas on television or in the movies understands a key point made by criminalist Richard Saferstein (2007), who noted that physical evidence establishes that a crime has been committed and provides a link between the crime, its victims, and those who perpetrated the crime [1]. Digital forensic researcher Eoghan Casey (2000) expanded the physical evidence concept and advocated that digital evidence also establishes that a crime has


been committed, can provide a link between a crime and its victim, or can provide a link between a crime and the perpetrator [2].
Physical evidence and digital evidence have similarities and differences. Both digital and physical evidence have the same legal requirements for introduction into the U.S. court system. Two of the similarities are that the evidence must be relevant and authentic. Rule 401 of the Federal Rules of Evidence (FRE) (Test for Relevant Evidence) states that evidence is relevant if: (a) it has any tendency to make a fact more or less probable than it would be without the evidence; and (b) the fact is of consequence in determining the action [3]. Additionally, FRE Rule 901 (Authenticating or Identifying Evidence) states that the test for authentication follows a general rule that the party offering the evidence “must produce evidence sufficient to support a finding that the item is what the proponent claims it is” [4].
The differences between digital and physical evidence are based upon the substance or materials that make up the evidence. Physical evidence may take the form of blood, saliva, tissue, firearms, bullets, tool marks, prints, or any other substance that has been collected. Digital evidence presents its own challenge to authentication in that it can be much more volatile and may be constantly changing in content while a single computer or series of computers is powered on. Digital evidence may be:
• Easy to alter without any trace of the alteration.
• Encrypted, encoded, or in a digital format that may not be easy to view or read.
• Easy to duplicate.
• Stored in multiple locations or across multiple storage devices.
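Because digital evidence is both easy to alter and easy to duplicate, forensic practice leans on cryptographic hashing to verify copies and expose changes, an idea the case studies later apply to audio and video stream hashes. The following is a minimal sketch in Python, using hypothetical stand-in bytes rather than real evidence:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of a byte string as hex."""
    return hashlib.sha256(data).hexdigest()

original = b"frame-data-0001"   # stand-in for acquired evidence bytes
duplicate = bytes(original)     # a bit-for-bit copy
altered = b"frame-data-0002"    # a one-character change

# A faithful duplicate hashes identically, so copies can be verified...
assert sha256_hex(duplicate) == sha256_hex(original)

# ...while even a tiny alteration yields a completely different digest.
assert sha256_hex(altered) != sha256_hex(original)
```

Matching digests verify a bit-for-bit duplicate; a mismatched digest shows that something changed, though not what, where, or by whom.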


Forensic Video Enhancement or Manipulation
The Scientific Working Group on Digital Evidence (SWGDE) defines video enhancement as “any process intended to improve the visual appearance of video sequences or specific features within video sequences” [5]. Video enhancement is usually associated with the forensic concepts of accuracy and precision, repeatability by the same examiner, and the ability of other equally qualified examiners to reproduce the same results. The average person does not usually conduct forensic video enhancement beyond improving the visual appearance of a video. These enhancements may take the form of rotation, stabilization to counter the videographer’s shaking hand, sharpening, adjusting lighting, adjusting contrast, and other techniques. These changes alter the video but are not necessarily made with the intent to influence the viewer’s perceptions.
Video manipulation, on the other hand, may involve frame deletions and insertions, copy-and-paste within a frame (intra-frame), copy-and-paste to another frame (inter-frame), and content-aware editing (removing objects and/or people), with the intent to change the meaning of events or manipulate the viewer’s perspective. Additionally, not all video frame deletions have negative implications. Video montages summarizing a trial, Congressional hearing, or other lengthy event, presented by news outlets with the intent to present the viewer with an unbiased summary of key events, are not necessarily manipulative.
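Inter-frame manipulation such as frame deletion tends to leave a temporal discontinuity: the change between adjacent frames spikes at the splice point. The temporal-analysis figures later in the thesis rest on this idea; the toy Python sketch below illustrates it with synthetic per-pixel luminance lists and an arbitrary threshold (both are assumptions for illustration, not values from the thesis):

```python
def mean_abs_diff(a, b):
    """Mean absolute difference between two equal-length pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def flag_discontinuities(frames, threshold=10.0):
    """Indices of frames whose change from the preceding frame exceeds threshold."""
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) > threshold]

# Synthetic "video": gradually brightening frames with a splice where frames were cut.
frames = [[100 + i] * 4 for i in range(5)] + [[150 + i] * 4 for i in range(5)]

print(flag_discontinuities(frames))  # → [5]
```

A spike flags a candidate splice for closer review; on real video the threshold must tolerate legitimate scene changes, which is why such results support rather than replace examiner judgment.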
Video Manipulation Example
Video manipulation is no longer reserved for Hollywood or advanced computer users.
The Internet provides low-cost and open source computer-generated imagery (CGI) and visual effects (VFX) software to create and manipulate video. There are websites with extensive open source CGI and VFX training that will allow the average computer user to create or manipulate


videos to influence viewers. The public has seen a significant increase in “fake” videos that influence people from all corners of our society.
Washington Post technology columnist Geoffrey Fowler (2018) wrote a column titled “I fell for Facebook fake news. Here's why millions of you did, too” [6]. In that column, Fowler discussed a Facebook video he received in his news feed that was linked to YouTube. The YouTube video was titled “Extreme Crosswind | Airliner Spins 360” [7]. As Fowler noted, he and millions of others believed the video of an airliner spinning 360 degrees just prior to landing was real. Fowler researched the video’s origin and discovered it was part real video and part CGI. A Hollywood film director told him that in 2017 the director had created a CGI YouTube video showing an airplane doing a 360-degree spin, and that footage was used in the video Fowler referenced. According to Fowler, miscreants combined the director’s CGI video with real video of a plane landing and then published the result as fake news [6]. The fake news video spread across the Internet and social media, influencing many people to believe that an actual airliner spun 360 degrees as it was landing. Fowler reported in his column that the video had been viewed on Facebook almost 14 million times [6]. Figure 1 below contains selected frames from the YouTube video titled “Extreme Crosswind | Airliner Spins 360.”
Frame 300
Frame 320
Frame 340
Frame 360
Frame 380
Figure 1 - Selected Frames From “Extreme Crosswind | Airliner Spins 360” YouTube Video [7]
This video and the frames noted in Figure 1 are indicative of why videos offered as evidence in civil litigation or criminal prosecution should be authenticated using sound forensic science.


The video manipulation example above illustrates a major effort to influence the viewers’ perception; however, manipulation may simply involve the addition or deletion of an object in a few frames or the cropping of a video to exclude an object or person along one edge of a video.
Forensic Audio Enhancement or Manipulation
Video frequently contains an audio stream that supports the visual contents of a video. SWGDE defines audio enhancement as “processing of recordings for the purpose of increased intelligibility, attenuation of noise, improvement of understanding the recorded material and/or improvement of quality or ease of hearing” [5]. Similar to video enhancement, audio enhancement is usually associated with the forensic concepts of accuracy and precision, repeatability by the same examiner, and the ability of other equally qualified examiners to reproduce the same results. The average person does not usually conduct forensic audio enhancement beyond improving what they hear. The enhancements may involve removing background sounds, clicks, or pops, and other techniques. These changes alter the overall audio but are not necessarily made with the intent to influence the listener’s perceptions.
Audio manipulation, on the other hand, may involve deleting audio snippets or copying and moving segments of conversations into positions that change the meaning of a speaker’s conversation or manipulate the listener’s perception. Not all audio segment deletions have negative implications. Audio montages summarizing a trial, Congressional hearing, or other lengthy event, presented by news outlets with the intent to present the listener with an unbiased summary of key events, are not necessarily manipulative.
Challenges
Forensic video examiners who conduct video authentication face many challenges today and in the future. These challenges include multiple types of video recording systems with


various operating systems and video storage locations, increased availability and usability of video editing software, multiple video coder-decoders (codecs), and continuous advances in technology that directly impact the video authentication process.
Challenge From Artificial Intelligence
In 2017, researchers in Artificial Intelligence (AI) successfully used open source software and a computer with a basic gaming video card to conduct deep learning on two input data sources, transferring select portions of one data set onto the other to create fake videos in a process called “deepfake.” The first data set included celebrity faces from digital photos and videos publicly available on the Internet. The second data set contained pornographic videos. The researchers repeatedly transferred celebrity faces onto the pornographic videos and posted the results to the Internet for public display of their work.
Shortly after this technological breakthrough, an Internet website appeared that aided the novice in producing pornographic videos using a celebrity’s face, or the face of another innocent person, with the same methodology. The impact of this advancement is that it has become easier to manipulate videos for malicious purposes without the need for high-end computers and CGI or VFX software. A well-informed computer user now needs only a computer and knowledge of a publicly available website tailored to this process to edit a video for nefarious purposes.


An example of a non-pornographic deepfake video is offered in Figure 2 below.
Figure 2 - Frame 1 of Deepfake YouTube Video [8]
Figure 3 - Person 1 Body, Hair, Ears, & Face Combined With Person 2 Face Used To Create
Deepfake Video [8]
The video frame above contains a compilation of two people. The deepfake video persona in Figure 2 and in the middle picture of Figure 3 was created from Person 1’s video, retaining Person 1’s body, hair (including part of the facial hair), and voice. Person 2’s face was inserted, in part, into the deepfake persona. Note the red box area on both Person 2 and the deepfake face in Figure 3 above. The contents of the red box area on Person 2 illustrate the impact of AI in video manipulation. The deepfake video persona is completely different from


either Person 1 or Person 2. Authenticating deepfake videos will present unique challenges to detect what is altered in the original video.
Challenge From Different Types of Video Recorders
Digital video recorders exist in a multitude of different shapes, sizes, manufacturers, operating systems, and configurations. Some devices provide the forensic video examiner distinctive metadata for use in video authentication. The same make and model of video recorder may offer optional settings and configurations to the user. These options may overlay a title or clock on a video or present an option to include or exclude metadata in a video file upon export. These are just a few examples of the challenges video recorders present to video authentication.
Challenge From Mobile Devices
Along with the ability of mobile devices to record videos, those who use mobile phones to record videos can easily edit the video with pre-loaded or third-party software applications while the video is on the mobile device. Mobile phone manufacturers have made it easy to synchronize mobile devices with computers to give the user access to robust video editing and enhancement software. Mobile devices have become the preferred method of communication in the 21st century. Videos, or links to download videos, are transferred between computers and mobile devices via Multimedia Message Systems (MMS), iPhone Messages, and email. These communications present the challenge that a video located on a mobile device may not have been created with the camera on the device containing the video. An example from actual casework involved a video of criminal activity transferred from a Motorola phone to an iPhone: when the witness purchased the new iPhone, the provider transferred data to it from the Motorola phone. The video of interest, created on
the Motorola phone, was accessible on the new iPhone by the user to show investigators, but the video was not stored in the normal camera folder when collected using mobile forensic processing tools. The video was stored in the iPhone file system in a location consistent with other images and videos received from outside the phone's camera system.
Challenge From Advances In Technology & New CODECs
Apple introduced Live Photo in early 2016 in its iPhone 6S and 6S Plus running iOS 9 and higher. In 2017, with the introduction of iOS 11, Apple implemented a new image (photograph) file type and a new video format with a new video codec. The new video codec, High Efficiency Video Coding (HEVC), implemented the H.265 video compression standard. The HEVC codec was also available as one of the codecs for High Efficiency Image File (HEIF) photographs in iOS 11 on iPhone 7 and later devices. Apple also included these in macOS High Sierra. With those systems, the user has an option to change the camera settings and use the older Joint Photographic Experts Group (JPEG) image file format and H.264 video compression for movies, the legacy multimedia file types [9].
Apple devices have Live Photo turned on by default from the iPhone 6S and 6S Plus using iOS 9 through the latest devices and iOS 12.1.2 at the time of this paper. Apple described Live Photo on its support website as the camera capturing 1.5 seconds of activity before the photo and 1.5 seconds of activity after the photograph was taken. Apple refers to the photograph as the "key photo" and allows the user to change the key photo to any frame in the approximately 3 seconds of recording [10].
Forensic analysis of an Apple iPhone using Live Photo revealed not only the key photo in both JPEG and HEIC forms, but also a video file with the same file name containing the entire recorded video (a derivative or sidecar video). The new Apple codecs and derivative videos
present unique challenges. The new codecs are not viewable on Windows computers without downloadable plug-ins that install updated codecs. Many forensic analysis tools at the time of this paper do not support HEIC and HEVC files.
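As an illustration of container-level triage for these newer formats, the sketch below (Python; the function name and sample file are hypothetical) reads the leading 'ftyp' box of an ISO Base Media File Format file, the family used by MOV, MP4, and HEIC, and reports its major brand. Real casework should rely on validated tools and full container parsing, not this minimal check.

```python
import struct

def major_brand(path):
    """Read the leading ISO Base Media File Format 'ftyp' box and return
    its major brand (e.g., b'heic' for HEIF stills, b'qt  ' for QuickTime
    movies). Returns None if the file does not begin with an ftyp box."""
    with open(path, "rb") as f:
        header = f.read(12)
    if len(header) < 12:
        return None
    size, box_type, brand = struct.unpack(">I4s4s", header)
    if box_type != b"ftyp":
        return None
    return brand

# Minimal synthetic file for illustration: a bare ftyp box declaring
# the major brand b'heic' with one compatible brand b'mif1'.
with open("sample.heic", "wb") as f:
    f.write(struct.pack(">I4s4s4s", 16, b"ftyp", b"heic", b"mif1"))

print(major_brand("sample.heic"))  # b'heic'
```

A brand check of this kind only suggests a container family; it does not establish which codec the streams inside actually use.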
A video authentication framework is needed that can be adapted to changes in technology, codecs, other challenges noted above, and any challenges in the future. The proposed video authentication framework is offered with modules that may be utilized in a general workflow that facilitates the inclusion of new or updated techniques as technology advances.
Scope
This thesis proposes a framework that incorporates the analysis of the different features of a digital recording into a workflow for digital video authentication that may meet legal expectations for authentication.
The multimedia forensic science community does not have a digital video authentication framework. A framework would need to meet legal expectations for acceptance in court when used by the forensic video examiner in expert testimony. The development of a proposed framework will also need to use video and audio authentication methodology based upon a series of testing techniques / methods that are peer-reviewed and accepted in the multimedia forensic science community.
There are individual techniques / methods from peer-reviewed publications and conferences that are accepted in the multimedia forensic science community, but the community does not have a structured process or system that assesses and uses the appropriate methods for inclusion / exclusion in video authentication examinations.
This thesis proposes to research, develop, and test a video authentication framework that is generally reliant on three interrelated areas.
• Framework use (approach) is based upon the general analysis question posed to the forensic video examiner in the employment of the scientific method.
• The decision to include or exclude specific methods, in the video authentication methodology toolbox, to use in the execution of the framework is based upon the digital multimedia file submitted for examination.
• Development of an evaluation tool for the forensic video examiner to use to validate each proposed technique or method in the methodology toolbox, based upon method validation testing and legal assessment of each method.
While acceptance of the proposed framework for video authentication by the courts will always be decided on a case-by-case basis dependent upon each case's facts, the proposed framework offers a structured approach to assess and use forensic science community accepted video authentication techniques or methods that are evaluated for reproducibility, repeatability, accuracy, and precision while meeting the general legal requirements recognized by courts in the international community, the U.S., and many countries around the world.
The proposed digital video authentication framework is not carved in stone as a rigid and low-level step-by-step protocol that must be explicitly followed by a forensic examiner. Instead, this framework is intended to be used as the basic foundation to implement the required scientific methodology using peer-reviewed and community-recognized video authentication methods as experiments or tests to detect video authenticity. The individual video authentication techniques employed in this thesis and noted in various scientific papers are intended to be updated as new methods are discovered, refined, or tested in the community and as technology advances in the
future. The individual techniques are considered to be modules that can be inserted or removed as appropriate depending upon the video file and circumstances of the case. The thesis also recognizes the legal aspects of video authentication. The proposed framework is intended to provide forensic examiners, who possess a solid foundation of training and education, with a structured process for use in authenticating digital video in court and other types of litigation.
CHAPTER II
LEGAL CONSIDERATIONS OF AUTHENTICATION
This thesis addresses a proposed framework for digital video authentication, based upon forensic science, that may be used by a court-recognized expert to aid the trier of fact in determining the authenticity of digital videos offered to the court. The proposed framework is intended for use by the plaintiff or prosecution and/or the defense to scientifically illustrate that a digital video has or has not been manipulated. The proposed framework is also useful to the court for recognizing the types of scientific tests that a digital video file should undergo for authentication.
It is important to understand the legal aspects of authentication and to recognize the importance of a scientific based framework for digital video authentication. Appendix A contains information concerning the following:
1) Authentication of Electronically Stored Information in the U.S.
2) Survey of 2018 U.S. Federal Cases of Questioned Video Authentication
3) Authentication of Electronically Stored Information in International Criminal Court
4) Expert Testimony
The information in Appendix A influenced the development of the proposed framework and the overall legal aspects for the framework use.
Legal Aspects For Use In Proposed Framework
This portion of the thesis clarifies the use of the scientific method in the framework development, the criteria to assess individual techniques used in the framework, and a discussion on expert testimony approach when using the proposed framework.
Scientific Method
In Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 113 S. Ct. 2786, 125 L. Ed. 2d 469 (1993), hereafter noted as the Daubert case, the U.S. Supreme Court specifically articulated the basis of the reliability that the trial judge should use to assess potential expert testimony when it wrote "...in order to qualify as 'scientific knowledge,' an inference or assertion must be derived by the scientific method" [11].
The Oxford Dictionary described the scientific method as “a method of procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses” [12].
The National Center for Media Forensics (NCMF) teaches and uses a six-step scientific method in scientific research. (See Appendix B for the scientific method and a general explanation of its use in the proposed framework development and use in video authentication.)
Criteria Used To Assess Techniques
The scientific method noted in Appendix B contains a "process" section noted as experiments / tests. This section is where the examiner uses a series of smaller testing procedures or techniques to test video authenticity. The examiner subsequently analyzes the smaller testing procedures or techniques, which is then reported in the analysis section of the scientific method. The decision about which specific smaller testing techniques to use should be based on the repeatability and reliability of the proposed scientific technique(s) / method(s).
The examiner who uses the framework should assess potentially new techniques / methods using criteria similar to the courts in the jurisdiction where the examiner may be expected to provide expert testimony on the results of the video authentication. The U.S. Federal
courts, ICC, and courts in many other countries have generally accepted criteria to test forensic science, including the framework and the techniques used in the framework, based on the methodology noted in the Daubert case as noted above. Additionally, examiners who use the framework in the future and update the techniques in this framework should use similar criteria for assessing the use of the techniques as new methods are developed. The assessment criteria are offered below.
Has The Theory Or Technique Been Tested?
The underlying methodology may be based upon basic forensic science principles that have been previously tested. One example is device identification based upon Photo Response Non-Uniformity (PRNU) comparison methodology, which is based upon the forensic science principle of individualization. The foundations of individualization in forensic science have an extensive history of testing. The examiner should review and retain the documentation to support their answers to this criterion.
Has The Theory Or Technique Been Subject To Peer Review & Publication?
The video forensic examiner who will use this framework should research each proposed technique in the series of smaller testing procedures or techniques to test the video’s authenticity in order to validate the proper level of peer review and publication. Peer review and publication have multiple levels of acceptance, which may be inside or outside the laboratory.
Peer review and publication. The preferred peer review process should start with the submission of a paper to an entity that is associated with scientific research including multimedia forensics such as the American Academy of Forensic Sciences (AAFS) and Digital Forensic Research Workshop (DFRWS). The preference for these entities is based upon the stringent peer review process undertaken by those organizations. Papers are submitted to a double-blind
peer review, which means that both the reviewer(s)' and author(s)' identities are hidden from each other throughout the review process. Papers submitted to, and accepted by, AAFS and DFRWS for review are subsequently published (see publication section below).
There are other, less stringent peer review processes that are acceptable, such as those that are not reviewed in the double-blind process. An example of other processes includes research associated with theses and dissertations, which are subjected to peer review by an advisory committee.
The preferred method of publication is to use journals linked to the entities that use a stringent peer review process. AAFS publishes the Journal of Forensic Sciences; DFRWS publishes in the journal Digital Investigation.
What Is The Error Rate Of The Theory Or Technique?
The next criterion for assessing the theory or technique based upon the methodology noted in Daubert involves error rates. Error rates are used across the forensic sciences to characterize the likelihood that a specific result is correct or accurate.
In 2018, the Scientific Working Group on Digital Evidence (SWGDE) published a document titled "Establishing Confidence in Digital and Multimedia Evidence Forensic Results by Error Mitigation Analysis" [13] that directly addressed error rates relevant to the field of digital and multimedia forensics through the use of error mitigation.
The SWGDE analysis begins with three questions.
• Is the theory or technique based upon valid science?
• Are the implementations of the technique correct and appropriate for the environment where they are used?
• Are the results interpreted correctly? [13]
The SWGDE error mitigation approach addresses potential sources of error, takes steps to mitigate systemic errors such as those commonly arising from the implementation, and generally uses a quality assurance approach of continuous human oversight and an improvement process to ensure the production of reliable results. SWGDE noted that error rates should be included in the overall error mitigation analysis when those rates can be calculated [13].
Is The Theory Or Technique Accepted In The Forensic Science Community?
The next criterion for assessing the theory or technique is determining the acceptance of that theory or technique in the forensic science community. This may include searching court cases for references to the specific technique's acceptance or recognition; researching presentations at forensic science organization events where peer review of presentations occurs (e.g., AAFS, DFRWS, Audio Engineering Society (AES) Technical Committee - Audio Forensics, etc.); reviewing peer reviews found in forensic science publications; and reviewing best practices published by community organizations like SWGDE and the American Society for Testing and Materials (ASTM) related to digital and multimedia forensics.
What Are The Standards Controlling The Use Of The Theory Or Technique?
The last criterion for assessing the theory or technique is to determine the existence and maintenance of any standards controlling the operation or use of the theory or technique. A review of the SWGDE best practices where the theory or technique was cited may list standards controlling its use.
Expert Testimony Approach Using Proposed Framework
The survey of questioned video authentication found in 2018 U.S. Federal cases revealed that the Grimm et al. findings of 2009 can still be found in federal cases almost 10 years later. While the actual merit of the challenges to the video authentication cannot be addressed in this
thesis, the survey found that in no instance did the challenging party use any technique or method noted later in this proposed framework. It was evident from the review of the cases in the survey that the trial court expected specific scientific evidence to indicate any alteration of the video or audio, consistent with the Lorraine case, which offered a "how to" guide related to ESI and in part discussed authentication.
The proposed framework is a process that may be used by a court recognized and properly trained forensic video expert to authenticate a video focusing on its contents, substance, and distinctive characteristics.
CHAPTER III
LIGHT TO DIGITAL VIDEO FILE
This chapter provides a quick overview of the digital video file creation chain, a general description of the digital multimedia file, and a description of camera sensor noises. It is important to understand these concepts, as later chapters refer to some of them.
Video Creation Chain
Today's video files are created with and without audio depending upon camera configuration. It is important to understand the digital video file creation chain before one can analyze videos for alterations or editing. Figure 4 below illustrates the video creation chain with audio content and builds upon the concepts in the subsequent sections of this chapter.
[Figure 4 diagram: the audio chain flows from the acoustic wave through the microphone, amplifier, analog to digital conversion (ADC), and digital audio processing (DAP) to the container file; influence factors include voices, audio environments, spatiality, echoes, reverberation, the signal's amplitude and DC offset, the signal's resolution effects, and susceptibility to ENF contamination. The video chain flows from light through the lens, IR and anti-aliasing filters, color filter arrays (Bayer (GRGB), CYGM), and the sensor / matrix (CMOS, CCD), the source of sensor noises, through digital video processing (white balance, noise reduction, sharpening, aperture, gamma correction, etc.) to the container file; influence factors include auto-exposure, auto-focus, and image stabilization.]
Figure 4 - Final Video Creation Chain With Influence Factors
Figure 4 illustrates the overall video creation chain as light rays reflect off an object and sound emanates from an object. The video creation chain (with audio) shows both the audio and video stream recording flows from left to right in Figure 4 above. The illustration notes single video and audio streams in the creation of the video file. It is important to also note that a video file may contain more than one audio and video stream.
Audio Stream Creation Chain
The top half of Figure 4 addresses the audio stream creation / recording flow that produces the final audio stream, flowing from left to right. An acoustic wave originates from an object in the scene and enters the microphone of the camera, which passes the signal to the amplifier. The signal continues through the amplifier to the analog to digital converter (ADC). The ADC passes the signal to the digital audio processing (DAP) stage, which applies processing effects before writing the audio stream to the original output / container file.
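The ADC's resolution effect can be illustrated with a minimal uniform-quantization sketch (Python; function names, bit depths, and the test tone are illustrative, not part of any cited method): fewer bits mean coarser quantization steps and therefore more quantization noise in the final audio stream.

```python
import math

def quantize(sample, bits):
    """Idealized ADC: uniformly quantize a sample in [-1.0, 1.0] to a
    signed integer code of the given bit depth."""
    levels = 2 ** (bits - 1)               # e.g., 32768 per polarity at 16-bit
    code = round(sample * (levels - 1))
    return max(-levels, min(levels - 1, code))

def max_error(samples, bits):
    """Largest round-trip quantization error across the samples."""
    scale = 2 ** (bits - 1) - 1
    return max(abs(s - quantize(s, bits) / scale) for s in samples)

# One cycle of a sine sampled at 48 points: reducing the bit depth
# from 16 to 8 bits visibly increases the quantization error.
sine = [math.sin(2 * math.pi * n / 48) for n in range(48)]
print(max_error(sine, 8) > max_error(sine, 16))  # True
```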
Video Stream Creation Chain
The bottom half of Figure 4 addresses the video stream creation / recording flow that produces the final video stream, flowing from left to right. Light rays reflect off an object in the video scene and enter the lens of the camera, which passes them to the filters. The light rays continue through the filters to the color filter array and on to the sensor. The sensor digitizes the light rays and passes the result through digital video processing (DVP) to the original output file.
Influences on Audio Stream During Creation
There are multiple creation aspects that influence the final audio stream in the original output file that the forensic examiner should consider in the overall authentication process as part of the framework. Figure 4 addresses some, but not all, of the creation aspects that influence the
final audio stream. Figure 4 notes these creation aspects above the overall audio stream creation chain. Above the acoustic wave, Figure 4 notes that voices, audio environments, echoes, and reverberations may influence the final audio stream. Spatiality has a major impact on the audio stream's creation. The forensic examiner should consider the signal's amplitude and direct current (DC) offset associated with the amplifier. The forensic examiner should also consider the signal's resolution in the ADC process. Figure 4 also notes that all of the previous audio stream creation elements (microphone, amplifier, and ADC) are susceptible to Electric Network Frequency (ENF) influence depending upon the camera power source. The forensic examiner should consider ENF during the authentication process.
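As a toy illustration of the ENF idea, the sketch below (Python; all names and values are illustrative, and real ENF analysis uses long recordings and reference-grid databases rather than this naive approach) searches a narrow band of a discrete Fourier transform for the dominant mains-frequency component hidden in a synthetic signal:

```python
import math

def dominant_frequency(signal, sample_rate, f_lo, f_hi):
    """Search a band of a naive DFT for the bin with the most energy."""
    n = len(signal)
    best_f, best_mag = None, -1.0
    for k in range(int(f_lo * n / sample_rate), int(f_hi * n / sample_rate) + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        mag = re * re + im * im
        if mag > best_mag:
            best_f, best_mag = k * sample_rate / n, mag
    return best_f

# Synthetic 1-second recording: a louder 137 Hz tone (stand-in for scene
# audio) plus a faint 60 Hz mains hum picked up through the power supply.
rate = 1000
sig = [0.5 * math.sin(2 * math.pi * 137 * t / rate)
       + 0.05 * math.sin(2 * math.pi * 60 * t / rate) for t in range(rate)]
print(dominant_frequency(sig, rate, 45, 75))  # 60.0
```

In practice, the recovered ENF trace over time would be compared against a logged reference of the power grid frequency.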
Influences on Video Stream During Creation
There are multiple creation aspects that influence the final video stream in the original output file that the forensic examiner should consider in the overall authentication process as part of the framework. Figure 4 addresses some, but not all, of these creation aspects, noted below the overall video stream creation chain. Below the lens, Figure 4 notes that auto-exposure, auto-focus, and image stabilization may influence the final video stream and be of interest to the forensic examiner in the authentication process. The forensic examiner should be interested in the infrared (IR) and anti-aliasing filters applied to the video stream, as noted in Figure 4 under filters. The forensic examiner should consider the type of color filter array used in the video stream creation, such as the Bayer (GRGB) and CYGM (cyan, yellow, green, and magenta) filters noted in Figure 4. A major contributor to camera identification is the Photo Response Non-Uniformity (PRNU) characteristics in the video stream from the sensor as noted in
Figure 4. The forensic examiner should consider the white balance, noise reduction, sharpening, and gamma correction aspects of the digital video processing applied to the original video stream.
Combined Audio & Video Creation Chain With Influences
Figure 4 combines all of the video creation chain aspects with the previously referenced influences as a reference for use during the video authentication process.
Digital Multimedia File
The digital multimedia file has a file header, metadata, video stream(s), and may have audio stream(s). See Figure 5 below for an example of a digital multimedia file using a book analogy.
[Figure 5 diagram, book analogy: the file format container (.avi, .mp4, .mov, etc.) is the book; metadata (sample table, location, date, recorder / camera info, index chunks, etc.) is analogous to the book title, copyright pages, and table of contents; Chapter 1 is a video stream (encoded video: H.263, H.264, H.265, etc.); Chapter 2 is an audio stream (encoded audio: AAC, WMA, PCM, etc.).]
Figure 5 - Example of Digital Multimedia File Using Book Analogy
Figure 5 illustrates an example of a digital multimedia file using a book analogy. The digital multimedia file, regardless of the audio or video codec or container type (lossy or lossless compression), has at least a file header and the relevant streams. The example in Figure 5 contains one video stream and one audio stream. Digital multimedia files frequently contain metadata, which may be a very small or minimal amount of data. Additionally, metadata may reside before the audio / video streams, after them, or in both locations. Metadata could also reside between streams.
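For illustration, the sketch below (Python; it assumes the ISO Base Media File Format used by the MP4 / MOV family, not RIFF-based AVI, and the synthetic file is a hypothetical minimal example) walks the top-level boxes of a buffer, showing how the header ('ftyp'), metadata ('moov'), and stream payload ('mdat') are laid out:

```python
import struct

def top_level_boxes(data):
    """Walk the top-level boxes of an ISO Base Media File Format buffer
    (MP4 / MOV family) and return their types and sizes. 64-bit and
    run-to-end box sizes are omitted for brevity."""
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size < 8:
            break
        boxes.append((box_type.decode("ascii"), size))
        offset += size
    return boxes

# Minimal synthetic file: ftyp header, empty moov metadata box, and an
# mdat box carrying four bytes of stream payload.
ftyp = struct.pack(">I4s4s4s", 16, b"ftyp", b"isom", b"mp41")
moov = struct.pack(">I4s", 8, b"moov")
mdat = struct.pack(">I4s", 12, b"mdat") + b"\x00" * 4
print(top_level_boxes(ftyp + moov + mdat))
# [('ftyp', 16), ('moov', 8), ('mdat', 12)]
```

In a real file the 'moov' box nests many further boxes (sample tables, timestamps, device information), which is why metadata placement and ordering is useful in file structure analysis.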
Sensor Noises
Digital video cameras today use either CMOS or CCD sensor chips. The sensor chips generate noises during the image or frame creation process, as noted in Figure 4 above. It is important to understand the types of noises a sensor generates and their general origin as the forensic examiner develops their method toolbox for video authentication. Some recent research in video authentication methods uses different noises as indicators of video alteration and for camera identification. The two sensor chips have similar noise types, but there are a couple of differences. This section addresses each sensor chip's noise types.
CMOS Sensor
The Complementary Metal-Oxide-Semiconductor (CMOS) sensor is common in older and lower end cameras, as it is lower cost to produce. The following figure offers a general block diagram of the CMOS sensor noise model based upon research published by Gow, Renshaw, Findlater, Grant, McLeod, Hart, and Nicol (2007) [14].
Figure 6 - General Block Diagram of CMOS Sensor Noise Model [14]
As noted in the figure above, the noiseless signal arrives at the sensor surface, and noise accrues throughout the process until the noisy frame / image leaves the sensor. The following table offers explanations of the noise types noted in the block diagram above.
Table 1 - CMOS Sensor Noise Types [14]

Noise Type | Origin | Manifestation | Description
SNph - photon shot noise | CMOS sensor | Additive temporal variance | Incident pixel illumination
SNdark - dark-current shot noise | CMOS sensor | Additive temporal and spatial variance; fixed pattern noise | Temperature
PRNU - photo response non-uniformity | CMOS sensor | Multiplicative spatial variance only; fixed pattern noise | Incident pixel illumination
PCT - pixel cross talk | CMOS sensor | Additive temporal and spatial variance; fixed pattern noise | Incident pixel illumination
Ntherm - combined thermal noise | CMOS support integrated circuits | Additive temporal and spatial variance | Temperature
N1/f - low frequency flicker noise | CMOS support integrated circuits | Additive temporal variance; fixed pattern noise | Temperature
Nrts - random telegraph signal | CMOS support integrated circuits | Additive temporal variance; fixed pattern noise | Temperature
Nrn - row noise | CMOS sensor and CMOS support integrated circuits | Additive temporal variance | Temperature
Ncn - column noise | CMOS sensor and CMOS support integrated circuits | Additive temporal variance | Temperature
Nadc-q - analog to digital quantization | CMOS support integrated circuits | Additive; image content dependent | Variance of image data
Colorization - pixel cross talk | CMOS support integrated circuits | Additive; image content dependent | Variance of image data
CCD Sensor
The Charge-Coupled Device (CCD) sensor is common in newer and mid to higher end cameras; it is more expensive to produce but tends to offer better quality video. The following figure offers a general block diagram of the CCD sensor noise model based upon research published by Irie, McKinnon, Unsworth, & Woodhead (2008) [15][16].
Figure 7 - General Block Diagram of CCD Sensor Noise Model [15][16]
As noted in the figure above, the noiseless signal arrives at the sensor surface, and noise accrues throughout the process until the noisy frame / image leaves the sensor. The following table offers explanations of the noise types noted in the block diagram above.
Table 2 - CCD Sensor Noise Types [15][16]

Noise Type | Origin | Manifestation | Description
SNph - photon shot noise | CCD sensor | Additive temporal variance | Incident pixel illumination
PRNU - photo response non-uniformity | CCD sensor | Multiplicative spatial variance only | Incident pixel illumination
FPN - offset fixed-pattern noise | CCD sensor | Additive spatial variance only | Temperature, exposure time
SNdark - dark-current shot noise | CCD sensor | Additive temporal and spatial variance | Temperature, exposure time
Nother - combined flicker noise, transistor dark currents, and other minor contributors | CCD sensor and CCD support integrated circuits | Additive temporal and spatial variance | Temperature, CCD readout rate
Ntherm - combined thermal noise | CCD support integrated circuits | Additive temporal and spatial variance | Temperature
Nr - reset noise | CCD support integrated circuits | Additive temporal and spatial variance | Temperature
Nread - readout noise; combined Nr + Ntherm + Nother | As per Nr, Ntherm, and Nother | Additive temporal variance | Temperature, CCD readout rate
Nd - demosaicing noise | CCD support integrated circuits | Multiplicative noise amplification or attenuation | Demosaicing implementation, combined sensor noise
Nfdt - post image-capture effects | CCD support integrated circuits | Multiplicative noise effect | Parameters for image enhancement, combined sensor noise
Nq - quantization noise | CCD support integrated circuits | Additive noise; image content dependent | Variance of image data; sets lower noise limit for non-trivial image content
Overview of Sensor Noise Types
The sensor noise types are briefly described below.
Photon Shot Noise
Photon shot noise is described by Gow et al., 2007, as an inescapable uncertainty in the number of photons collected in the photodiode, due to the quantum nature of light [14]. It reflects variations in the number of photons detected, as photon arrivals occur independently of each other.
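Photon arrivals of this kind are commonly modeled as Poisson-distributed, for which the variance of the count equals its mean. The small simulation below (Python; Knuth's sampling method, with an illustrative mean of 100 photons per exposure, not a value from the cited research) makes that relationship visible:

```python
import math
import random

def poisson_sample(rng, lam):
    """Draw one Poisson-distributed photon count (Knuth's method)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Simulate one pixel receiving an average of 100 photons per exposure:
# for Poisson statistics the variance of the counts tracks the mean.
rng = random.Random(42)
counts = [poisson_sample(rng, 100.0) for _ in range(10_000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(round(mean), round(var))  # both close to 100
```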
Dark Current Shot Noise
Hytti (2006) described dark current shot noise as thermal generation of electrons in the silicon that is usually different from pixel to pixel [17].
Photo Response Non-Uniformity
Irie et al., 2008, described PRNU as the difference in pixel responses to uniform light sources [15].
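A common estimation approach in the PRNU literature averages noise residuals (frame minus denoised frame) over many frames so that scene content and temporal noise cancel while the fixed pattern remains. The sketch below is a toy one-dimensional version of that idea (Python; a simple moving-average denoiser stands in for the wavelet filters used in practice, and all data are synthetic):

```python
import random

def denoise(row):
    """Toy denoiser: 3-tap moving average (real PRNU work typically
    uses wavelet-based denoising filters)."""
    return [sum(row[max(0, i - 1):i + 2]) / len(row[max(0, i - 1):i + 2])
            for i in range(len(row))]

def prnu_estimate(frames):
    """Average the noise residuals (frame minus denoised frame): scene
    content and temporal noise average out, the fixed pattern remains."""
    n = len(frames[0])
    acc = [0.0] * n
    for frame in frames:
        smooth = denoise(frame)
        for i in range(n):
            acc[i] += frame[i] - smooth[i]
    return [a / len(frames) for a in acc]

# Synthetic sensor row: a fixed multiplicative gain pattern modulates
# each exposure of a flat, randomly lit scene plus temporal noise.
rng = random.Random(7)
gains = [1.0 + rng.uniform(-0.02, 0.02) for _ in range(64)]
frames = []
for _ in range(500):
    light = rng.uniform(80, 120)
    frames.append([light * g + rng.gauss(0, 1) for g in gains])

est = prnu_estimate(frames)
true_resid = [100 * (g - d) for g, d in zip(gains, denoise(gains))]
corr = sum(a * b for a, b in zip(est, true_resid))
print(corr > 0)  # the estimate aligns with the true fixed pattern
```

In camera identification, a fingerprint estimated this way from known-camera footage is correlated against the residual of a questioned frame.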
Pixel Cross Talk
Pixel cross talk is described by Gow et al., 2007, as a phenomenon that causes mixing, image blur, and a degraded signal-to-noise ratio after color reconstruction in CMOS sensors [14].
Offset Fixed-Pattern Noise
Irie et al., 2008, described offset FPN as changes in dark currents from variations in pixel geometry, which originate from fabrication of the sensor [15].
Thermal Noise
Irie et al., 2008, described thermal noise as fluctuations of an electric current inside an electrical conductor from the random thermal motion of charge carriers [15].
Flicker Noise
Van Houten and Geradts, in their 2009 research, cited flicker noise as a temporal noise where charges are trapped in surface states and subsequently released after some time into the charge-to-voltage amplifier [18].
Random Telegraph Signal
Ishida, Kagawa, Komuro, Zhang, Seo, Takasawa, and Kawahito (2018) described random telegraph signal (RTS) noise as mainly generated by traps in the CMOS source follower transistor [19].
Row Noise
Gow et al., 2007, noted that when a row in the photodiode array is released from reset, all pixels in that row are unprotected from noise entering through the reset line, transfer gate, or read transistor. Gow et al., 2007, noted that row noise manifests in images as horizontal lines with fixed and temporal components [14].
Column Noise
Gow et al., 2007, noted column noise is introduced by the sample and hold capacitors during reset [14].
Reset Noise
Irie et al., 2008, described reset noise as a specific type of thermal noise originating from the capacitors (kTC) when resetting the charge sensor capacitor to a reference voltage [15].
Demosaicing Noise
Irie et al., 2008, described demosaicing noise as arising from the interpolation of the RGB color data for each pixel [15].
ADC Quantization Noise
Gow et al., 2007, noted this noise arises from the linear quantization of the input signal based upon the analog to digital conversion architecture [14].
CHAPTER IV
PROPOSED FRAMEWORK
The structure for the proposed framework uses a repeatable approach. The proposed framework analyzes the file structure, the video stream(s), the audio stream(s), and device verification (if applicable). These analyses are cumulative in their influence on reaching a conclusion about the authenticity of the questioned video file. The file structure analysis is performed on a forensic duplicate (working copy) of the submitted video file. The figure below illustrates the proposed framework.
* Note: Comparisons With Questioned Device(s), Known Device Library / Database, & Core Software Library / Database
Figure 8 - Proposed Video Authentication Framework
File Structure Analysis
The file structure analysis includes the sub-analysis components of file format analysis, header analysis, and analysis of hex data.
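Header analysis typically begins by comparing the first bytes of the working copy against known container signatures. A minimal sketch of that idea follows (Python; the two signatures shown are a small illustrative subset, not an exhaustive reference, and the function name is hypothetical):

```python
def identify_container(header):
    """Classify a file by its first bytes. RIFF files (AVI / WAV) begin
    with the ASCII tag 'RIFF'; MP4 / MOV-family files carry an 'ftyp'
    box type at offset 4. Anything else is flagged for manual review."""
    if header[:4] == b"RIFF":
        return "RIFF container (AVI/WAV)"
    if header[4:8] == b"ftyp":
        return "ISO BMFF container (MP4/MOV family)"
    return "unknown: inspect hex data manually"

print(identify_container(b"RIFF\x24\x00\x00\x00AVI "))  # RIFF container (AVI/WAV)
print(identify_container(b"\x00\x00\x00\x18ftypisom"))  # ISO BMFF container (MP4/MOV family)
```

A signature match is only the starting point; inconsistencies between the signature, the file extension, and the deeper hex structure are what the file structure analysis is designed to surface.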
Workflow Optimization
Management may wish to optimize the workflow of the forensic video examiner. A workflow optimization method in the proposed framework would be the insertion of a logic condition at this point. A decision point in the workflow is offered below.
Figure 9 - Workflow Optimization Based On File Structure Analysis
The figure above illustrates the decision point: if the file structure is consistent with an original file, a workflow optimization condition would be to continue to the video and audio stream analyses. However, if the file structure is not consistent with an original file, a workflow optimization condition would be to stop the analysis and report the findings.
The simple detection of an altered or edited video may be sufficient for a case, and the workflow optimization is a useful tool for management in an effort to manage resources and personnel. Nevertheless, there are incidents where it is important to know the location of the tampering and to determine whether the alteration or editing is consistent with an intent to manipulate the viewer's perspective.
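The decision point can be expressed as a simple logic condition; the sketch below (Python; the function name and return strings are illustrative) also carries the case-dependent exception of localizing the tampering:

```python
def next_step(structure_consistent_with_original, require_localization=False):
    """Workflow optimization decision point (illustrative): stop and
    report once the file structure analysis already shows editing,
    unless the case requires localizing the tampering."""
    if structure_consistent_with_original:
        return "continue: video and audio stream analyses"
    if require_localization:
        return "continue: localize tampering in stream analyses"
    return "stop: report file structure findings"

print(next_step(True))   # continue: video and audio stream analyses
print(next_step(False))  # stop: report file structure findings
print(next_step(False, require_localization=True))
```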


File Preparation For Audio & Video Stream Analysis
The file may be prepared for both the audio stream analysis and the video stream analysis through bifurcation. The audio stream may be copied to a Wave PCM file for subsequent authentication. The stream copy is accomplished without changing the sample rate. The video stream may need to be transcoded out of a proprietary file to a lossless format for subsequent authentication processing. However, the frames per second (FPS), dimensions, or other important properties should not be changed. The most advisable way to transcode the video or audio streams is by using the multimedia stream hashing method [20]. Using this method, a stream hash can be calculated prior to transcoding and compared with the bifurcated file's stream hash to verify no modifications have been made to the target signals.
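As a conceptual illustration of this verification step (not the cited stream hashing method itself, and using made-up container and payload bytes), the following Python sketch hashes only the codec payload so that re-wrapping the stream in a new container leaves the hash unchanged:

```python
import hashlib

def stream_hash(payload: bytes) -> str:
    """Hash only the codec payload, ignoring any container bytes."""
    return hashlib.sha256(payload).hexdigest()

# Hypothetical codec payload extracted from the original container.
audio_payload = bytes(range(256)) * 4

# Re-wrapping the same payload in a different container changes the
# file bytes but must not change the stream itself.
original_file = b"MOV_HEADER" + audio_payload + b"MOV_FOOTER"
bifurcated_file = b"WAVHDR" + audio_payload

# Hash the payloads, not the whole files.
h_before = stream_hash(audio_payload)
h_after = stream_hash(bifurcated_file[len(b"WAVHDR"):])
assert h_before == h_after  # stream unchanged by re-wrapping
```

In practice the stream hash would be produced by the forensic tool chain on the demuxed stream packets rather than on synthetic buffers as above.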
Audio Authentication
Each audio stream should be subjected to a series of smaller testing methods or techniques using a previously published and forensic community recognized framework [22] [24] [25]. The series of testing should include methods for global and local analysis areas.
Video Authentication
Each video stream in the video file should be subjected to a series of smaller testing methods or techniques as part of the authentication framework. The series of testing should include methods for global and local analysis areas.
Device Identification / Verification
Device verification analyses are useful in corroborating that a questioned video file originated from / was created by an alleged camera / recorder, or in attributing the video to an unknown recording device (specifically excluding the submitted camera / recorder). These analyses involve various techniques that could be used for just one analysis area (e.g., global) or span all three areas (global, local, and device verification).
The analysis of each section in the framework should be documented. Appendix E may be used to document the digital video authentication framework analysis.


CHAPTER V
FRAMEWORK JUSTIFICATION - RESEARCH, TESTING, & RESULTS
This chapter addresses the overall development and use of the proposed framework, previous research on analysis methods / techniques, testing of techniques, and a general overview of the testing results.
Framework Development & Use
The development and use of the proposed video authentication framework relies on three interrelated issues. The first issue is the general analysis question, based upon the scientific method (reference Appendix A). The second issue is based upon the digital multimedia file that is submitted for examination. The third issue is an assessment of the tools available in our toolbox.
Analysis Questions
The scientific method begins with the analysis question (reference Appendix A). The requestor for authentication of a questioned video submits the following question to the forensic examiner.
Analysis Question #1 (AQ-1) - Has the video stream, video stream and audio stream, or audio stream been processed or manipulated?
The request for video authentication usually involves this analysis question (AQ-1). It is important to make a key distinction in this analysis question: the difference between "processed" and "manipulated" when the forensic examiner analyzes data (step #5 of the scientific method) from the experiment / test results (step #4 of the scientific method). Reference [21] described a processed video and / or audio stream as one that has undergone recompression or transcoding without media manipulation. Media manipulation is the application of different editing techniques to audio, photographs, videos, or electronic data in order to create an illusion or deception, through analog or digital means. A manipulated video and / or audio stream involves media manipulation. The editing (media manipulation) may involve deleting video and / or audio, adding video and / or audio, enhancing either or both with intent to deceive the viewer / listener, or creating deepfake video and / or audio.
A second, and related, question may also be asked of the forensic examiner if a questioned camera / recorder is available for experiments / testing. The second question posed to the forensic examiner follows.
Analysis Question #2 (AQ-2) - Did the submitted camera / recorder create the questioned video?
The two analysis questions influence the general components involved in the authentication framework. Another aspect of the framework depends upon the questioned file's container and its contents.
Digital Multimedia File
The digital multimedia file may consist of the file header, metadata, video stream(s), and audio stream(s). The digital multimedia file may use lossy compression or lossless compression. The digital multimedia file format may be open source or proprietary. All of these factors influence the video authentication process. However, there are some common digital multimedia file components that influence analysis areas in the development of the framework. See figure below for a graphic of the general analysis areas for the proposed framework.


Framework Development

[Figure: File Structure Analysis + Video Stream(s) Analysis + Audio Stream(s) Analysis + Camera / Recorder Analysis]

Figure 10 - Framework Development General Areas of Analysis

The figure above illustrates the development and areas of the proposed video authentication framework. The proposed general analysis areas include File Structure Analysis, Video Stream(s) Analysis, Audio Stream(s) Analysis, and Camera / Recorder Analysis. The general areas are included depending upon the analytical question(s) and the digital multimedia file contents. These analyses may be global, local, or device verification.
Global analyses are conducted on the questioned video file as a whole and produce results relevant to authenticity without focusing on specific portions or segments of the multimedia streams. Local analyses are conducted on specific portions or segments of the questioned video file to assess the file's authenticity. Device verification analyses are useful in corroborating that a questioned video file originated from / was created by an alleged camera / recorder, or in attributing the video to an unknown recording device (excluding the submitted camera / recorder). These analyses involve various techniques that could be used for just one analysis area (e.g., global) or span all three areas (global, local, and device verification). An example of a technique that spans all three areas would be sensor pattern noise (SPN) analysis.


SPN analysis may be used for global analysis tests, local analysis tests, and for detecting individual characteristics within the device verification analysis.
File Structure Analysis
The file structure analysis involves a comprehensive examination of the file format, the file header, and the hex data within the digital multimedia file. The file structure examination involves a non-exhaustive list of areas to review, including the file extension, file header, various metadata components, various video and audio recorded contents, and a possible footer. Much of this data may be in hex, with some recognizable information in American Standard Code for Information Interchange (ASCII). The forensic examiner observes the file structure information to identify the number of video streams and audio streams (if audio is present), as well as the camera / recorder information and any software used to create the recorded contents.
Video Stream(s) Analysis
The video stream(s) analysis examines all video streams present within the digital multimedia file for evidence of global and local alterations or edits. The specific techniques used for testing the video stream(s) are noted in the research section of this chapter. The forensic examiner compiles the collected findings relative to each test for subsequent development of a conclusion.
Audio Stream Analysis
The audio stream(s) analysis examines all audio streams present within the digital multimedia file for evidence of global and local alterations or edits. The specific techniques used for testing the audio stream(s) are noted in the research section of this chapter. The forensic examiner compiles the collected findings relative to each test for subsequent development of a conclusion.


Camera / Recorder Analysis
The camera / recorder analysis uses the questioned device(s) to create exemplars under recording environments similar to those of the questioned digital multimedia file, and then compares the exemplars with the questioned video file for video authentication, including device verification. The specific techniques used for camera / recorder testing are noted in the research section of this chapter. The forensic examiner compiles the collected findings relative to each test for subsequent development of a conclusion.
Data Interpretation & Conclusion Development
The findings from the individual analysis areas above are combined to develop the conclusion. Each test result should be scientifically interpreted and technical descriptions should be articulated in an unbiased way. Authenticity conclusions should be factually documented and thoroughly supported by the analyses conducted.
Grigoras, Rappaport, and Smith (2012) noted in their paper "Analytical Framework for Digital Audio Authentication," presented at the Audio Engineering Society's 46th International Conference, that no scientific inquiry, including those in forensics, produces a result of absolute certainty. Therefore, conclusions in digital audio examinations related to an audio recording's authenticity should not be stated in terms of absolutes. Language implying 100% certainty should be avoided unless speaking about known alterations or deletions [22]. This approach also applies to digital video authentication.
An authentication examination may have the following conclusions:
• Consistent with an original recording.
• Inconclusive.
• Inconsistent / not consistent with an original recording.


However, the same conclusions above should also be used as a grading scale for each analysis result or finding.
Tools For Video Authentication Toolbox
The third issue in the framework development and use process is a discussion of the tools (methods / techniques) available for the video authentication framework. Each method or technique the examiner uses in the authentication framework should be subjected to testing / evaluations as to their viability for use in the framework.
The video authentication framework, as noted in the introduction chapter's scope, is proposed for use by the forensic examiner to authenticate digital videos and support their conclusions in court testimony as an expert. The forensic examiner will need to continuously update the methods / techniques in the video authentication framework. Additionally, some methods / techniques are not relevant to every use of the authentication framework, depending upon the analysis question(s) and the file's contents. The forensic examiner should conduct two overall evaluations of the tools in their toolbox for video authentication. The evaluations are a Tool Validation Testing and an Admissibility Assessment.
Tool Validation Testing
Validation testing is defined by SWGDE as "an evaluation to determine if a tool, technique, or procedure functions correctly and as intended" [23]. The first part of the test is whether the technique functions correctly. As previously noted in Chapter I, there are multiple published articles and papers on various digital video authentication techniques, but many of them involve techniques in laboratory-controlled environments that subject the videos to a specific video authentication technique that is not reproducible. The technique's reproducibility is critical to its use in forensic science. Additionally, it is important that the forensic examiner validate a technique for its intended use and confirm that the technique performs as expected.
Admissibility Assessment
The admissibility assessment is used to ensure the forensic examiner can support the technique or method in court. The Daubert case is the guiding precedent for the admissibility assessment. As previously stated in the legal aspects chapter of this thesis, the assessment should ask the following questions.
• Has the theory or technique been tested?
• Has the theory or technique been subject to peer review and publication?
• What is the error rate of the theory or technique or is error mitigation implemented?
• Is the theory or technique accepted in the forensic science community?
• What are the standards controlling the use of the theory or technique?
Refer to the Legal Aspects chapter of this document for a detailed discussion of these topics.

Updating Analyses Tools
The forensic examiner should continuously research new or updated methods and techniques for digital multimedia file analysis, audio stream analysis, video stream analysis, and camera / recorder device verification analysis.
Research For Analysis Tools
The following section covers a non-exclusive list of researched methods / techniques for use in the digital video authentication framework.


File Structure Analysis Techniques
All digital cameras create files with unique file structures and contents. The file structure is important to investigate because it tells the computer, mobile device, or camera how to process the contents of the file.
The file structure analysis, for the forensic examiner conducting video authentication, involves a comprehensive examination of the file format, the file header, and the hex data within the digital multimedia file. The technique involves analyzing the questioned file format for inconsistencies that may lead the forensic examiner to more conclusive analyses. Inconsistencies found in the file header and hex analysis may provide the forensic examiner more conclusive results [22] [24] [25] [26] [27].
The forensic examiner may want to use one of two in-depth approaches to the file structure analysis: base the analysis upon the published standard or, if the suspected originating recorder is available, use exemplars from the suspected originating recorder for comparison with the questioned file.
File Format
There are many different types of video files in use today. Video files contain chunks of data organized according to the file format, with encoding applied to some chunks according to the video codec used and, if audio is present, the audio codec used. The different digital multimedia files each have a standardized format that defines how data is stored within the file. Examples of these digital multimedia files are 3GP, AVI, MOV, MKV, and MP4. Examples of various video codecs include H.263, H.264, H.265, and MJPEG. Examples of various audio codecs include AAC, WMA, and PCM.
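Several of the containers listed above (MOV, MP4, 3GP) share the ISO Base Media File Format's chunked layout of size-plus-type "boxes." The following Python sketch, which walks a synthetic two-box file rather than a real recording, illustrates how such a structure can be enumerated for documentation:

```python
import struct

def parse_boxes(data: bytes):
    """Walk top-level ISO BMFF (MP4/MOV) boxes: 4-byte big-endian size + 4-byte type."""
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, = struct.unpack(">I", data[offset:offset + 4])
        box_type = data[offset + 4:offset + 8].decode("ascii", errors="replace")
        if size < 8:  # malformed or 64-bit size; stop for this simple sketch
            break
        boxes.append((box_type, size))
        offset += size
    return boxes

# Minimal synthetic file: an 'ftyp' box followed by an empty 'mdat' box.
ftyp = struct.pack(">I", 16) + b"ftypmp42" + b"\x00\x00\x00\x00"
mdat = struct.pack(">I", 8) + b"mdat"
assert parse_boxes(ftyp + mdat) == [("ftyp", 16), ("mdat", 8)]
```

A real examination tool must also handle 64-bit box sizes and nested boxes, which this sketch deliberately omits.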


A technical review of the file format should be made to document information for subsequent analyses. Part of the file format analysis may involve reverse engineering the file format in order to explain how the file could attain its current state. The main areas of the file format analysis include format, codecs, sample rates, bit depth, etc. [22] [24] [25].
A video file today does not just contain video streams and audio streams. The forensic examiner may also find closed caption data. Caption data may use the following non-inclusive listing of formats:
• Web Video Text Track (WEBVTT),
• Consumer Electronics Association (CEA) 608 / 708,
• Distribution Format Exchange Profile (DFXP),
• Timed Text Markup Language (TTML),
• Synchronized Accessible Media Interchange (SAMI).
Video files today also contain metadata.
Cameras / recorders may write metadata in Exchangeable Image File Format (EXIF) or Adobe's Extensible Metadata Platform (XMP) format. Metadata tags may contain information about the recording date, time, camera, and location of recordings. It is important to understand that metadata may be easily changed or deleted. Research has revealed that some social media websites and video sharing websites (e.g., YouTube) typically remove the original camera metadata from videos and images [28] [29] [30].
Header Analysis
Metadata analysis tools extract and interpret file header data and file properties into human readable information. A forensic examiner should compare metadata analysis information to the actual hex data. However, it is more important to mitigate tool error or incompleteness by using two or more tools to cross-verify metadata information. File headers may contain recorder device information such as make, model, firmware version, serial number, and the date, time, and length of the recording.
Hex Analysis
Hex analysis of a video file is important in file structure analysis for locating post-processing artifacts and understanding the presentation of the video, audio, and closed caption information. Additionally, metadata tags not normally recognized by metadata analysis tools may be discovered deep in the hex of a video stream in a digital multimedia file.
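As a simple illustration of surfacing such tags amid hex data, the sketch below (using a hypothetical byte blob and tag name) extracts runs of printable ASCII the way a hex editor's "strings" view would:

```python
def ascii_strings(data: bytes, min_len: int = 4):
    """Return (offset, text) for runs of printable ASCII at least min_len long,
    to surface metadata tags buried deep in a stream's hex data."""
    runs, current, start = [], bytearray(), 0
    for i, b in enumerate(data):
        if 0x20 <= b <= 0x7E:  # printable ASCII range
            if not current:
                start = i
            current.append(b)
        else:
            if len(current) >= min_len:
                runs.append((start, current.decode("ascii")))
            current = bytearray()
    if len(current) >= min_len:
        runs.append((start, current.decode("ascii")))
    return runs

# Hypothetical slice of hex data with an embedded date tag.
blob = b"\x00\x01\xffcreationdate=2019:01:05\x00\x9a\x02ok"
assert ascii_strings(blob) == [(3, "creationdate=2019:01:05")]
```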
As previously noted, the file structure analysis, for the forensic examiner conducting video authentication, involves a comprehensive examination of the file format, the file header, and the hex data within the digital multimedia file. Jake Hall (2015) authored an MPEG-4 file format and metadata analysis methodology for video authentication in his graduate thesis at the University of Colorado Denver [31]. In addition, Scott Anderson (2011) offered a detailed file structure authentication methodology in his graduate thesis, also at the University of Colorado Denver [26].
The file structure analysis method / technique has potential application for the entire analysis perspective of the file. See Table 3 below for the relevant analyses areas.
Table 3 - File Structure Analysis Authentication Method / Technique Relevance

| Method / Technique | Global Analysis | Local Analysis | Device Identification (Class) | Device Identification (Individual) | References |
| File Structure Analysis | Yes | Yes | Yes | Yes | [22] [24] [25] [26] [27] [31] |


Audio Stream(s) & Video Stream(s) Bifurcated Approach
A video file containing both audio stream(s) and video stream(s) should be bifurcated for further analysis. This approach facilitates the forensic examiner separating the file into smaller data sets while leveraging a previous forensic community recognized framework for audio authentication.
Audio Authentication Analysis Tools
The audio stream(s) in the video file should be subjected to a series of smaller testing methods or techniques using a previously published and forensic community recognized framework [22] [24] [25]. This framework may use the methods or techniques noted in the following table to test the audio stream(s) for authenticity.
Table 4 - Audio Stream Authentication Methods / Techniques Relevance

| Method / Technique | Global Analysis | Local Analysis | Device Identification (Class) | Device Identification (Individual) | References |
| Critical Listening | Yes | Yes | No | No | [22] [24] [25] |
| High Resolution Waveform Analysis | Yes | Yes | No | No | [22] [24] [25] |
| Signal Power Analysis | Yes | Yes | No | No | [24] [25] |
| DC Offset | Yes | Yes | Yes | Yes | [22] [24] [25] |
| Long Term Average Spectrum (LTAS) | Yes | Yes | Yes | Yes | [22] [24] [25] |
| LTAS Sorted Spectrum | Yes | Yes | Yes | Yes | [22] [24] [25] |
| Differentiated Sorted Spectrum | Yes | No | No | No | [22] [24] [25] [32] |
| Butt-Splice Detection Analysis | No | Yes | No | No | [22] [24] [25] |
| Interpolation Analysis (Transitions) | No | Yes | No | No | [22] [24] [25] |
| Compression Level Analysis | Yes | No | No | No | [22] [24] [25] |
| Electric Network Frequency (ENF) | Yes | Yes | No | Yes | [22] [24] [25] |
| Phase Continuity (Mono & Stereo) | No | Yes | No | No | [22] [24] [25] |
The Table 4 methods / techniques are not discussed in this paper since they have been covered in-depth in the papers noted in the references.
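As one concrete example from Table 4, DC offset can be summarized as the mean of the audio samples; a recorder's analog chain typically leaves a small, device-characteristic offset. The sketch below uses hypothetical sample values and is a simplification, not the full published procedure:

```python
def dc_offset(samples):
    """DC offset is the mean sample value of the audio stream."""
    return sum(samples) / len(samples)

# Two hypothetical mono segments: one centered on zero, one shifted.
centered = [0.10, -0.10, 0.25, -0.25]
shifted = [s + 0.02 for s in centered]

assert dc_offset(centered) == 0.0
assert abs(dc_offset(shifted) - 0.02) < 1e-9
```

In a forensic workflow the offset of a questioned segment would be compared against exemplar recordings from the suspected device, since the offset is a class/individual characteristic per Table 4.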


Video Authentication Analysis Tools
The framework breaks down the tools into device identification, global analysis, and local analysis. Global analyses address format / structure as well as analyses that produce a plot representing the video as a whole. Local analysis is broken down into temporal and spatial analyses. Temporal analysis compares one frame to the next, or a series of frames to another series of frames. Spatial analysis localizes edits or manipulations accomplished within a frame at the pixel level to reveal removed, cloned, or spliced (from a different source) areas.
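A minimal temporal-analysis sketch (using synthetic 4x4 grayscale frames rather than a real stream) computes the mean absolute difference between consecutive frames; spikes in such a plot flag candidate cut or insertion points for closer local review:

```python
import numpy as np

def temporal_differences(frames):
    """Mean absolute difference between consecutive frames."""
    return [float(np.abs(frames[i + 1].astype(int) - frames[i].astype(int)).mean())
            for i in range(len(frames) - 1)]

# Hypothetical clip: a static scene followed by an abrupt content change.
static = np.full((4, 4), 100, dtype=np.uint8)
changed = np.full((4, 4), 160, dtype=np.uint8)
clip = [static, static, changed]

diffs = temporal_differences(clip)
assert diffs == [0.0, 60.0]  # the spike marks the abrupt transition
```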
The video stream(s) in the video file should be subjected to a series of smaller testing methods or techniques as part of the authentication framework. A non-exclusive listing of potential test methods or techniques is offered, in no specific order, below.
Video Copy / Paste / Editing Detection
Today's society has many free and low-cost video editing software tools. Additionally, many cameras have built-in video editing capabilities, and mobile devices have several third-party applications that allow the user to edit videos easily. The following methods / techniques are potential tools in the forensic examiner's video authentication toolbox.
Detection of cloning / duplicating frames. Wang and Farid (2007) offered a method to detect a common video manipulation technique: the cloning or duplication of frames [33]. These manipulation techniques are commonly used for removing people or objects from a video. Wang and Farid's (2007) detection method involved two techniques: the first detected entire frame duplication, while the second detected portions of a frame duplicated across one or more frames. The method also discusses applying the technique across blocks or segments of a video to be more efficient. Wang and Farid's (2007) detection method could also detect duplicates in both high and low quality compressed video with few false positives [33].
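A heavily simplified first pass at whole-frame duplication can be sketched as below; it catches only exact byte-level duplicates (recompressed video requires the correlation-based comparison Wang and Farid describe), and the frame buffers are hypothetical:

```python
import hashlib

def duplicate_frames(frames):
    """Group identical frame buffers by digest and report (original, copy) pairs."""
    seen, dupes = {}, []
    for idx, frame in enumerate(frames):
        digest = hashlib.sha256(frame).hexdigest()
        if digest in seen:
            dupes.append((seen[digest], idx))
        else:
            seen[digest] = idx
    return dupes

# Hypothetical decoded frame buffers: frame 3 is a verbatim copy of frame 1.
frames = [b"frame-A", b"frame-B", b"frame-C", b"frame-B"]
assert duplicate_frames(frames) == [(1, 3)]
```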
The clone and duplicate frame detection analysis method / technique has potential application for both global and local analysis perspective of the file. See Table 5 below for the relevant analyses areas.
Table 5 - Clone / Duplicate Frame Detection Analysis Authentication Method / Technique Relevance

| Method / Technique | Global Analysis | Local Analysis (Temporal) | Local Analysis (Spatial) | Device Identification (Class) | Device Identification (Individual) | References |
| Clone / Duplicate Frame Detection | No | Yes | Yes | No | No | [33] |
Directional lighting inconsistency detection. Hany Farid's (2006) Significance magazine article discussed how to detect fake digital images. The article presented information about detecting directional lighting inconsistency and presented some published digital images as illustrations [34]. The directional lighting detection methodology was based upon research Farid and Micah Johnson published in their 2005 paper titled "Exposing Digital Forgeries by Detecting Inconsistencies in Lighting." The Farid and Johnson (2005) paper offered an example of a digital image in which two people standing next to each other were composited with different light source directions, revealing inconsistencies that may be used to reveal traces of digital tampering [35].
The directional lighting inconsistency may be detected by visual content review or by using the algorithms noted in the Farid and Johnson (2005) paper. The overall methodology / technique has potential application for both the global and local analysis perspective of the file. See Table 6 below for the relevant analyses areas.


Table 6 - Directional Lighting Inconsistency Detection Analysis Authentication Method / Technique Relevance

| Method / Technique | Global Analysis | Local Analysis (Temporal) | Local Analysis (Spatial) | Device Identification (Class) | Device Identification (Individual) | References |
| Directional Lighting Inconsistency Detection | Yes | Yes | Yes | No | No | [34] [35] |
Local (spatio-temporal) tampering detection. Bestagini, Milani, Tagliasacchi, and Tubaro (2013) offered a method / technique using an algorithm that is able to detect a spatio-temporal region of frames that was replaced with a frame or series of frames from a different time interval, such that the video contains duplicate blocks of data. The algorithm detects the duplicated blocks of data after correlation analysis of the frames. The method has worked with videos at both high and low levels of compression. Bestagini et al. (2013) noted their testing correctly detected duplicate blocks in 90% of the sequences when the video was not re-compressed, and in 87% of the re-compressed videos. They further noted the errors were missed detections (duplicate blocks that went undetected) rather than false detections [36].
The overall methodology / technique has potential application for both global and local analysis perspective of the file. See Table 7 below for the relevant analyses areas.
Table 7 - Local (Spatio-Temporal) Tampering Detection Analysis Authentication Method / Technique Relevance

| Method / Technique | Global Analysis | Local Analysis (Temporal) | Local Analysis (Spatial) | Device Identification (Class) | Device Identification (Individual) | References |
| Local (Spatio-Temporal) Tampering Detection | No | Yes | Yes | No | No | [36] |


Interlaced & Deinterlaced Video Inconsistency Detection
Wang and Farid (2007) proposed two methods for detecting inconsistencies in interlaced and deinterlaced videos in order to detect altered or edited videos. An interlaced video signal contains the two fields of a video frame captured at different times. The video camera records the first half of the video lines at the initial scan (t = 1st scan). The video camera records the second half of the video lines at the second scan (t + 1 = 2nd scan). The interlaced video combines the two scan results to produce the frame [37].
Wang and Farid (2007) noted the motion between the two fields of a single frame and in the surrounding frames should be equal in an interlaced video. They noted that if the video was tampered with, the motion between fields of a single frame and across fields of neighboring frames will reveal inconsistencies [37].
They also offered a model to illustrate the correlation that is introduced by deinterlacing algorithms when software is used on an interlaced video. Wang and Farid (2007) noted tampering can alter these correlations. They also made it a point to note that compression artifacts make it difficult to estimate these deinterlacing correlations. Wang and Farid (2007) recommended the deinterlacing correlation estimate approach be used for high to medium quality videos [37].
In addition, Wang and Farid (2007) suggested the algorithms for the previous techniques could be adapted to detect frame rate conversions that may have occurred post video manipulation. They noted the standard approach to reducing the frame rate is to remove frames to meet the expected frame rate. Wang and Farid (2007) noted this action alters the inter-field and inter-frame motion ratio [37].
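The inter-field / inter-frame motion consistency idea can be illustrated with a toy constant-motion simulation (synthetic fields and a crude mean-absolute-difference motion proxy, not Wang and Farid's actual estimator):

```python
import numpy as np

def motion(a, b):
    """Mean absolute difference, a crude proxy for motion between fields."""
    return float(np.abs(a.astype(int) - b.astype(int)).mean())

def fields(base, m):
    """Synthetic interlaced frame: the bottom field is captured one
    field-interval later, so its content has moved by m relative to the top."""
    top = np.full((2, 4), base, dtype=np.uint8)
    bottom = np.full((2, 4), base + m, dtype=np.uint8)
    return top, bottom

m = 2  # constant motion per field interval
top1, bot1 = fields(100, m)
top2, _ = fields(100 + 2 * m, m)      # authentic next frame
top2_cut, _ = fields(100 + 6 * m, m)  # next frame after frames were deleted

# Authentic interlaced video: within-frame and across-frame field motion agree.
assert motion(top1, bot1) == motion(bot1, top2) == 2.0
# After deletion the across-frame motion jumps, breaking the expected ratio.
assert motion(bot1, top2_cut) == 10.0
```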


The overall methodology / technique has potential application for both global and local analysis perspective of the file. See Table 8 below for the relevant analyses areas.
Table 8 - Interlaced & Deinterlaced Video Inconsistency Detection Analysis Authentication Method / Technique Relevance

| Method / Technique | Global Analysis | Local Analysis (Temporal) | Local Analysis (Spatial) | Device Identification (Class) | Device Identification (Individual) | References |
| Interlaced & Deinterlaced Video Inconsistency Detection | Yes | Yes | No | No | No | [37] |
Video Double-Compression Detection
Detection of double compression in MPEG video. The MPEG video standard uses a Group of Pictures (GOP) containing I (intra) frames (usually the highest quality), P (predictive) frames, and B (bi-directional) frames. The I-frame is a fully intra-coded frame. The P-frame contains only changes from the previous frame. B-frames contain differences between the previous and following frames.
Global analysis technique. Wang and Farid (2006) offered a method to detect doubly compressed MPEG video sequences, as double compression introduces static and temporal statistical deviations in the video stream whose presence indicates evidence of tampering. Wang and Farid (2006) noted that their method using these statistical artifacts made the detection of tampering in doubly-compressed MPEG videos likely [37]. Their method approached the detection of double compression from a global analysis perspective by detecting frame insertion or deletion points. The doubly-compressed I-frame (a double JPEG compression) prior to the frame insertion or deletion point presents a statistical pattern that is observable in the distribution of discrete cosine transform (DCT) coefficients, which may be plotted on a histogram. Additionally, Wang and Farid (2006) revealed that when a P-frame is predicted from a frame that belonged to a different GOP, there was an observable increase in the total prediction error. Wang and Farid (2006) proposed detecting frame deletion or addition by visually inspecting the sequence for a periodic fingerprint. They proposed using a Discrete Fourier Transform (DFT) of the sequence to detect peaks of the periodic fingerprint displayed in a histogram [37].
Local analysis technique. Wang and Farid (2009) offered another double compression detection method for MPEG video sequences that focused on localized analysis, which may detect alterations in 16x16 pixel macroblocks [39]. A key limitation of the localized analysis method is that the second compression must be higher than the original video's compression for detection. In addition, the greater the difference in compression, the higher the detection performance rate (lower false positives). Wang and Farid (2009) noted this method is particularly useful in detecting the fairly common digital effect of green-screening (a process of combining two videos into one) [39].
Wang and Farid (2009) acknowledged that both the global analysis and local analysis techniques of MPEG double compression detection are vulnerable to countermeasures that can hide traces of tampering [38] [39]. However, this is why a series of tests is used in the authentication framework. The MPEG double compression detection analysis method / technique has potential application for both the global and local analysis perspective of the file. See Table 9 below for the relevant analyses areas.
Table 9 - MPEG Double Compression Detection Analysis Authentication Method / Technique Relevance

| Method / Technique | Global Analysis | Local Analysis (Temporal) | Local Analysis (Spatial) | Device Identification (Class) | Device Identification (Individual) | References |
| MPEG Double Compression Detection | Yes [37] | No | Yes [38] [39] | No | No | [37] [38] [39] |


Sensor Noises Detection Methods
Overview. The camera sensor introduces imperfections and noise in images and videos that are not part of the original scene. There is a large body of research in this area that is applicable to video authentication.
Color filter array inconsistency detection. Popescu and Farid (2005) noted in their research on color filter array (CFA) inconsistency detection that the camera sensor captures a single color sample per pixel, and the other two color values are estimated from neighboring samples. Their research offered a methodology to detect tampering, as tampering creates inconsistencies in the correlations across the image / frame [40].
The CFA interpolation may be based upon several demosaicing approaches. The interpolation may use one of the following:
• Bilinear
• Bicubic
• Smooth Hue Transition
• Median Filter
• Gradient Based
• Adaptive Color Plane
• Threshold Based Variable Number of Gradients [40]
Detection of CFA interpolation uses an expectation / maximization (E/M) algorithm, which is a two-step iterative algorithm. The E-step calculates the estimated probability that each sample belongs to each interpolation approach. This step produces a two-dimensional array (probability map), with each entry indicating the similarity of each image pixel to one of the two groups of samples (those correlated with their neighbors and those that are not). The E-step iteration of the algorithm will detect whether the interpolation was a linear approach and whether a region of the image / frame was altered (typically requiring up-sampling). Then the M-step estimates the specific form of the correlations between samples. The M-step estimates the weights (interpolation coefficients), which tell the amount of input each pixel has in the interpolation kernel [40].
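The linear correlation the E/M algorithm estimates can be illustrated with a toy bilinear example (a synthetic single-channel image; this is only the correlation being modeled, not the E/M algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(7)
img = rng.integers(0, 255, size=(6, 6)).astype(float)

# Force every interior pixel on one checkerboard parity to be the bilinear
# (average-of-neighbors) estimate, mimicking CFA-interpolated sites. The
# neighbors of such a site all lie on the opposite parity, so they are
# never modified by this loop.
for y in range(1, 5):
    for x in range(1, 5):
        if (x + y) % 2:
            img[y, x] = (img[y - 1, x] + img[y + 1, x] +
                         img[y, x - 1] + img[y, x + 1]) / 4.0

def interp_residual(im, y, x):
    """Residual against the linear (bilinear) prediction from neighbors."""
    est = (im[y - 1, x] + im[y + 1, x] + im[y, x - 1] + im[y, x + 1]) / 4.0
    return abs(im[y, x] - est)

# Interpolated sites obey the linear correlation exactly...
assert interp_residual(img, 2, 3) == 0.0
# ...until a local edit breaks it, which is what the probability map flags.
img[2, 3] += 25.0
assert interp_residual(img, 2, 3) == 25.0
```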
Popescu and Farid (2005) reported detection accuracies of either 100% or 98% (1 in 50 misclassified), with 0% false positives, across all CFA interpolation techniques tested [40].
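As a simplified illustration only, the E/M iteration can be sketched for a single linear interpolation model. The 3x3 neighborhood, uniform outlier model, and initialization below are simplifying assumptions for this sketch, not Popescu and Farid's exact implementation, which models the CFA mosaic geometry explicitly.

```python
import numpy as np

def em_cfa_probability_map(channel, iters=10, sigma=1.0):
    """Simplified E/M sketch (after Popescu & Farid, 2005): estimate,
    per pixel, the probability that it is a linear combination of its
    8 neighbours (i.e., consistent with CFA interpolation)."""
    h, w = channel.shape
    pad = np.pad(channel.astype(float), 1, mode="edge")
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    # Each row of X holds one pixel's 8 neighbours.
    X = np.stack([pad[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx].ravel()
                  for dy, dx in offsets], axis=1)
    y = channel.astype(float).ravel()
    p0 = 1.0 / (y.max() - y.min() + 1e-9)  # uniform "not interpolated" model
    weights = np.full(8, 1.0 / 8.0)        # initial interpolation kernel
    for _ in range(iters):
        # E-step: posterior probability each sample fits the linear model.
        resid = y - X @ weights
        g = np.exp(-resid**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
        prob = g / (g + p0)
        # M-step: weighted least-squares re-estimate of the kernel weights.
        A = X.T @ (prob[:, None] * X) + 1e-6 * np.eye(8)
        b = X.T @ (prob * y)
        weights = np.linalg.solve(A, b)
        sigma = np.sqrt((prob * (y - X @ weights)**2).sum() / prob.sum()) + 1e-6
    return prob.reshape(h, w)  # high values = consistent with interpolation
```

On a smoothly interpolated region the probability map saturates toward 1; an altered region breaks the neighbor correlations and produces lower values.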
Table 10 - Color Filter Array Analysis Authentication Method Technique Relevance

Method / Technique: Color Filter Array Analysis
Global Analysis: No
Local Analysis - Temporal: No
Local Analysis - Spatial: Yes
Device Identification - Class: No
Device Identification - Individual: No
References: [40]
Source camera identification. The general technique used to perform source camera identification is based upon extracting sensor pattern noise from images or frames left behind in the video by the source camera. The specific pattern noise of interest is PRNU.
Van Houten and Geradts (2009) noted in their research that these sensor pattern noises are unique to each sensor or camera. The sensor pattern noise may be compared to reference patterns from a database of cameras or from a suspect camera. Van Houten and Geradts (2009) also noted that PRNU is present in all images / videos created by CCD or CMOS active pixel sensors and cannot be removed by a layman. Furthermore, CCD and CMOS image sensors are present in a wide range of electronic devices, including:
• mobile phones,
• webcams,
• photo cameras,
• video cameras, and
• image scanners [18].
Sensor pattern noise approach for camera identification. Multiple researchers have conducted studies into sensor pattern noise. Lukas, Fridrich, and Goljan (2006) offered a sensor pattern noise extraction filter and approach that has demonstrated a high degree of reliability in detecting forged images based upon PRNU. Lukas et al. (2006) also examined the stability of sensor pattern noise over a short period of time (one to two years) and noted it to be fairly stable [41] [42].
Sensor pattern noise in videos from YouTube for camera identification. Other researchers have continued to build upon the concept of sensor pattern noise for camera identification. Van Houten and Geradts (2009) used the Lukas et al. (2006) approach to expand source camera identification, focusing on identifying video cameras from multiply compressed videos collected from YouTube. Van Houten and Geradts (2009) noted that by extracting and comparing the sensor noise patterns they could identify the source camera even after the video was uploaded to YouTube, where the added layer of compression further degraded the sensor noise. Their research indicated they were able to correctly identify the source camera even after two or three layers of compression were applied. Van Houten and Geradts (2009) also identified limitations to their approach: changing the aspect ratio or resizing the input video was detrimental to the sensor noise, and this could prevent accurate identification [18].
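The extract-average-correlate idea behind PRNU camera identification can be sketched as follows. This is an illustrative sketch: a box-mean filter stands in for the wavelet-based denoising filter used in the published work, and the function names are the author's own.

```python
import numpy as np

def noise_residual(frame, k=3):
    """Sensor noise residual: frame minus a denoised copy.
    (A k x k box-mean filter stands in for a wavelet filter.)"""
    h, w = frame.shape
    pad = np.pad(frame.astype(float), k // 2, mode="edge")
    den = sum(pad[dy:dy + h, dx:dx + w]
              for dy in range(k) for dx in range(k))
    return frame.astype(float) - den / (k * k)

def prnu_fingerprint(frames):
    """Average many frames' residuals: scene content and shot noise
    average out, the fixed sensor pattern (PRNU) remains."""
    return np.mean([noise_residual(f) for f in frames], axis=0)

def correlate(a, b):
    """Normalised correlation between a questioned fingerprint and a
    camera's reference pattern."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

In use, the questioned video's fingerprint is compared against reference patterns from candidate cameras; the highest correlation indicates the likely source.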
Table 11 - Source Video Camera Identification Using PRNU Detection Method For Authentication Relevance

Method / Technique: PRNU Detection Method For Camera ID
Global Analysis: No
Local Analysis - Temporal: No
Local Analysis - Spatial: No
Device Identification - Class: Yes
Device Identification - Individual: Yes
References: [18] [41] [42]
G-PRNU & Resizing Images For Camera Identification
Al-Athamneh, Kurugollu, Crookes, and Farid (2018) noted in their research that after an exposure time of 0.15 seconds the green channel is the noisiest of the three RGB color channels. As a result, they proposed a new method for digital video source identification focusing on the green channel of PRNU [43].
Al-Athamneh et al. (2018) resized the images to a standard size of 512x512. They also conducted research into the best interpolation method and resize dimension for use in their proposed method [43]. Their results in this area are presented in the following table.
Table 12 - Source Camera Successful Identification Rates (%) Using Different Interpolations & Dimensions [43]

Interpolation    64x64    128x128    256x256    512x512    640x640
Bicubic          76.51    82.53      88.55      92.77      92.17
Bilinear         72.29    82.53      87.35      99.15      93.37
Nearest          71.69    80.72      85.96      88.55      79.52
Al-Athamneh et al. (2018) concluded that bilinear interpolation at 512x512 offered the optimal settings for their proposed method [43].
Al-Athamneh et al. (2018) also tested 2-D correlation coefficient detection to match each of the 236 test videos against six video references using plain PRNU, G-PRNU (green-channel PRNU) only, and G-PRNU with frames resized to 512x512 using bilinear interpolation. They used six different cameras, with both CMOS and CCD sensors, and videos in .MOV, .AVI, and .MP4 formats [43]. The source camera identification rates reported by Al-Athamneh et al. (2018) are presented in the following table.
Table 13 - Source Camera Identification Rates [43]
Camera PRNU G-PRNU G-PRNU With Bilinear Interpolation
Cl 15% 97.5% 100%
C2 36.58% 95.12% 97.56%
C3 25.8% 96.77% 97.56%
C4 37.83% 100% 100%
C5 26.47% 100% 100%
C6 95.34% 97.67% 100%
Total Average 41.15% 97.79% 99.15%
Al-Athamneh et al. (2018) research indicated that G-PRNU with bilinear interpolation correctly determined the source of 234 of the 236 videos, a detection rate of 99.15% [43]. The Al-Athamneh et al. (2018) proposed method is noted below:
1. Extract the green channel frames from the video (350 frames per video).
2. Resize the extracted frames to 512x512 using bilinear interpolation.
3. Perform wavelet-based de-noising on the green channel frames.
4. Create the G-PRNU map for the video by averaging the results of step 3.
5. Create a reference by performing steps 1-4 on 9 videos captured by the same camera.
6. Use the 2-D correlation coefficient as the camera detection test [43].
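Steps 1-6 above can be sketched as below. The bilinear resize is written out explicitly; a 3x3 box-mean filter again stands in for the wavelet-based de-noising of step 3, so this is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def resize_bilinear(img, size=512):
    """Step 2: bilinear resize to a common size (512x512 in the
    paper; smaller sizes work the same way)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, size)
    xs = np.linspace(0, w - 1, size)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    fy = (ys - y0)[:, None]; fx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - fx) + img[np.ix_(y0, x1)] * fx
    bot = img[np.ix_(y1, x0)] * (1 - fx) + img[np.ix_(y1, x1)] * fx
    return top * (1 - fy) + bot * fy

def g_prnu(rgb_frames, size=64):
    """Steps 1-4: green channel -> resize -> de-noise residual ->
    average. (Box-mean filter stands in for wavelet de-noising.)"""
    residuals = []
    for f in rgb_frames:
        g = resize_bilinear(f[:, :, 1].astype(float), size)
        pad = np.pad(g, 1, mode="edge")
        den = sum(pad[dy:dy + size, dx:dx + size]
                  for dy in range(3) for dx in range(3)) / 9.0
        residuals.append(g - den)
    return np.mean(residuals, axis=0)

def corr2(a, b):
    """Step 6: 2-D correlation coefficient as the detection test."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

A reference is built from several known videos per camera (step 5); the questioned video's G-PRNU map is then matched against each reference with `corr2`.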
Table 14 - G-PRNU & Image Resize For Camera Identification Detection Method For Authentication Relevance

Method / Technique: G-PRNU & Image Resize For Camera ID
Global Analysis: No
Local Analysis - Temporal: No
Local Analysis - Spatial: No
Device Identification - Class: Yes
Device Identification - Individual: Yes
References: [43]
Block Level Manipulation Detection Method
Hsu, Hung, Lin, and Hsu (2008) proposed the use of temporal correlation of block-level pattern noise to locate tampered regions of videos. Their method used models
based upon the distributions of temporal sensor noise correlation values of video blocks in tampered regions and normal regions, modeled with a Gaussian mixture model (GMM). Hsu et al. (2008) first obtain the sensor pattern noise of each frame by subtracting a noise-free version of the frame, produced with a wavelet denoising filter, from the original frame. Their method then partitions each video frame into non-overlapping blocks of size NxN and correlates the noise residuals between the same spatially indexed blocks of two successive frames. Finally, the method locates tampered blocks by analyzing the statistical properties of the block-level PRNU correlations [44].
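The block-level correlation step can be sketched as follows. This is an illustrative sketch: a box-mean filter stands in for the wavelet denoising filter, and the GMM-based statistical modeling of the correlation values is omitted in favor of inspecting the raw correlation map.

```python
import numpy as np

def noise_residual(frame, k=3):
    """Frame minus a denoised copy (box-mean stands in for the
    wavelet filter used by Hsu et al., 2008)."""
    h, w = frame.shape
    pad = np.pad(frame.astype(float), k // 2, mode="edge")
    den = sum(pad[dy:dy + h, dx:dx + w]
              for dy in range(k) for dx in range(k))
    return frame.astype(float) - den / (k * k)

def block_correlations(frame_a, frame_b, n=16):
    """Correlate co-located n x n residual blocks of two successive
    frames. Genuine blocks share the sensor's PRNU and correlate;
    pasted-in blocks lose it and show abnormally low correlation."""
    ra, rb = noise_residual(frame_a), noise_residual(frame_b)
    h, w = ra.shape
    out = np.zeros((h // n, w // n))
    for by in range(h // n):
        for bx in range(w // n):
            a = ra[by * n:(by + 1) * n, bx * n:(bx + 1) * n].ravel()
            b = rb[by * n:(by + 1) * n, bx * n:(bx + 1) * n].ravel()
            a = a - a.mean(); b = b - b.mean()
            out[by, bx] = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return out
```

Blocks whose correlation falls well below the rest of the frame are candidate tampered regions.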
Table 15 - Block Level Manipulation Detection Method For Authentication Relevance

Method / Technique: Block Level Manipulation Detection
Global Analysis: No
Local Analysis - Temporal: No
Local Analysis - Spatial: Yes
Device Identification - Class: No
Device Identification - Individual: No
References: [44]
Pixel Level Manipulation Detection Method
Li, Wang, and Xu (2018) proposed using 2D phase congruency with correlation coefficient analysis of adjacent frames to detect pixel tampering. The proposed method measures the inter-frame continuity of the content of each frame and may be applied in both global analysis and local analysis [45].
Table 16 - 2D Phase Congruency CC Detection Method For Authentication Relevance

Method / Technique: 2D Phase Congruency Correlation Coefficient Analysis
Global Analysis: Yes
Local Analysis - Temporal: Yes
Local Analysis - Spatial: No
Device Identification - Class: No
Device Identification - Individual: No
References: [45]
Testing of Methods & Proposed Framework
The following section offers case studies of use of the proposed digital video authentication framework.
Adding New Method To Video Authentication Toolbox - Case Study 1
Case Study 1 illustrates the use of the proposed method evaluation tool for the forensic video examiner to validate a new method for their methodology toolbox. The process includes examiner validation of the proposed method.
The method evaluation tool report addressed the potential use of the multimedia stream hash validation method [20] [65]. The evaluation contained two tests or assessments. The first was a validation test using SWGDE tool validation testing guidance [23]. The second was an admissibility assessment of the proposed method.
The method validation testing assessed the proposed multimedia stream hash validation method for the following:
1) reproducibility, repeatability, accuracy, and precision;
2) use for its intended purpose; and
3) method performance as expected.
The test involved two test scenarios. The specific tests involved the following steps.
1) Hash multimedia streams (both audio and video) in test data set. This is test preparation and establishes the original hashes for subsequent comparison.
2) Forensically copy each test data set’s audio stream to a wave PCM audio digital multimedia file. Test scenario #1.
3) Hash the audio stream in each derivative wave PCM audio digital multimedia file. Test scenario #1.
4) Analyze results of test scenario #1.
5) Forensically copy each test data set’s video stream to a lossless MP4 digital multimedia file. Test scenario #2
6) Hash the video stream in each derivative lossless MP4 digital multimedia file. Test scenario #2.
7) Analyze results of test scenario #2.
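The principle behind these steps, hashing the encoded stream payload rather than the whole file, can be illustrated with a toy example. The container byte layouts below are invented for illustration; in practice, a tool such as FFmpeg's streamhash muxer computes per-stream hashes of real multimedia files.

```python
import hashlib

def sha256_hex(data):
    """SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Stand-in audio payload (pretend these bytes are PCM samples).
audio_payload = bytes(range(256)) * 4

# The same payload wrapped in two invented 8-byte container headers.
container_a = b"FTYPDEMO" + audio_payload  # hypothetical container A
container_b = b"RIFFDEMO" + audio_payload  # hypothetical container B

# File-level hashes differ because the containers differ...
file_a, file_b = sha256_hex(container_a), sha256_hex(container_b)

# ...but hashes of the extracted stream payloads match, confirming
# the payload survived the (lossless) re-wrap intact.
stream_a = sha256_hex(container_a[8:])
stream_b = sha256_hex(container_b[8:])
```

Matching stream hashes before and after transcoding is what demonstrates that a forensic copy of the audio or video stream is bit-for-bit identical to the original stream.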
The admissibility assessment evaluated the following:
1) Has the method been tested?
2) Has the technique / method been subjected to peer review & publication?
3) What is the error rate of the theory or is an error mitigation method implemented?
4) Is the technique / method accepted in the forensic science community?
5) What are the standards controlling the use of the technique / method?
The test results demonstrated that the multimedia stream hash validation method, as it relates to video and audio streams, was a viable method for use in the video authentication process when transcoding video and audio streams for further authentication. The method was added to the author's video authentication method toolbox for use within the limitations noted in the method validation testing report. See Appendix B for case study 1.
Clone Alteration Test Videos - Case Study 2
Clone alteration, copying one region of a frame and pasting it into the same or a different region across multiple frames, is a very popular method of adding people or objects to a video, or removing them, with the intent of manipulating the viewer's perspective. This case study focuses on this type of manipulation, where the goal is to change how the viewer would interpret the imaged events compared to how they actually happened. See Appendix C for case study 2.
Case study #2 involved testing the proposed video authentication framework against four videos known to have local clone alterations. The test videos were the same videos used by Hsu et al. (2008) in their block level manipulation detection paper. All four videos had local clone alterations. One video also used an example-based texture synthesis technique along with the clone technique. In addition, one of the videos involved a panning camera while using the clone technique to remove a person walking in one direction and a car passing in the background in the opposite direction [44].
Examination of all four test videos (known to have tampered regions) using the proposed video authentication framework revealed that the file structure analysis detected the alteration of the videos with a known video processing tool. The framework would have allowed the forensic video examiner to opt out of detecting the precise tampered regions by using the workflow optimization option. However, the tests were designed to continue without using the workflow optimization options. Further examination of all four video streams resulted in accurate and precise detection of the tampered regions within each frame of all four videos.
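A temporal difference filter of the kind applied throughout this case study can be sketched as a per-pixel absolute difference between successive frames; the threshold value below is an arbitrary illustration.

```python
import numpy as np

def temporal_difference(frame_a, frame_b, thresh=0.1):
    """Per-pixel absolute difference between two successive frames.
    Regions where moving content was cloned away (or pasted in)
    differ from the motion pattern of the genuine scene and appear
    as distinct boxes in the difference image."""
    diff = np.abs(frame_b.astype(float) - frame_a.astype(float))
    return diff, diff > thresh
```

Viewing the difference images frame by frame is what produces the greyish boxes over the clone-altered regions shown in the figures that follow.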
Test Video #1 Frame Tampering Summary
See figure below for test video #1 validation of test results.
[Figure: frames 0, 25, 50, and 75 of test video 1, in three rows - tampered video (top), temporal difference filter output (middle), original video (bottom)]

Figure 11 - Case Study 2 Test 1 Test Validation Results

The figure above provides a summary of frames 0, 25, 50, & 75 from test video 1. The top row presents the respective frame contents visually for the tampered video. The middle row contains the respective frames after a temporal difference filter was applied to the video; each of the middle row frames contains greyish boxes marking the clone-altered regions. The bottom row offers the respective frame contents from the original video and shows the walking person who was removed from the frames of the tampered video in the top row. The detected tampered regions in the middle row correlate directly with the original video frames of the walking person.

Test Video #2 Frame Tampering Summary
See figure below for test video #2 validation of test results.
[Figure: frames 0, 25, 50, and 75 of test video 2, in three rows - tampered video (top), temporal difference filter output (middle), original video (bottom)]

Figure 12 - Case Study 2 Test 2 Test Validation Results

The figure above provides a summary of frames 0, 25, 50, & 75 from test video 2. The top row presents the respective frame contents visually for the tampered video. The middle row contains the respective frames after a temporal difference filter was applied to the video; each of the middle row frames contains greyish boxes marking the clone-altered regions. The bottom row offers the respective frame contents from the original video and shows the walking person who was removed from the frames of the tampered video in the top row. The detected tampered regions in the middle row correlate directly with the original video frames of the walking person.

Test Video #3 Frame Tampering Summary
See figure below for test video #3 validation of test results.
[Figure: frames 0, 25, 50, and 75 of test video 3, in three rows - tampered video (top), temporal difference filter output (middle), original video (bottom)]

Figure 13 - Case Study 2 Test 3 Test Validation Results

The figure above provides a summary of frames 0, 25, 50, & 75 from test video 3. The top row presents the respective frame contents visually for the tampered video. The middle row contains the respective frames after a temporal difference filter was applied to the video; each of the middle row frames contains red circles highlighting differences in the greyish areas for the clone-altered regions. The bottom row offers the respective frame contents from the original video and shows the walking person who was removed from the frames of the tampered video in the top row. The tampered regions in the middle row correlate directly with the original video frames of the walking person. A major difference in this test from the other tests in case study #2 was that the camera was panning right while one tampered region of interest moved with the camera pan and a smaller background entity moved across the screen to the left. The camera movement caused the temporal difference filter to present the examiner much less contrast in the detected tampered regions than found in the fixed-camera test videos (1, 2, & 4).
Test Video #4 Frame Tampering Summary
See figure below for test video #4 validation of test results.
[Figure: frames 0, 25, 50, and 75 of test video 4, in three rows - tampered video (top), temporal difference filter output (middle), original video (bottom)]

Figure 14 - Case Study 2 Test 4 Test Validation Results

The figure above provides a summary of frames 0, 25, 50, & 75 from test video 4. The top row presents the respective frame contents visually for the tampered video. The middle row contains the respective frames after a temporal difference filter was applied to the video; each of the middle row frames contains greyish boxes marking the clone-altered regions. The bottom row offers the respective frame contents from the original video and shows the walking person who was removed from the frames of the tampered video in the top row. The detected tampered regions in the middle row correlate directly with the original video frames of the walking person.
Case Study #2 illustrates that the framework may be used as a structured process for executing video authentication methods from a forensic video examiner's methodology toolbox for accurate and precise detection of video manipulation.
Axon Fleet 2 Camera Video - Case Study 3
Law enforcement today uses body cameras and video cameras mounted in patrol vehicles to document events. Axon is a major provider of law enforcement video cameras. The cameras may be activated by the law enforcement officer to record an event, but Axon video cameras also keep a pre-event buffer recording of 30 seconds by default. Both the event recording and the pre-event recording are important for documenting all events at the scene, and the integrity of both portions of the video is important to the legal system to protect all parties. See Appendix D for case study 3.
Case study #3 tests the proposed video authentication framework against an Axon Fleet 2 camera video. Working copies of the video were edited to remove both a large number of frames and a small number of frames. The video editing occurred in both the pre-event buffer recording area and the event recording area to simulate someone tampering with the video to hide part of the overall event. Three test videos were created for case study #3.
Examination of the three test videos (known to have tampered / spliced regions) using the proposed video authentication framework revealed that the file structure analysis detected the alteration of the videos with a known video processing tool. The framework would have allowed the forensic video examiner to opt out of detecting the precise tampered regions by using the workflow optimization option. However, the tests were designed to continue without using the workflow optimization options. Further examination of all three test video streams resulted in accurate and precise detection of the spliced areas of all three videos.
Test Video #1 Frame Tampering Summary
The first video had frames 600-700 deleted from the video using Adobe Premiere software. This area of tampering was the pre-event buffer area of the recording. The file structure analysis detected artifacts of Adobe Premiere software use.
Global analysis. A global analysis of the overall video file did not reveal any inconsistencies.
Local analysis. A temporal analysis of the video file revealed pixel-level inconsistencies. See the figure below for the local analysis findings of pixel-level irregularities.
Figure 15 - Temporal Analysis of Y Plane Revealing Current Frame Versus Preceding Frame Differences
The figure above illustrates a temporal analysis of the Y plane of each current frame against the preceding frame, which revealed a major visual change from one frame to the next between frame 598 and frame 600.
The local analysis also included a visual content analysis. See figure below for the results of the local analysis’s visual content analysis.
[Figure: frames 598 (P-frame), 599 (B-frame), and 600 (B-frame) shown with a temporal difference filter applied]

Figure 16 - Comparison of Frame 598, 599, & 600 Visual Content Using Temporal Difference Filter
Visual analysis of frames 598, 599, & 600 using a temporal difference filter between frames revealed a significant visual change at frame 599 as noted above.
In addition, see figure below for local pixel analysis results.
Figure 17 - 2D Phase Congruency With Correlation Coefficient of Adjacent Frames

The figure above illustrates the results of using 2D phase congruency with correlation coefficient on adjacent frames of the video. The noted spike in the histogram at frame 599 in the
video was at the same location noted in the local pixel-level analysis and the visual anomaly analysis above.
Test Video #2 Frame Tampering Summary
Test video #2 had only two frames deleted from the video using Adobe Premiere software. This area of tampering was in the event area of the recording. Frames 1075 and 1076 were removed from the video stream. A file structure analysis of the edited video detected artifacts of Adobe Premiere software use.
Global analysis. A global analysis of the overall video file did not reveal any inconsistencies.
Local analysis. A temporal analysis of the video file did not reveal pixel-level inconsistencies. See the figure below for the results of this portion of the local analysis.
Figure 18 - Temporal Analysis of Y Plane Revealing Current Frame Versus Preceding Frame Differences
The figure above illustrates a temporal analysis of the Y plane of each current frame against the preceding frame. The subtle differences in the video between frames 1074 and 1076 were not detected in this analysis, unlike the obvious change noted in Figure 15 above.
Local analysis. A local analysis included visual content analysis. See the figure below for the results of the local analysis's visual content analysis.
[Figure: frames 1074 and 1075 shown side by side]
Figure 19 - Comparison of Frame 1074 & 1075 Visual Content
Visual inconsistencies were very minor when comparing frame 1074 to frame 1075. Two frames were removed, but without the local pixel manipulation analysis shown in Figure 20 below, the subtle inconsistency would probably go undetected. See the figure below for the local pixel analysis results.
Figure 20 - 2D Phase Congruency With Correlation Coefficient of Adjacent Frames
The figure above illustrates the results of using 2D phase congruency with correlation coefficient on adjacent frames of the video. The analysis detected a spike in the histogram at frame 1075, consistent with the tampered area of the video.
Test Video #3 Frame Tampering Summary
Test video #3 had the entire pre-event buffer recording deleted from the video using Shotcut, an open-source video editor. This left only the event area of the recording present in the tampered video. The file structure analysis revealed the presence of the Lavf58.20.100 encoder rather than the Ambarella Advanced Video Coding noted in original videos from Axon Fleet 2 cameras.
Global analysis. A global analysis of the overall video file did not reveal any inconsistencies.
Local analysis. A local analysis included visual content analysis. The visual content analysis revealed no inconsistencies in the frames, but noticeably absent was the pre-event buffer video stream that contains 30 seconds of video content recorded by default prior to the law enforcement officer activating the camera to record the event. In addition, 2D phase congruency with correlation coefficient on adjacent frames of the video revealed no alterations.
Case Study #3 illustrates that the framework may be used as a structured process for executing video authentication methods from a forensic video examiner's methodology toolbox for accurate and precise detection of video manipulation.
Proposed Framework Overall Test Results
Testing of the proposed framework in video authentication of known data sets has produced results consistent with the test hypotheses. Testing has illustrated that the framework may be used as a structured process for executing video authentication methods from a forensic video examiner's methodology toolbox for accurate and precise detection of video manipulation.
CHAPTER VI
CONCLUSION
The proposed framework offers a structured approach to assess and use forensic science community accepted video and audio authentication methods. The proposed framework incorporates methods or techniques that are evaluated for reproducibility, repeatability, accuracy, and precision while meeting the general legal requirements recognized by courts in the U.S. and many other countries around the world.
The framework has a built-in methodology evaluation tool. The methodology evaluation tool includes a methodology validation assessment and a legal assessment to aid the user in determining whether a proposed method should be included in or excluded from the specific framework protocol for each video file considered for authentication. Testing of the proposed framework's assessment processes and its use in video authentication of known data sets has produced results consistent with the test hypotheses. The proposed framework also offers the forensic video examiner a methodology to assess published video and audio authentication techniques recognized in the forensic science community while using generally accepted criteria to test and evaluate the techniques as expected by the courts.
However, there are limitations. Acceptance of the proposed framework for video authentication by the courts will always be decided on a case-by-case basis, dependent upon each case's facts, proper use of the scientific methods, and the overall experience, training, and knowledge of the forensic video examiner who testifies as an expert. The proposed framework is intended for digital video and is not applicable to analog video. Finally, methods newly developed in the deep learning and computer vision communities may be incorporated into the framework as they are published and validated.
REFERENCES
[1] Saferstein, R. (2007). Criminalistics: an introduction to forensic science (9th ed.). Upper Saddle River, NJ: Prentice Hall.
[2] Casey, E. (2000). Digital Evidence and Computer Crime (1st ed.). London: Academic Press.
[3] Legal Information Institute Staff - Cornell Law School. (2011, December 01). Rule 401. Test for Relevant Evidence. Retrieved December 5, 2018, from https://www.law.cornell.edu/rules/fre/rule_401
[4] Legal Information Institute Staff - Cornell Law School. (2011, December 05). Rule 901. Authenticating or Identifying Evidence. Retrieved December 5, 2018, from https://www.law.cornell.edu/rules/fre/rule_901
[5] Scientific Working Group on Digital Evidence (SWGDE). (2016, June 3). SWGDE Digital & Multimedia Evidence Glossary, v 3. Retrieved January 8, 2019, from https://www.swgde.org/documents/Current Documents/SWGDE Digital and Multimedia Evidence Glossary.
[6] Fowler, G. A. (2018, October 18). I fell for Facebook fake news. Here's why millions of you did, too. Retrieved January 8, 2019, from https://www.washingtonpost.com/technology/2018/10/18/i-fell-facebook-fake-news-heres-why-millions-you-did-too/?utm_term=.464a38303f84
[7] YouTube Video From MeniThings. (2017, June 14). Extreme Crosswind \ Airliner spins 360. Retrieved January 8, 2019, from https://youtu.be/AgvzhJpynlO
[8] YouTube Video From Jeff Smith. (2018, December 23). No Title. Retrieved January 9, 2019, from https://youtu.be/B0ZG9IUCD3k
[9] Apple Inc. (2018, November 08). Using HEIF or HEVC media on Apple devices. Retrieved January 10, 2019, from https://support.apple.com/en-us/HT207022
[10] Apple Inc. (2018, September 19). Take and edit Live Photos. Retrieved January 9, 2019, from https://support.apple.com/en-us/HT207310
[11] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 113 S. Ct. 2786, 125 L. Ed. 2d 469 (1993).
[12] Oxford University Press. (n.d.). Scientific method | Definition of scientific method in English by Oxford Dictionaries. Retrieved January 13, 2019, from https://en.oxforddictionaries.com/definition/scientific_method
[13] Scientific Working Group Digital Evidence. (2018, November 20). SWGDE Establishing Confidence in Digital and Multimedia Evidence Forensic Results by Error Mitigation Analysis. Retrieved January 19, 2019, from https://swgde.org/documents/Current Documents/SWGDE Establishing Confidence in Digital Forensic Results by Error Mitigation Analysis
[14] Gow, R. D., Renshaw, D., Findlater, K., Grant, L., McLeod, S. J., Hart, J., & Nicol, R. L. (2007). A comprehensive tool for modeling CMOS image-sensor-noise performance. IEEE Transactions on Electron Devices, 54(6), 1321-1329.
doi:10.1109/TED.2007.896718.
[15] Irie, K., McKinnon, A. E., Unsworth, K., & Woodhead, I. M. (2008). A model for measurement of noise in CCD digital-video cameras. Measurement Science and Technology, 19(4), 045207. doi: 10.1088/0957-0233/19/4/045207.
[16] Irie, K., McKinnon, A. E., Unsworth, K., & Woodhead, I. M. (2008). A technique for evaluation of CCD video-camera noise. IEEE Transactions on Circuits and Systems for Video Technology, 18(2), 280-284.
[17] Hytti, H. T. (2006, January). Characterization of digital image noise properties based on RAW data. In Image Quality and System Performance III (Vol. 6059, p. 60590A). International Society for Optics and Photonics.
[18] Van Houten, W., & Geradts, Z. (2009). Source video camera identification for multiply compressed videos originating from YouTube. Digital Investigation, 6(1), 48-60. doi:10.1016/j.diin.2009.05.003
[19] Ishida, H., Kagawa, K., Komuro, T., Zhang, B., Seo, M., Takasawa, T., Kawahito, S. (2018). Multi-aperture-based probabilistic noise reduction of random telegraph signal noise and photon shot noise in semi-photon-counting complementary-metal-oxide-semiconductor image sensor. Sensors (Basel, Switzerland), 18(4), 977. doi:10.3390/s18040977
[20] Whitecotton, C. M. (2017). YouTube: Recompression Effects. University of Colorado at Denver.
[21] Grigoras, C. (2017). Forensic Audio Authentication. Retrieved from University of Colorado Denver Canvas - Forensic Audio Analysis course material.
[22] Grigoras, C., Rappaport, D., and Smith, J. (2012). Analytical Framework for Digital Audio Authentication. Presented at AES 46th International Conference.
[23] Scientific Working Group Digital Evidence. (2014, September 5). SWGDE Recommended Guidelines for Validation Testing, V2. Retrieved February 5, 2019, from https://www.swgde.org/documents/Current Documents/SWGDE Recommended Guidelines for Validation Testing.
[24] Rappaport, D. L. (2012). Establishing a standard for digital audio authenticity: A critical analysis of tools, methodologies, and challenges. University of Colorado at Denver.
[25] Scientific Working Group Digital Evidence. (2018, September 20). SWGDE Best Practices for Digital Audio Authentication v 1.3. Retrieved February 8, 2019, from https://www.swgde.org/documents/Current Documents/SWGDE Best Practices for Digital Audio Authentication
[26] Anderson, S. D. (2011). Digital image analysis: Analytical framework for authenticating digital images (University of Colorado Denver).
[27] Grigoras, C., & Smith, J. (2013, February 21). Digital Imaging: Enhancement and Authentication. Retrieved from
https://www.sciencedirect.com/science/article/pii/B9780123821652001276
[28] Wolanin, J. E. (2018). Analysis of Facebook's Video Encoders (University of Colorado at Denver).
[29] Lawson, M. (2017). A Forensic Analysis of Digital Image Characteristics Associated to Flickr and Google Plus (University of Colorado Denver).
[30] Giammarrusco, Z. P. (2014). Source identification of high definition videos: A forensic analysis of downloaders and YouTube video compression using a group of action cameras. University of Colorado at Denver.
[31] Hall, J. R. (2015). MPEG-4 video authentication using file structure and metadata (University of Colorado Denver).
[32] B.E. Koenig and D.S. Lacey, “Forensic Authentication of Digital Audio Recordings.” J. Audio Eng. Soc., Vol 57, No. 9, 2009 Sept.
[33] Wang, W., & Farid, H. (2007, September). Exposing digital forgeries in video by detecting duplication. In Proceedings of the 9th workshop on Multimedia & security (pp. 35-42). ACM.
[34] Farid, H. (2006). Digital doctoring: How to tell the real from the fake. Significance, 3(4), 162-166. doi:10.1111/j.1740-9713.2006.00197
[35] Johnson, M., & Farid, H. (2005). Exposing digital forgeries by detecting inconsistencies in lighting. In Proceedings of the 7th Workshop on Multimedia and Security (pp. 1-10). doi:10.1145/1073170.1073171
[36] Bestagini, P., Milani, S., Tagliasacchi, M., & Tubaro, S. (2013). Local tampering detection in video sequences. In 2013 IEEE 15th International Workshop on Multimedia Signal Processing (MMSP) (pp. 488-493). doi:10.1109/MMSP.2013.6659337
[37] Wang, W., & Farid, H. (2007). Exposing digital forgeries in interlaced and deinterlaced video. IEEE Transactions on Information Forensics and Security, 2(3), 438-449. doi:10.1109/TIFS.2007.902661
[38] Wang, W., & Farid, H. (2006, September). Exposing digital forgeries in video by detecting double MPEG compression. In Proceedings of the 8th workshop on Multimedia and security (pp. 37-47). ACM.
[39] Wang, W., & Farid, H. (2009, September). Exposing digital forgeries in video by detecting double quantization. In Proceedings of the 11th ACM workshop on Multimedia and security (pp. 39-48). ACM.
[40] Popescu, A. C., & Farid, H. (2005). Exposing digital forgeries in color filter array interpolated images. IEEE Transactions on Signal Processing, 53(10), 3948-3959. doi: 10.1109/TSP.2005.855406
[41] Lukas, J., Fridrich, J., & Goljan, M. (2006). Digital camera identification from sensor pattern noise. IEEE Transactions on Information Forensics and Security, 1(2), 205-214. doi:10.1109/TIFS.2006.873602
[42] Lukas, J., Fridrich, J., & Goljan, M. (2006). Detecting digital image forgeries using sensor pattern noise. In Proceedings of SPIE, 6072(1), 60720Y. doi:10.1117/12.640109
[43] Al-Athamneh, M., Kurugollu, F., Crookes, D., & Farid, M. (2016). Digital video source identification based on green-channel photo response non-uniformity (G-PRNU).
[44] Hsu, C. C., Hung, T. Y., Lin, C. W., & Hsu, C. T. Video forgery detection using sensor pattern noise. In 2008 21st Conference on Computer Vision, Graphics and Image Processing.
[45] Li, Q., Wang, R., & Xu, D. (2018). An Inter-Frame Forgery Detection Algorithm for Surveillance Video. Information, 9(12), 301.
[46] Lorraine v. Markel American Insurance Company, 241 F.R.D. 534 (D.Md. May 4, 2007)
[47] Grimm, P. W., Ziccardi, M. V., & Major, A. W. (2009). Back to the future: Lorraine v. Markel American Insurance Co. and new findings on the admissibility of electronically stored information. Akron L. Rev., 42, 357.
[48] UNITED STATES OF AMERICA, v. KENNETH PETTWAY, JR., Defendant., 2018 U.S. Dist. LEXIS 176848, 2018 WL 4958962 (United States District Court for the Western District of New York October 15, 2018, Filed). Retrieved from https://advance-lexis-com.aurarialibrary.idm.oclc.org/api/document?collection=cases&id=urn:contentItem:5TGW-F2W1-FGCG-S01B-00000-00&context=1516831.
[49] UNITED STATES OF AMERICA, Plaintiff - Appellee, v. MATTHEW LANE DURHAM, Defendant - Appellant., 902 F.3d 1180, 2018 U.S. App. LEXIS 24546 (United States Court of Appeals for the Tenth Circuit August 29, 2018, Filed). Retrieved from https://advance-lexis-com.aurarialibrary.idm.oclc.org/api/document?collection=cases&id=urn:contentItem:5T4V-BCT1-JFDC-X4XW-00000-00&context=1516831.
[50] VIRGINIA NESTER, Individually and As Next Friend of C.N. and S.N., minors; ROBERT SCOTT NESTER, Individually and As Next Friend of C.N. and S.N., minors, Plaintiffs - Appellees v. TEXTRON, INCORPORATED, doing business as E-Z-GO, Defendant - Appellant, 888 F.3d 151, 2018 U.S. App. LEXIS 9783, CCH Prod. Liab. Rep. P20,339, 2018 WL 1835816 (United States Court of Appeals for the Fifth Circuit April 18, 2018, Filed). Retrieved from https://advance-lexis-com.aurarialibrary.idm.oclc.org/api/document?collection=cases&id=urn:contentItem:5S4K-8MM1-JJSF-20C2-00000-00&context=1516831.
[51] MARIO COSTELLO, Petitioner, -against- THOMAS GRIFFIN, Superintendent, Greenhaven Correctional Facility, Respondent., 2018 U.S. Dist. LEXIS 202412 (United States District Court for the Eastern District of New York November 28, 2018, Filed). Retrieved from https://advance-lexis-com.aurarialibrary.idm.oclc.org/api/document?collection=cases&id=urn:contentItem:5TVD-5YY1-JFDC-X4M1-00000-00&context=1516831.
[52] JOSE RAYMOND COLON, Petitioner, v. SECRETARY, DEPARTMENT OF CORRECTIONS, Respondent., 2018 U.S. Dist. LEXIS 152747, 2018 WL 4281466 (United States District Court for the Middle District of Florida, Tampa Division September 7, 2018, Filed). Retrieved from https://advance-lexis-com.aurarialibrary.idm.oclc.org/api/document?collection=cases&id=urn:contentItem:5T6S-RTM1-JFSV-G391-00000-00&context=1516831.
[53] MARY A. GILMORE VERSUS SPRINGHILL MEDICAL CENTER, 2018 U.S. Dist. LEXIS 75327, 2018 WL 2069728 (United States District Court for the Western District of Louisiana, Shreveport Division May 3, 2018, Filed). Retrieved from https://advance-lexis-com.aurarialibrary.idm.oclc.org/api/document?collection=cases&id=urn:contentItem:5S7P-FW21-JP9P-G4NY-00000-00&context=1516831.
[54] UNITED STATES OF AMERICA v. JOSHUA MOSES, 2018 U.S. Dist. LEXIS 12030 (United States District Court for the Eastern District of Pennsylvania January 25, 2018, Filed). Retrieved from https://advance-lexis-com.aurarialibrary.idm.oclc.org/api/document?collection=cases&id=urn:contentItem:5RGT-8PG1-JWJ0-G20K-00000-00&context=1516831.
[55] STEVEN KLEIN, Petitioner, v. NEIL MCDOWELL, Warden, Respondent., 2018 U.S. Dist. LEXIS 196069, 2018 WL 6018208 (United States District Court for the Southern District of California November 16, 2018, Filed). Retrieved from https://advance-lexis-com.aurarialibrary.idm.oclc.org/api/document?collection=cases&id=urn:contentItem:5TRT-BWM1-JT42-S12G-00000-00&context=1516831.
[56] MARCUS SIMPSON, Plaintiff, and MICHAEL GLOVER, Plaintiff-Appellant, v. VILLAGE OF LINCOLN HEIGHTS, et al., Defendants-Appellees., 2018 U.S. App. LEXIS 13240 (United States Court of Appeals for the Sixth Circuit May 21, 2018, Filed). Retrieved from https://advance-lexis-com.aurarialibrary.idm.oclc.org/api/document?collection=cases&id=urn:contentItem:5SCP-1J61-FFMK-M0M6-00000-00&context=1516831.
[57] UNITED STATES OF AMERICA v. CHARLES STAGNER, Defendant., 2018 U.S. Dist. LEXIS 105617 (United States District Court for the Southern District of Alabama, Southern Division June 25, 2018, Filed). Retrieved from https://advance-lexis-com.aurarialibrary.idm.oclc.org/api/document?collection=cases&id=urn:contentItem:5SNO-Y0D1-F900-G0CT-00000-00&context=1516831.
[58] Laurel E. Fletcher, Chris Jay Hoofnagle, Eric Stover, Jennifer Urban, et al. "Working Paper: An Overview of the Use of Digital Evidence in International Criminal Courts, Salzburg Workshop on Cyber Investigations." Salzburg Workshop on Cyberinvestigations (2013). Available at: http://works.bepress.com/laurel_fletcher/41/
[59] Laurel E. Fletcher, Chris Jay Hoofnagle, Eric Stover, Jennifer Urban, et al. "Working Paper: Digital Evidence: Investigatory Protocols." Salzburg Workshop on Cyberinvestigations (2013). Available at: http://works.bepress.com/laurel_fletcher/43/
[60] National Forensic Science Technology Center. (n.d.). Forensic Evidence Admissibility & Expert Witnesses. Retrieved January 10, 2019, from http://www.forensicsciencesimplified.org/legal/702.html
[61] General Electric Co. v. Joiner, 522 U.S. 136, 118 S. Ct. 512, 139 L. Ed. 2d 508 (1997).
[62] Kumho Tire Co. v. Carmichael, 526 U.S. 137, 119 S. Ct. 1167, 143 L. Ed. 2d 238 (1999).
[63] Legal Information Institute Staff - Cornell Law School. (2011, December 01). Rule 702. Testimony by Expert Witnesses. Retrieved January 12, 2019, from https://www.law.cornell.edu/rules/fre/rule_702
[64] Knoops, G. J. A. (2009). The Proliferation of Forensic Sciences and Evidence before International Criminal Tribunals from a Defence Perspective. In Criminal Law Forum (pp. 1-28). Springer Netherlands.
[65] Warren, J., Clear, M., & McGoldrick, C. (2012, October). Metadata Independent Hashing for Media Identification & P2P Transfer Optimisation. In 2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (pp. 58-65). IEEE.
[66] Scientific Working Group Digital Evidence. (2018, November 20). SWGDE Position on the Use of MD5 and SHA1 Hash Algorithms in Digital and Multimedia Forensics v 1.0. Retrieved March 22, 2019, from https://swgde.org/documents/Released%20For%20Public%20Comment/SWGDE%20Best%20Practices%20for%20Mobile%20Device%20Evidence%20and%20Collection,%20Preservation,%20and%20Acquisition.
[67] Scientific Working Group Digital Evidence. (2018, November 20). SWGDE Technical Notes on FFmpeg v 2.0. Retrieved March 22, 2019, from https://www.swgde.org/documents/Current%20Documents/SWGDE%20Technical%20Notes%20on%20FFmpeg
APPENDIX A
ADDITIONAL LEGAL INFORMATION
This appendix reviews the expectations for authentication of digital video found in case precedent and rules in both U.S. federal and international courts. It also discusses what is expected of expert witnesses who testify in those courts. The information from these areas was used to develop certain features of the proposed framework.
Authentication of Electronically Stored Information in the U.S.
U.S. federal courts have addressed various areas of authentication of electronically stored information (ESI), which includes digital video. Lorraine v. Markel American Insurance Company (241 F.R.D. 534 (D. Md. 2007)) (Lorraine) is one of the leading comprehensive cases addressing the complex aspects of evidentiary law and its applicability to ESI [46].
Lorraine v. Markel American Insurance Company
In Lorraine the court addressed the admission of ESI based upon the following:
• Relevance
• Authenticating Evidence
• Hearsay Evidence
• Evidence should be original or admissible duplicate
• Admissibility of Evidence (Probative Value).
The Lorraine court provided guidance on authenticating ESI. The court noted that in order for ESI to be admissible, the party offering the evidence “must produce evidence sufficient to support a finding that the item is what the proponent claims it is” as noted in U.S. Federal Rules of Evidence (FRE) 901(a).
The Lorraine case noted several precedent cases where courts excluded ESI because the proffering party did not properly offer sufficient evidence to support a finding that the ESI was what its proponents claimed. Although FRE 901(a) addresses the requirement for authentication, it does not address how to authenticate the evidence.
The Lorraine court noted that FRE 901(b) offered a non-exclusive list of methods for authentication. The Lorraine court offered additional guidance on how to authenticate ESI under FRE 901(b) that are directly relevant to this thesis and are discussed below.
It is important for the forensic scientist to understand how authentication methodology testimony may be offered to the court under the FRE. The forensic scientist may offer testimony to the trier of fact as an expert witness (FRE 901(b)(3)) based upon contents, substance and distinctive characteristics (FRE 901(b)(4)), or a description of the process to produce results (FRE 901(b)(9)).
This thesis focuses on the Lorraine decision as it relates to authenticating evidence; the requirement that evidence be original or an admissible duplicate; and the case guidance as it relates to a contested authenticity of proffered evidence.
Witness With Knowledge
FRE 901(b)(1) allows “[t]estimony that a matter is what it is claimed to be” [4]. The Lorraine court clarified that this means a witness may testify with knowledge through “having participated in or observed the event reflected by the exhibit” [46].
Expert Witness
FRE 901(b)(3) allows “... comparison with an authenticated specimen by an expert witness or the trier of fact” [4]. The Lorraine court noted the “authenticated specimen” used by the expert witness may be authenticated by any means allowable under FRE 901 and FRE 902, and that authentication is permitted based upon knowledge that an item is what it is claimed to be as stated in FRE 901(b)(1) [46].
The court also clarified that “knowledge” of the specimen (exemplar) may be obtained from firsthand data (creating the specimen or observing its creation), providing the forensic scientist a judicious approach to obtaining authenticated specimens (exemplars) [46].
Contents, Substance, & Distinctive Characteristics
FRE 901(b)(4) allows exhibits to be authenticated by “appearance, contents, substance, internal patterns, or other distinctive characteristics, taken in conjunction with circumstances” [4]. The Lorraine court provided three general areas of characteristics for authentication under FRE 901(b)(4): hash values, metadata, and other distinctive characteristics [46].
Hashing. Hashing is the application of a mathematical algorithm to digital data that results in an alphanumeric value serving as a unique identifier of that data [46]. The Lorraine court’s recognition of hash values for authentication extends to hashes of the stream data found in digital video and audio files.
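The distinction between hashing an entire file and hashing only its stream data can be sketched in a few lines. The snippet below is a minimal illustration, not a forensic tool: the byte strings standing in for the container metadata and the encoded stream are hypothetical, but the principle holds for real files, where the stream payload can be extracted (for example, with FFmpeg) before hashing.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of a byte sequence as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins: the same encoded stream wrapped in two containers
# whose metadata (e.g., a creation date) differs.
stream_bytes = b"\x00\x01\x02\x03encoded-video-stream-payload"
container_a = b"metadata-created-2019" + stream_bytes
container_b = b"metadata-created-2020" + stream_bytes

# File-level hashes differ because the container metadata differs...
assert sha256_hex(container_a) != sha256_hex(container_b)

# ...but the stream-level hashes match, so the audiovisual content itself
# can still be shown to be identical.
assert sha256_hex(container_a[21:]) == sha256_hex(container_b[21:])
```

This is why a stream hash can support authentication even when re-wrapping or metadata changes have altered the file-level hash.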
Metadata. Metadata is generally data, frequently embedded within a file, that may include file creation and/or modification dates and times, the specific applications and hardware used to create the file, and specific storage locations. Many, but not all, multimedia files contain metadata that may be used for authentication.
The Lorraine court noted a third method of authenticating electronic evidence under FRE 901(b)(4): other distinctive characteristics linked to email, text messages, and web pages [46]. The examples noted in Lorraine are analogous to file structure and format analysis, global and local analysis, and device identification of digital multimedia files. Specific illustrations of those examples are found in Chapter IV of this document.
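File structure analysis of this kind can be illustrated with a short sketch that walks the top-level boxes (atoms) of an MP4 / ISO Base Media file. This is a simplified, hypothetical example for illustration only: it ignores 64-bit extended sizes and nested boxes, both of which a real examination tool must handle.

```python
import struct

def list_top_level_boxes(data: bytes):
    """Walk the top-level boxes (atoms) of an ISO Base Media / MP4 byte
    sequence and return (box_type, size) pairs.

    Minimal sketch: ignores 64-bit extended sizes and nested boxes.
    """
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        # Each box header is a 4-byte big-endian size plus a 4-byte type code.
        size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
        if size < 8:  # malformed or extended size; stop in this sketch
            break
        boxes.append((box_type.decode("ascii", "replace"), size))
        offset += size
    return boxes

# Synthetic example: an 'ftyp' box (major brand 'isom') followed by a 'free' box.
ftyp = struct.pack(">I4s", 16, b"ftyp") + b"isom" + b"\x00\x00\x02\x00"
free = struct.pack(">I4s", 8, b"free")
print(list_top_level_boxes(ftyp + free))  # → [('ftyp', 16), ('free', 8)]
```

Comparing the sequence and ordering of such boxes against exemplars from a claimed source device is one concrete form of the file structure analysis discussed above.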
Description of Process to Produce Results.
FRE 901(b)(9) allows authentication based upon “evidence describing a process or system and showing that it produces an accurate result” [4]. The Lorraine court noted this authentication method was specifically useful in authenticating electronically stored information created or generated by a computer [46].
Applicable Elements of FRE 1001 (Definitions That Apply to This Article) and FRE 1003 (Admissibility of Duplicates)
The Lorraine court restated the following definitions found in FRE 1001:
• Photographs - include still photographs, X-ray films, video tapes, and motion pictures.
• Original - An original of a photograph, if data stored in a computer or similar device, includes any printout or other output readable by sight and shown to reflect the data accurately.
• Duplicate - A counterpart produced by the same impression as the original, or by the same means as photography (including enlargements and miniatures), or by electronic re-recording, or by other equivalent techniques which accurately reproduce the original [46].
The Lorraine court also noted that FRE 1003 essentially allowed duplicates to be admissible unless there was an issue as to the authenticity of the original [46].
Survey of 2018 U.S. Federal Cases of Questioned Video Authentication
The court decided the Lorraine case in 2007. The presiding judge, the Honorable Paul W. Grimm, along with attorneys Michael V. Ziccardi and Alexander W. Major, reviewed the impact of Lorraine on subsequent decisions in their 2009 Akron Law Review article titled “Back to the future: Lorraine v. Markel American Insurance Co. and new findings on the admissibility of electronically stored information.” Grimm et al. noted:
“... in the two years since Lorraine was issued, courts and counsel still seem to struggle with the basic principles of authentication as it applies to electronic evidence. Some courts are still permitting only rudimentary admissibility standards and counsel are still, at times, failing to meet that low bar. As electronic evidence becomes more ubiquitous at trial, it is critical for courts to start demanding that counsel give more in terms of authentication—and counsel who fail to meet courts’ expectations will do so at their own peril” [47].
In addition to understanding how digital video is best admitted into evidence, it is also important to understand how opposing counsel can challenge the authentication of digital video offered into evidence and how courts have responded to those challenges.
Survey of Federal Court Decisions
In light of the Grimm et al. Akron Law Review article, the Nexis Uni® database was searched for federal cases containing the terms “video” and “authentication” within 150 words of each other for the period between January 1, 2018 and December 31, 2018. The search returned 102 court case publications. Ten of those cases, in which the admission of video (with or without audio) was challenged on the basis of authentication, were reviewed as a representative sample. The survey further analyzed those 10 cases to determine the origin of the video (mobile phone, surveillance system, etc.), the authentication approach (witness with knowledge, expert witness, etc.), a summary of the argument made by the challenging party, and the court’s decision on the challenge to the authentication of the video evidence.
Survey Results
In all 10 cases, the courts denied the challenges to authentication by a “witness with knowledge.” The analysis found:
• All 10 cases involved the authentication approach of “witness with knowledge” pursuant to FRE 901(b)(1).
• Two of the cases involved courts permitting only rudimentary admissibility standards.
• Three of the cases involved the challenging party offering specific indicators of video / audio alteration, but not based upon forensic science.
[48][49][50][51][52][53][54][55][56][57]
Authentication of Electronically Stored Information in the International Criminal Court
Methods of authenticating ESI take many forms in countries around the world. While several international courts address various cross-border and international issues, I selected the International Criminal Court (ICC), an intergovernmental organization and international tribunal located in The Hague, Netherlands, to compare its authentication of digital videos with the Federal Rules of Evidence and one of the leading cases in this area from the U.S. federal courts.
International Criminal Court e-Court Protocol
Fletcher, Hoofnagle, Stover, and Urban (2013) provided insight into digital evidence authentication by the International Criminal Court (ICC) in their “Working Paper: An Overview of the Use of Digital Evidence in International Criminal Courts, Salzburg Workshop on Cyber Investigations” [58]. Fletcher et al. noted the ICC rarely admitted digital information as direct evidence, but usually admitted it to corroborate oral testimony. Fletcher et al. specifically cited video evidence as an example of digital evidence of concern and
Full Text

PAGE 15

AAFS ADC AES AI ASTM CC CCD CODECs CFA CGI CMOS CYGM DAP DC DCT DFRWS DFT DVP ENF

PAGE 16

ESI FPN FRE HEIF HEVC ICC IR JFS JDI JPG / JPEG MMS NCMF PCM PRNU G-PRNU RGBE SWGDE VFX

PAGE 21

“ Figure 1 Selected Frames From “Extreme Cro sswind | Airliner Spins 360” YouTube Video [7]

PAGE 24

Figure 2 Frame 1 of Deepfake YouTube Video [8] Figure 3 Person 1 Body, Hair, Ears, & Face Combined With Person 2 Face Used To Create Deepfake Video [8]

PAGE 31

Daubert v. Merrell Dow Daubert

PAGE 32

Daubert

PAGE 33

Daubert

PAGE 35

Lorraine

PAGE 36

Figure 4 Final Video Creation Chain With Influence Factors

PAGE 39

Figure 5 Example of Digital Multi media File Using Book Analogy

PAGE 41

Figure 6 General Block Diagram of CMOS Sensor Noise Model [14] Table 1 CMOS Sensor Noise Types [14] SNph SNdark PRNU PCT Ntherm N1/f Nrts Nrn Ncn Nadc-q Colorization

PAGE 42

Figure 7 General Block Diagram of CCD Sensor Noise Model [15][16] Table 2 CCD Sensor Noise Types [15][16] SNph PRNU FPN SNdark Nother Ntherm

PAGE 43

NR Nread N R N therm N otherNR, Ntherm Nother ND Nfilt NQ

PAGE 45

k

PAGE 46

Figure 8 Proposed Video Authentication Framework

PAGE 47

Figure 9 Workflow Optimization Ba sed On File Structure Analysis

PAGE 50

Analysis Question #1 (AQ-1) Has the vide o stream, video stream and audio stream, or audio stream been processed or manipulated?

PAGE 51

Analysis Question #2 (AQ-2) – Did the s ubmitted camera / recorder create the questioned video?

PAGE 52

Figure 10 Framework Development General Areas of Analysis

PAGE 59

Table 3 File Structure Analysis Auth entication Method / Technique Relevance

PAGE 60

Table 4 Audio Stream Authentica tion Methods / Techniques Relevance

PAGE 62

Table 5 Clone / Duplicate Frame Detection Analysis Authenticati on Method / Technique Relevance

PAGE 63

Table 6 Directional Lighting Inconsistency Detection Anal ysis Authentication Method / Technique Relevance Table 7 Local (Spatio-Temporal) Tampering Detection Analysis Au thentication Method / Technique Relevance

PAGE 64

t t+1

PAGE 65

Table 8 Interlaced & Deinterlaced Video In consistency Detection Analysis Authentication Method / Technique Relevance Global analysis technique

PAGE 66

Local analysis technique. Table 9 MPEG Double Compression Detection A nalysis Authentication Method / Technique Relevance

PAGE 68

Table 10 Color Filter Array Analysis Au thentication Method / Technique Relevance

PAGE 69

Sensor pattern noise approach for camera identification. Sensor pattern noise in videos from YouTube for camera identification. Table 11 Source Video Camera Identific ation Using PRNU Detection Method For Authentication Relevance

PAGE 70

Table 12 Source Camera Successful Identifica tion Rates Using Different Interpolations & Dimensions [43]

PAGE 71

Table 13 Source Camera Identification Rates [43] Table 14 G-PRNU & Image Resize For Came ra Identification Detection Method For Authentication Relevance

PAGE 72

N x N Table 15 Block Level Manipul ation Detection Method For Authentication Relevance Table 16 2D Phase Congruency CC Detec tion Method For Authentication Relevance

PAGE 76

Figure 11 Case Study 2 Test 1 Test Validation Results

PAGE 77

Figure 12 Case Study 2 Test 2 Test Validation Results

PAGE 78

Figure 13 Case Study 2 Test 3 Test Validation Results

PAGE 79

Figure 14 Case Study 2 Test 4 Test Validation Results

PAGE 81

Figure 15 Temporal Analysis of Y Plane R evealing Current Frame Versus Preceding Frame Differences

PAGE 82

Figure 16 Comparison of Frame 598, 599, & 600 Visual Content Using Temporal Difference Filter Figure 17 2D Phase Congruency With Corre lation Coefficient of Adjacent Frames

PAGE 83

Figure 18 Temporal Analysis of Y Plane Revealing Current Frame Versus Preceding Frame Differences

PAGE 84

Figure 19 Comparison of Frame 1074 & 1075 Visual Content

PAGE 85

Figure 20 2D Phase Congruency With Corre lation Coefficient of Adjacent Frames

PAGE 88

Criminalistics: an introduction to forensic science SWGDE Digital & Multimedia Evidence Glossary, v 3 I fell for Facebook fake news. Here's why millions of you did, too. https://www.washingtonpost.com/technology /2018/10/18/i-fell-f acebook-fake-newsheres-why-millions-you-did-too/?utm_term=.464a38303f84 (2017, June 14). Extreme Crosswind | Airliner spins 360. No Title. Daubert v. Merrell Dow Pharmaceuticals, Inc.

PAGE 89

IEEE Transactions on Electron Devices, 54 Measurement Science and Technology, 19 IEEE Transactions on Circuits and Systems for Video Technology 18 Image Quality and System Performance III Digital Investigation, 6 Sensors (Basel, Switzerland), 18 YouTube: Recompression Effects Forensic Audio Authentication. Analytical Framework for Digital Audio Authentication.

PAGE 90

Establishing a standard for digital audio authenticity: A critical analysis of tools, methodologies, and challenges Digital image analysis: Analytical framework for authenticating digital images ANALYSIS OF FACEBOO KÂ’S VIDEO ENCODERS A Forensic Analysis of Digital Im age Characteristics Associated to Flickr and Google Plus Source identification of high de finition videos: A forensic analysis of downloaders and YouTube vide o compression using a group of action cameras MPEG-4 video authenticati on using file structure and metadata Proceedings of the 9th work shop on Multimedia & security Significance, 3

PAGE 91

IEEE Transactions on Information Forensics and Security, 2 Proceedings of the 8th workshop on Multimedia and security Proceedings of the 11th ACM workshop on Multimedia and security IEEE Transactions on Signal Processing, 53 IEEE Transactions on Informa tion Forensics and Security, 1 , 6072 Information 9 Lorraine v. Markel American Insurance Company Akron L. Rev. 42

PAGE 93

Salzburg Workshop on Cyberinvestigations Salzburg Workshop on Cyberinvestigations General Electric Co. v. Joiner Kumho Tire Co. v. Carmichael Criminal Law Forum 2012 International Conference on Cyber-Enabled Distributed Co mputing and Knowledge Discovery

PAGE 95

Lorraine v. Markel American Insurance Company Lorraine Lorraine Lorraine

PAGE 96

Lorraine Lorraine Lorraine Lorraine Lorraine Lorraine

PAGE 97

Lorraine Lorraine Lorraine Lorraine

PAGE 98

. Lorraine Lorraine Lorraine Lorraine, Lorraine

PAGE 103

Lorraine

PAGE 104

Daubert v. Merrell Dow Daubert v Merrell Dow Daubert

PAGE 105

i. e.,

PAGE 106

General Electric v. Joiner General Electric Co. v. Joiner Daubert Daubert Kumho Tire Co. v. Carmichael, et al Kumho Tire Co. v. Carmichael Daubert Dauber

PAGE 107

Criminal Law Forum. Daubert

PAGE 109

Figure 21 Diagram of Scientific Method Used In Proposed Framework Analysis Question – Is Vide o Altered / Real / Edited? Note: Steps 3 – 6 may be cyclic and require adjust ments to the hypothesis if the examiner only detects inconclusive results.

PAGE 110

Workflow Optimization Decision Go To Block 6 For File Preparation Decision I f Work f low Optimization Decisi on Is Continue Anal y sis.

PAGE 111

Document To Right If Audio And / Or Video Streams Require Transcoding In Bifurcation Process. Repeat This Analysis Area For Each Audio Stream If Audio Stream Is Analyzed Or Go To Block 27 For Video Stream Analysis

PAGE 112

Repeat This Analysis Area For Each Video Stream Analyzed

PAGE 113

Overall Decision For Hypothesis

PAGE 114

Are the results of method reproducible, repeatable, accurate, and precise? Is the method used for its intended purpose? Does the method perform as expected? Has the method been tested?

PAGE 115

Has the technique / method been subject ed to peer review & publication? What is the error rate of the theory or is an error mitigati on method implemented? Is the technique / method accepted in the forensic science community? What are the standards controlling the use of the technique / method?

PAGE 116

Table 17 -Test Results For Method Validation

PAGE 118

Test results of the multimedia stream has h validation method for reproducibility, repeatability, accuracy, and precision? Test results of the multimedia stream hash validation method to determine if it performs as expected?

PAGE 119

Table 18 Planned Test Scenarios

PAGE 133

Table 19 Original Audio Stream Hash Listing

PAGE 136

Table 20 Original Video Stream Hash Listing

PAGE 141

Table 21 Validation Test Comparison Of Orig inal Versus Copied Audio Stream Hashes M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched

PAGE 142

M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched

PAGE 143

M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched

PAGE 146

Table 22 Validation Test Comparison Of Orig inal Versus Copied Video Stream Hashes M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched

PAGE 147

M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched

PAGE 148

M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched M atched

PAGE 150

Table 23 Test Results

PAGE 152

Table 24 Test Video Tampering Method Has the video stream, video stream and audio stre am, or audio stream been altered or edited?

PAGE 153

Table 25 Planned Test Scenarios

PAGE 156

Has the video stream, video stream and audio stre am, or audio stream been altered or edited?

PAGE 157

General Complete name : Taiwan 1 Tampered.avi Format : AVI Format/Info : Audio Video Interleave File size : 169 MiB Duration : 6 s 840 ms Overall bit rate : 207 Mb/s Writing library : VirtualDub build 32842/release Video ID : 0 Format : RGB Codec ID : 0x00000000 Codec ID/Info : Basic Windows bitmap format. 1, 4 and 8 bpp versions are palettised. 16, 24 and 32bpp contain raw RGB samples Duration : 6 s 840 ms Bit rate : 207 Mb/s Width : 720 pixels Height : 480 pixels Display aspect ratio : 3:2 Frame rate : 25.000 FPS Bit depth : 8 bits Bits/(Pixel*Frame) : 24.000 Stream size : 169 MiB (100%)

PAGE 158

Workflow Optimization Decision Go To Block 6 For File Preparation Decision I f Work f low Optimization Decisi on Is Continue Anal y sis. Document To Right If Audio And / Or Video Streams Require Transcoding In Bifurcation Process. Repeat This Analysis Area For Each Audio Stream If Audio Stream Is Analyzed Or Go To Block 27 For Video Stream Analysis

PAGE 159

Repeat This Analysis Area For Each Video Stream Analyzed

PAGE 160

Overall Decision For Hypothesis

PAGE 161

Has the video stream, video stream and audio stre am, or audio stream been altered or edited?

PAGE 162

eneral Complete name : Taiwan 2 Tampered.avi Format : AVI Format/Info : Audio Video Interleave File size : 167 MiB Duration : 6 s 760 ms Overall bit rate : 207 Mb/s Writing library : VirtualDub build 32842/release Video ID : 0 Format : RGB Codec ID : 0x00000000 Codec ID/Info : Basic Windows bitmap format. 1, 4 and 8 bpp versions are palettised. 16, 24 and 32bpp contain raw RGB samples Duration : 6 s 760 ms Bit rate : 207 Mb/s Width : 720 pixels Height : 480 pixels Display aspect ratio : 3:2 Frame rate : 25.000 FPS Bit depth : 8 bits Bits/(Pixel*Frame) : 24.000 Stream size : 167 MiB (100%)

PAGE 163

Workflow Optimization Decision Go To Block 6 For File Preparation Decision I f Work f low Optimization Decisi on Is Continue Anal y sis. Document To Right If Audio And / Or Video Streams Require Transcoding In Bifurcation Process. Repeat This Analysis Area For Each Audio Stream If Audio Stream Is Analyzed Or Go To Block 27 For Video Stream Analysis

PAGE 164

Repeat This Analysis Area For Each Video Stream Analyzed

PAGE 165

Overall Decision For Hypothesis

PAGE 167

Has the video stream, video stream and audio stream, or audio stream been altered or edited?

PAGE 168

General
Complete name : Taiwan 3 Tampered.avi
Format : AVI
Format/Info : Audio Video Interleave
File size : 198 MiB
Duration : 8 s 0 ms
Overall bit rate : 207 Mb/s
Writing library : VirtualDub build 32842/release

Video
ID : 0
Format : RGB
Codec ID : 0x00000000
Codec ID/Info : Basic Windows bitmap format. 1, 4 and 8 bpp versions are palettised. 16, 24 and 32 bpp contain raw RGB samples
Duration : 8 s 0 ms
Bit rate : 207 Mb/s
Width : 720 pixels
Height : 480 pixels
Display aspect ratio : 3:2
Frame rate : 25.000 FPS
Bit depth : 8 bits
Bits/(Pixel*Frame) : 24.000
Stream size : 198 MiB (100%)

PAGE 169

Workflow Optimization Decision. Go to Block 6 for the file preparation decision if the workflow optimization decision is "continue analysis." Document to the right if audio and/or video streams require transcoding in the bifurcation process. Repeat this analysis area for each audio stream if an audio stream is analyzed, or go to Block 27 for video stream analysis.

PAGE 170

Repeat This Analysis Area For Each Video Stream Analyzed

PAGE 171

Overall Decision For Hypothesis

PAGE 173

Has the video stream, video stream and audio stream, or audio stream been altered or edited?

PAGE 174

General
Complete name : C:\Users\Greg\Documents\UC Denver\Thesis Research\Case Studies\CS 2 (Taiwan Videos)\Taiwan Paper Info\test4\Taiwan 4 Tampered.avi
Format : AVI
Format/Info : Audio Video Interleave
File size : 198 MiB
Duration : 8 s 0 ms
Overall bit rate : 207 Mb/s
Writing library : VirtualDub build 32842/release

Video
ID : 0
Format : RGB
Codec ID : 0x00000000
Codec ID/Info : Basic Windows bitmap format. 1, 4 and 8 bpp versions are palettised. 16, 24 and 32 bpp contain raw RGB samples
Duration : 8 s 0 ms
Bit rate : 207 Mb/s
Width : 720 pixels
Height : 480 pixels
Display aspect ratio : 3:2
Frame rate : 25.000 FPS
Bit depth : 8 bits
Bits/(Pixel*Frame) : 24.000
Stream size : 198 MiB (100%)

PAGE 175

Workflow Optimization Decision. Go to Block 6 for the file preparation decision if the workflow optimization decision is "continue analysis." Document to the right if audio and/or video streams require transcoding in the bifurcation process. Repeat this analysis area for each audio stream if an audio stream is analyzed, or go to Block 27 for video stream analysis.

PAGE 176

Repeat This Analysis Area For Each Video Stream Analyzed

PAGE 177

Overall Decision For Hypothesis

PAGE 181

Has the video stream, video stream and audio stream, or audio stream been altered or edited?

PAGE 183

Has the video stream, video stream and audio stream, or audio stream been altered or edited?

PAGE 184

File Name : 9 Spliced 600 700 Removed.mp4
File Size : 67 MB
File Modification Date/Time : 2019:04:04 05:51:50-04:00
File Access Date/Time : 2019:04:05 14:06:44-04:00
File Creation Date/Time : 2019:04:05 14:04:41-04:00
File Permissions : rw-rw-rw-
File Type : MP4
MIME Type : video/mp4
Major Brand : MP4 v2 [ISO 14496-14]
Minor Version : 0.0.0
Compatible Brands : mp42, mp41
Movie Header Version : 0
Create Date : 2019:04:04 09:51:47
Modify Date : 2019:04:04 09:51:49
Time Scale : 90000
Duration : 0:00:53
Preferred Rate : 1
Preferred Volume : 100.00%
Preview Time : 0 s
Preview Duration : 0 s
Poster Time : 0 s
Selection Time : 0 s
Selection Duration : 0 s
Current Time : 0 s
Next Track ID : 3
Track Header Version : 0
Track Create Date : 2019:04:04 09:51:48
Track Modify Date : 2019:04:04 09:51:48
Track ID : 1
Track Duration : 0:00:53
Track Layer : 0
Track Volume : 0.00%
Image Width : 1280
Image Height : 720
Graphics Mode : srcCopy
Op Color : 0 0 0
Compressor ID : avc1
Source Image Width : 1280
Source Image Height : 720
X Resolution : 72
Y Resolution : 72
Compressor Name : AVC Coding
Bit Depth : 24
Matrix Structure : 1 0 0 0 1 0 0 0 1
Media Header Version : 0
Media Create Date : 2019:04:04 09:51:48
Media Modify Date : 2019:04:04 09:51:48
Media Time Scale : 48000
Media Duration : 0:00:53
Media Language Code : eng
Balance : 0
Handler Type : Alias Data
Handler Description : Alias Data Handler
Audio Format : mp4a
Audio Channels : 2
Audio Bits Per Sample : 16
User Data TIM : 00:00:00:00
User Data TSC : 30000
User Data TSZ : 1001
XMP Toolkit : Adobe XMP Core 5.6-c148 79.163765, 2019/01/24-18:11:46
Metadata Date : 2019:04:04 03:51:49-06:00
Creator Tool : Adobe Premiere Pro 2019.1 (Windows)
Video Frame Rate : 29.970030
Video Field Order : Progressive
Video Pixel Aspect Ratio : 1
Audio Sample Rate : 48000
Audio Sample Type : 16-bit integer
Audio Channel Type : Stereo
Start Time Scale : 30000
Start Time Sample Size : 1001
Orientation : Horizontal (normal)
Instance ID : xmp.iid:69cd4756-0890-e741-9bea-d0b9a1e59366
Document ID : 20b51f75-72d3-8fa3-78bb-882b00000065
Original Document ID : xmp.did:51701fde-e342-4243-9e89-1673bc8a6486
Format : H.264
Duration Value : 4830720
Duration Scale : 1.11111111111111e-005
Project Ref Type : Movie
Video Frame Size W : 1280
Video Frame Size H : 720
Video Frame Size Unit : pixel
Start Timecode Time Format : 29.97 fps (non-drop)
Start Timecode Time Value : 00:00:00:00
Alt Timecode Time Value : 00:00:00:00
Alt Timecode Time Format : 29.97 fps (non-drop)
History Action : saved, created, saved, saved
History Instance ID : 45bf5236-f40f-150f-d190-e4a300000092, xmp.iid:fbbabd26-6c68-e740-93ea-7f1c6d464edc, xmp.iid:8442e554-89c5-dd45-aa31-9f7787d694ce, xmp.iid:69cd4756-0890-e741-9bea-d0b9a1e59366
History When : 2019:04:04 03:51:49-06:00, 2019:04:04 03:51:21-06:00, 2019:04:04 03:51:49-06:00, 2019:04:04 03:51:49-06:00
History Software Agent : Adobe Premiere Pro 2019.1 (Windows), Adobe Premiere Pro 2019.1 (Windows), Adobe Premiere Pro 2019.1 (Windows), Adobe Premiere Pro 2019.1 (Windows)
History Changed : /, /, /metadata
Ingredients Instance ID : xmp.iid:085a4b06-fc9f-b34b-9938-f89d37bd6ba1, xmp.iid:085a4b06-fc9f-b34b-9938-f89d37bd6ba1, xmp.iid:085a4b06-fc9f-b34b-9938-f89d37bd6ba1, xmp.iid:085a4b06-fc9f-b34b-9938-f89d37bd6ba1
Ingredients Document ID : 951272b3-acfc-d9dd-503e-c38700000035, 951272b3-acfc-d9dd-503e-c38700000035, 951272b3-acfc-d9dd-503e-c38700000035, 951272b3-acfc-d9dd-503e-c38700000035
Ingredients From Part : time:0d5085400320000f254016000000, time:0d5085400320000f254016000000, time:5932967040000f254016000000d8534996870400f254016000000, time:5932967040000f254016000000d8534996870400f254016000000
Ingredients To Part : time:0d5085400320000f254016000000, time:0d5085400320000f254016000000, time:5085400320000f254016000000d8534996870400f254016000000, time:5085400320000f254016000000d8534996870400f254016000000
Ingredients File Path : 9.mp4, 9.mp4, 9.mp4, 9.mp4
Ingredients Mask Markers : None, None, None, None
Pantry Create Date : 2018:08:07 18:30:24Z
Pantry Modify Date : 2019:04:03 13:02:50Z
Pantry Metadata Date : 2019:04:04 03:51:49-06:00
Pantry Orientation : Horizontal (normal)
Pantry Instance ID : xmp.iid:085a4b06-fc9f-b34b-9938-f89d37bd6ba1
Pantry Document ID : 951272b3-acfc-d9dd-503e-c38700000035
Pantry Original Document ID : xmp.did:95aebf18-0e6c-0e43-b22b-62d097d1c960
Pantry History Action : saved
Pantry History Instance ID : xmp.iid:45b0a3d8-c2fa-e44e-9619-1328fdb1af2d
Pantry History When : 2019:04:04 03:51:49-06:00
Pantry History Software Agent : Adobe Premiere Pro 2019.1 (Windows)
Pantry History Changed : /metadata
Pantry Duration Value : 27387360
Pantry Duration Scale : 2.08333333333333e-006
Pantry Tracks Track Name : Comment
Pantry Tracks Track Type : Comment
Pantry Tracks Frame Rate : f30000s1001
Pantry Tracks Markers Start Time : 700
Pantry Tracks Markers Guid : 1fad9e70-e74f-41a4-8d26-de1fd08906eb
Pantry Tracks Markers Cue Point Params Key : marker_guid
Pantry Tracks Markers Cue Point Params Value : 1fad9e70-e74f-41a4-8d26-de1fd08906eb
Derived From Instance ID : xmp.iid:fbbabd26-6c68-e740-93ea-7f1c6d464edc
Derived From Document ID : xmp.did:fbbabd26-6c68-e740-93ea-7f1c6d464edc
Derived From Original Document ID : xmp.did:fbbabd26-6c68-e740-93ea-7f1c6d464edc
Windows Atom Extension : .prproj
Windows Atom Invocation Flags : /L
Mac Atom Application Code : 1347449455
Mac Atom Invocation Apple Event : 1129468018
Movie Data Size : 70003722
Movie Data Offset : 41098
Avg Bitrate : 10.4 Mbps
Image Size : 1280x720
Rotation : 0
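In the report above, the XMP fields (Creator Tool, History Software Agent, the `.prproj` project reference) are the traces that identify the editing software. As an illustrative sketch — the signature list is an example, not an exhaustive or authoritative set — extracted metadata fields can be scanned for such traces:

```python
# Illustrative sketch: scan extracted metadata fields for traces of known
# editing software. The signature list here is an example only.
EDITOR_SIGNATURES = ("Adobe Premiere", "VirtualDub", "Final Cut", "Lavf")

def editing_traces(fields: dict) -> list:
    """Return (field, signature) pairs where an editor signature appears."""
    hits = []
    for key, value in fields.items():
        for sig in EDITOR_SIGNATURES:
            if sig.lower() in str(value).lower():
                hits.append((key, sig))
    return hits

fields = {
    "Creator Tool": "Adobe Premiere Pro 2019.1 (Windows)",
    "History Action": "saved, created, saved, saved",
}
print(editing_traces(fields))  # [('Creator Tool', 'Adobe Premiere')]
```

A hit only shows that the named software touched the file's metadata; the absence of a hit does not establish authenticity, which is why the framework combines this check with stream-level analyses.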

PAGE 188

Workflow Optimization Decision. Go to Block 6 for the file preparation decision if the workflow optimization decision is "continue analysis." Document to the right if audio and/or video streams require transcoding in the bifurcation process. Repeat this analysis area for each audio stream if an audio stream is analyzed, or go to Block 27 for video stream analysis.

PAGE 192

Repeat This Analysis Area For Each Video Stream Analyzed

PAGE 194

Overall Decision For Hypothesis

PAGE 195

Has the video stream, video stream and audio stream, or audio stream been altered or edited?

PAGE 196

File Name : 9 spliced 1075 1076 removed.mp4
File Size : 71 MB
File Modification Date/Time : 2019:04:04 06:23:40-04:00
File Access Date/Time : 2019:04:05 14:52:08-04:00
File Creation Date/Time : 2019:04:05 14:51:55-04:00
File Permissions : rw-rw-rw-
File Type : MP4
MIME Type : video/mp4
Major Brand : MP4 v2 [ISO 14496-14]
Minor Version : 0.0.0
Compatible Brands : mp42, mp41
Movie Header Version : 0
Create Date : 2019:04:04 10:23:38
Modify Date : 2019:04:04 10:23:39
Time Scale : 90000
Duration : 0:00:56
Preferred Rate : 1
Preferred Volume : 100.00%
Preview Time : 0 s
Preview Duration : 0 s
Poster Time : 0 s
Selection Time : 0 s
Selection Duration : 0 s
Current Time : 0 s
Next Track ID : 3
Track Header Version : 0
Track Create Date : 2019:04:04 10:23:38
Track Modify Date : 2019:04:04 10:23:38
Track ID : 1
Track Duration : 0:00:56
Track Layer : 0
Track Volume : 0.00%
Image Width : 1280
Image Height : 720
Graphics Mode : srcCopy
Op Color : 0 0 0
Compressor ID : avc1
Source Image Width : 1280
Source Image Height : 720
X Resolution : 72
Y Resolution : 72
Compressor Name : AVC Coding
Bit Depth : 24
Matrix Structure : 1 0 0 0 1 0 0 0 1
Media Header Version : 0
Media Create Date : 2019:04:04 10:23:38
Media Modify Date : 2019:04:04 10:23:38
Media Time Scale : 48000
Media Duration : 0:00:56
Media Language Code : eng
Balance : 0
Handler Type : Alias Data
Handler Description : Alias Data Handler
Audio Format : mp4a
Audio Channels : 2
Audio Bits Per Sample : 16
User Data TIM : 00:00:00:00
User Data TSC : 30000
User Data TSZ : 1001
XMP Toolkit : Adobe XMP Core 5.6-c148 79.163765, 2019/01/24-18:11:46
Metadata Date : 2019:04:04 04:23:39-06:00
Creator Tool : Adobe Premiere Pro 2019.1 (Windows)
Video Frame Rate : 29.970030
Video Field Order : Progressive
Video Pixel Aspect Ratio : 1
Audio Sample Rate : 48000
Audio Sample Type : 16-bit integer
Audio Channel Type : Stereo
Start Time Scale : 30000
Start Time Sample Size : 1001
Orientation : Horizontal (normal)
Instance ID : xmp.iid:dc3243fb-878f-a148-b736-e6f54044469b
Document ID : ba9bb9f4-4ed1-3d83-8a56-a2ff00000069
Original Document ID : xmp.did:d003a8a5-1f59-e14d-9a4f-23f514ca3881
Format : H.264
Duration Value : 5124480
Duration Scale : 1.11111111111111e-005
Project Ref Type : Movie
Video Frame Size W : 1280
Video Frame Size H : 720
Video Frame Size Unit : pixel
Start Timecode Time Format : 29.97 fps (non-drop)
Start Timecode Time Value : 00:00:00:00
Alt Timecode Time Value : 00:00:00:00
Alt Timecode Time Format : 29.97 fps (non-drop)
History Action : saved, created, saved, saved
History Instance ID : c6860389-549e-0c05-9cc4-798200000096, xmp.iid:49584055-7c95-3743-ad0f-b39a9d1644e9, xmp.iid:de0560c9-cb06-4644-ad2e-0050341f17da, xmp.iid:dc3243fb-878f-a148-b736-e6f54044469b
History When : 2019:04:04 04:23:39-06:00, 2019:04:04 04:23:11-06:00, 2019:04:04 04:23:39-06:00, 2019:04:04 04:23:39-06:00
History Software Agent : Adobe Premiere Pro 2019.1 (Windows), Adobe Premiere Pro 2019.1 (Windows), Adobe Premiere Pro 2019.1 (Windows), Adobe Premiere Pro 2019.1 (Windows)
History Changed : /, /, /metadata
Ingredients Instance ID : xmp.iid:c460e422-4f45-254d-819a-a661bd96971a, xmp.iid:c460e422-4f45-254d-819a-a661bd96971a, xmp.iid:c460e422-4f45-254d-819a-a661bd96971a, xmp.iid:c460e422-4f45-254d-819a-a661bd96971a, xmp.iid:c460e422-4f45-254d-819a-a661bd96971a, xmp.iid:c460e422-4f45-254d-819a-a661bd96971a, xmp.iid:c460e422-4f45-254d-819a-a661bd96971a, xmp.iid:c460e422-4f45-254d-819a-a661bd96971a
Ingredients Document ID : 951272b3-acfc-d9dd-503e-c38700000035, 951272b3-acfc-d9dd-503e-c38700000035, 951272b3-acfc-d9dd-503e-c38700000035, 951272b3-acfc-d9dd-503e-c38700000035, 951272b3-acfc-d9dd-503e-c38700000035, 951272b3-acfc-d9dd-503e-c38700000035, 951272b3-acfc-d9dd-503e-c38700000035, 951272b3-acfc-d9dd-503e-c38700000035
Ingredients From Part : time:0d5085400320000f254016000000, time:0d5085400320000f254016000000, time:5085400320000f254016000000d847566720000f254016000000, time:5085400320000f254016000000d847566720000f254016000000, time:5932967040000f254016000000d3178375200000f254016000000, time:5932967040000f254016000000d3178375200000f254016000000, time:9128293574400f254016000000d5339670336000f254016000000, time:9128293574400f254016000000d5339670336000f254016000000
Ingredients To Part : time:0d5085400320000f254016000000, time:0d5085400320000f254016000000, time:5085400320000f254016000000d847566720000f254016000000, time:5085400320000f254016000000d847566720000f254016000000, time:5932967040000f254016000000d3178375200000f254016000000, time:5932967040000f254016000000d3178375200000f254016000000, time:9111342240000f254016000000d5339670336000f254016000000, time:9111342240000f254016000000d5339670336000f254016000000
Ingredients File Path : 9.mp4, 9.mp4, 9.mp4, 9.mp4, 9.mp4, 9.mp4, 9.mp4, 9.mp4
Ingredients Mask Markers : None, None, None, None, None, None, None, None
Pantry Create Date : 2018:08:07 18:30:24Z
Pantry Modify Date : 2019:04:03 13:02:50Z
Pantry Metadata Date : 2019:04:04 04:23:39-06:00
Pantry Orientation : Horizontal (normal)
Pantry Instance ID : xmp.iid:c460e422-4f45-254d-819a-a661bd96971a
Pantry Document ID : 951272b3-acfc-d9dd-503e-c38700000035
Pantry Original Document ID : xmp.did:95aebf18-0e6c-0e43-b22b-62d097d1c960
Pantry History Action : saved
Pantry History Instance ID : xmp.iid:765a3698-d792-5746-bf83-2143898d16b4
Pantry History When : 2019:04:04 04:23:39-06:00
Pantry History Software Agent : Adobe Premiere Pro 2019.1 (Windows)
Pantry History Changed : /metadata

PAGE 199

Workflow Optimization Decision. Go to Block 6 for the file preparation decision if the workflow optimization decision is "continue analysis." Document to the right if audio and/or video streams require transcoding in the bifurcation process. Repeat this analysis area for each audio stream if an audio stream is analyzed, or go to Block 27 for video stream analysis.

PAGE 202

Repeat This Analysis Area For Each Video Stream Analyzed

PAGE 204

Overall Decision For Hypothesis

PAGE 206

Has the video stream, video stream and audio stream, or audio stream been altered or edited?

PAGE 207

File Name : 9 pre event buffering removed.mp4
File Size : 23 MB
File Modification Date/Time : 2019:04:06 10:18:32-04:00
File Access Date/Time : 2019:04:06 20:13:15-04:00
File Creation Date/Time : 2019:04:06 20:13:05-04:00
File Permissions : rw-rw-rw-
File Type : MP4
MIME Type : video/mp4
Major Brand : MP4 Base Media v1 [IS0 14496-12:2003]
Minor Version : 0.2.0
Compatible Brands : isom, iso2, avc1, mp41
Movie Header Version : 0
Create Date : 0000:00:00 00:00:00
Modify Date : 0000:00:00 00:00:00
Time Scale : 1000
Duration : 27.16 s
Preferred Rate : 1
Preferred Volume : 100.00%
Preview Time : 0 s
Preview Duration : 0 s
Poster Time : 0 s
Selection Time : 0 s
Selection Duration : 0 s
Current Time : 0 s
Next Track ID : 3
Track Header Version : 0
Track Create Date : 0000:00:00 00:00:00
Track Modify Date : 0000:00:00 00:00:00
Track ID : 1
Track Duration : 27.12 s
Track Layer : 0
Track Volume : 0.00%
Image Width : 1920
Image Height : 1080
Graphics Mode : srcCopy
Op Color : 0 0 0
Compressor ID : avc1
Source Image Width : 1920
Source Image Height : 1080
X Resolution : 72
Y Resolution : 72
Bit Depth : 24
Pixel Aspect Ratio : 1:1
Video Frame Rate : 25
Matrix Structure : 1 0 0 0 1 0 0 0 1
Media Header Version : 0
Media Create Date : 0000:00:00 00:00:00
Media Modify Date : 0000:00:00 00:00:00
Media Time Scale : 48000
Media Duration : 27.16 s
Media Language Code : und
Handler Description : SoundHandler
Balance : 0
Audio Format : mp4a
Audio Channels : 2
Audio Bits Per Sample : 16
Audio Sample Rate : 48000
Handler Type : Metadata
Handler Vendor ID : Apple
Encoder : Lavf58.20.100
Movie Data Size : 24606612
Movie Data Offset : 22089
Avg Bitrate : 7.25 Mbps
Image Size : 1920x1080
Rotation : 0
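Unlike the two Premiere Pro exhibits, this report carries no XMP editing history; the zeroed QuickTime dates and the `Lavf58.20.100` encoder tag (FFmpeg's libavformat) are the patterns consistent with re-encoding. As a hypothetical consistency check — field names follow ExifTool's output, and the signatures are illustrative rather than exhaustive — such indicators can be flagged automatically:

```python
# Hypothetical sketch: flag MP4 metadata patterns commonly seen after
# re-encoding (zeroed QuickTime dates, a libavformat/FFmpeg encoder tag).
# Field names follow ExifTool output; the checks are illustrative only.
def reencode_indicators(fields: dict) -> list:
    flags = []
    for key in ("Create Date", "Modify Date", "Track Create Date"):
        if fields.get(key, "").startswith("0000:00:00"):
            flags.append(f"{key} is zeroed")
    if fields.get("Encoder", "").startswith("Lavf"):
        flags.append("Encoder reports libavformat (FFmpeg)")
    return flags

fields = {"Create Date": "0000:00:00 00:00:00",
          "Modify Date": "0000:00:00 00:00:00",
          "Encoder": "Lavf58.20.100"}
print(reencode_indicators(fields))
```

These indicators are only consistent with re-encoding; they do not by themselves prove tampering, so the framework treats them as one input to the overall hypothesis decision.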

PAGE 209

Workflow Optimization Decision. Go to Block 6 for the file preparation decision if the workflow optimization decision is "continue analysis." Document to the right if audio and/or video streams require transcoding in the bifurcation process. Repeat this analysis area for each audio stream if an audio stream is analyzed, or go to Block 27 for video stream analysis.

PAGE 210

Repeat This Analysis Area For Each Video Stream Analyzed

PAGE 212

Overall Decision For Hypothesis