Sonic imagery

Material Information

Title:
Sonic imagery: a view of music via mathematical computer science and signal processing
Added title page title:
View of music via mathematical computer science and signal processing
Creator:
Steinmetz, Shannon ( author )
Place of Publication:
Denver, Colo.
Publisher:
University of Colorado Denver
Publication Date:
2016
Language:
English
Physical Description:
1 electronic file (152 pages)

Thesis/Dissertation Information

Degree:
Master's (Master of Integrated Sciences)
Degree Grantor:
University of Colorado Denver
Degree Divisions:
College of Liberal Arts and Sciences, CU Denver
Degree Disciplines:
Integrated science

Subjects

Subjects / Keywords:
Signal processing -- Digital techniques ( lcsh )
Signal processing -- Digital techniques ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Review:
For centuries humans have strived to visualize. From cave paintings to modern artworks, we are beings of beauty and expression. A condition known as Synesthesia provides some with the ability to see sound as it occurs. We propose a mathematical computer science and software foundation capable of transforming a sound a priori into a visual representation. We explore and exploit techniques in signal processing, Fourier analysis, group theory and music theory and attach this work to a psychological foundation for coloring and visuals. We propose a new theorem for tone detection, a parallelized FFT and provide an algorithm for chord detection. We provide an extensible software architecture and implementation and compile the results of a small survey.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: Adobe Reader.
Statement of Responsibility:
by Shannon Steinmetz.

Record Information

Source Institution:
University of Colorado Denver Collections
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
982959618 ( OCLC )
ocn982959618
Classification:
LD1193.L584 2016m S74 ( lcc )



Full Text
SONIC IMAGERY: A VIEW OF MUSIC VIA MATHEMATICAL COMPUTER SCIENCE AND SIGNAL PROCESSING
by
SHANNON STEINMETZ
Bachelor of Science, MSU, 1999
A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Master of Integrated Sciences
Integrated Sciences
2016


This thesis for the Master of Integrated Sciences degree by Shannon Steinmetz has been approved for the Integrated Sciences Program by
Ellen Gethner, Chair
Gita Alaghband
Varis Carey
April 18, 2016


Steinmetz, Shannon (MIS, Integrated Sciences)
Sonic Imagery: A View of Music via Mathematical Computer Science and Signal Processing
Thesis directed by Associate Professor Ellen Gethner
ABSTRACT
For centuries humans have strived to visualize. From cave paintings to modern artworks, we are beings of beauty and expression. A condition known as Synesthesia provides some with the ability to see sound as it occurs. We propose a mathematical computer science and software foundation capable of transforming a sound a priori into a visual representation. We explore and exploit techniques in signal processing, Fourier analysis, group theory and music theory and attach this work to a psychological foundation for coloring and visuals. We propose a new theorem for tone detection, a parallelized FFT and provide an algorithm for chord detection. We provide an extensible software architecture and implementation and compile the results of a small survey.
The form and content of this abstract are approved. I recommend its publication.
Approved: Ellen Gethner


DEDICATION
This work is dedicated to my loving family Diane, Kerlin, Brandie, Lathan, Olive and Wesley Steinmetz, Harry Lordeno, Syrina, KJ, Javier and my little buddies Paco and Rambo, who sat hours on end being ignored while I tapped away at my computer and scribbled on my white board. I would also like to dedicate this work to my great friends Charly Randall and Brian Parker, who gave me confidence, inspiration and ideas throughout my life, as well as Mike Iiams, without whom I'd never have been given the opportunities that led me down this path.


ACKNOWLEDGMENT
I would like to thank the University of Colorado and all the wonderful folks that provided us opportunities to share and grow our research. This work could not have been possible without great mentors. I would like to thank Dr. Ellen Gethner for her ideas, inspiration and for being the single greatest contributor to my academic experience. I would like to thank Dr. Martin Huber who showed me patience and understanding during tough times and offered me this incredible opportunity. Without these wonderful scholars I literally would not have made it. I would also like to thank Jason Fisk for encouraging me to grow and Jim Muller, Dr. Bob Lindeman, Doc Stoner, Scott Cambell, Dr. Blane Johnson and Jeff Caulder for being role models and mentors for so many years.


TABLE OF CONTENTS
Tables
Figures
Chapter
1. Introduction
2. Inspiration
2.1 Previous Work
2.1.1 A Little Music Please?
2.2 Signal Basics
2.3 Synesthesia
2.3.1 A Colored Hearing Theorem
3. Proof of Concept
3.1 Discovery and Approach
3.1.1 A Time Domain Experiment
3.1.2 Initial Results
3.1.3 Approach
3.1.4 Animation Time Budget
4. Research and Development
4.1 Fourier Analysis and Frequency Detection
4.1.1 Understanding the DFT
4.1.2 A Parallelized Fourier Transform
4.1.3 A Synthetic Test
4.2 Tone Detection and Characterization
4.2.1 A Detector Predicate
4.2.2 Musical Note Characterization
4.2.3 Rigorous Characterization Analysis
4.3 Chord Detection and Characterization


4.3.1 A Chord Detection Algorithm
4.4 Melody Analysis
4.4.1 A Generalized Parameterized Visualizer
4.4.2 Mmmmmm, The Musical Melody Mathematical Modularity Movement Manager
5. Results
5.1 Experimentation
5.2 Survey Results
5.3 Conclusions and Future Work
5.3.1 Tangential Applications
5.3.2 The Lawnmower Filter
5.3.3 An Instrument Fingerprint
5.3.4 Conclusions
References
Appendix
A. Source Code
A.0.5 Musical Detect Class
A.0.6 Musical Note Class
A.0.7 Music Utility Class
A.0.8 Chord Class
A.0.9 Chord Detection Class
A.0.10 Sound Processing Bus Class
A.0.11 Media Utilities Class
A.0.12 PCM Info Class
A.0.13 Melody Analysis
A.0.14 Fourier Transform Class
A.0.15 Synthesizer Class


A.0.16 Complex Number Class
A.0.17 Vortex Visual
A.0.18 Visualizer Interface
A.0.19 Visualizer Space
A.0.20 Survey Results


TABLES
Table
2.1 Music Intervals [25,43]
2.2 (Synesthesia) Note Color Association
2.3 (Synesthesia Tone Color Mapping)
3.1 Time Domain Initial Parameterizations
3.2 Animation Time Budget
4.1 Fundamental Frequencies [7,38]
4.2 DFT vs FFT Performance
4.3 FFT Performance Extracting the Full Spectrum of Tones
4.4 FFT Versus DFT Accuracy
4.5 Initial Note Guessing Results
4.6 Final Note Guessing Results
4.7 Accuracy of Random Signals
4.8 Detection & Characterization Results (Initial Metrics)
4.9 Detection & Characterization Results (Undetected)
4.10 Detection & Characterization Corrected Results (Final Metrics)
4.11 Chord Detector Basic Test
4.12 Visual Space Axioms
5.1 Questions and Answers


FIGURES
Figure
2.1 The Five Features of Music
2.2 (PCM) Time Series Graph
2.3 Hue, Saturation, Brightness Color Scale [23]
2.4 Tigger Stripes, 16 to 31 (Hz) (No Noise)
2.5 Tigger Stripes, 16 to 22.05 (Khz) (No Noise)
2.6 Tigger Stripes, 16 to 22.05 (Khz), No Noise (left), 50% Noise (right)
3.1 Example Set of Parameterized Geometric Figures
3.2 Symphony Sound Dogs
3.3 Symphony (Beethoven 12th Symphony, Beethoven Violin Sonata, Schubert's Moment Musical)
3.4 Techno Electronica (Termite Serenity, Nao Tokui, She Nebula, Termite Neurology)
3.5 Various Composers (Debussy Clair de Lune, Mozart Eine Kleine, Mozart Sonata, Chopin Etude)
3.6 Research approach
4.1 Cosine Function x = cos(t)
4.2 x = 2cos(2πt) + 3cos(2πt) = 5cos(2πt)
4.3 Random Waveform of More Than One Frequency (Note: not an accurate graph)
4.4 Dividing Frequencies, a + bi ∈ ℂ
4.5 Geometry of Complex Frequency, C = Constant Amplitude/Radius, k = Frequency, n = Real valued coefficient
4.6 Synthesized Signal at 27.5 · 2ᵏ, {0 ≤ k ≤ 11}
4.7 Comparison of Standard DFT to Parallelized FFT
4.8 Hanning Window
4.9 Synthesized Signal {2049, 8000, 16000, 22031} (Hz) Over 1 Second
4.10 Synthesized Tones @ 44.1 Khz, {A, C, G#} Over 5 Seconds


4.11 Comparison of Real vs Discrete Tigger Theorem (Note vs T(a))
4.12 Accuracy Plot (Note vs Harmonic vs Percent Accuracy (0–100%))
4.13 Synthesized Random Sounds @ 44.1 Khz
4.14 Detection & Characterization Results (Run vs % Accuracy)
4.15 Detection & Characterization Results Corrected (Run vs % Accuracy)
4.16 Use Case (New Note)
4.17 Use Case (Match)
4.18 Use Case (Refresh)
4.19 Use Case (Residual)
4.20 Use Case (Expiration)
4.21 Use Case (Kill)
4.22 ZR Group Example
4.23 D\2 Pitch Class
4.24 Visual Space Example
5.1 Beethoven Minuet in G
5.2 Techno Electronica (She Nebula)
5.3 Survey Results by User (Music Genre vs Grade %)
5.4 Survey Results, Average Grade by Genre
5.5 Survey Results, Top and Bottom 5 Scores
5.6 Fingerprint Technique


1. Introduction
For centuries humans have strived to visualize. From cave paintings to modern artworks, we are beings of beauty and expression. Within the very nature of our language is the underlying desire to express what we feel in terms of pictographic imagery. The English language is laden with terms such as "let me see" or "see what I mean," and rarely do we give the underlying meaning of such terms a second thought. When given new information in the classroom we often desire a picture of the concept to solidify understanding. When we hear the words "c squared equals a squared plus b squared" they portray little intuition, but when shown a right triangle something clicks. There is often a chasm between representation and intuition, constantly being filled by new technology and ideas. It is within this chasm we begin our climb.
Our thesis is inspired by the idea of Synesthesia, which is defined as the cross modality of senses [37,47], and we aim to devise a mathematical computer science capable of transforming the physical shape of a sound into an intuitive representation, agnostic of culture or background. For example, imagine a musician with an instrument connected to his or her computer: as the musician plays, s/he sees, in real time, amazing patterns, shapes and colors congruent to the harmony and representative of the actual mood of the melody. Similarly, one may select a song from an mp3, mp4, or .wav file and play the music into an application capable of rendering sonorities¹ as they emerge. Our research leverages Fourier Analysis, Signal Processing Detection/Characterization, Computer Graphics/Animation, Group Theory, Musical Geometry, Music Theory and Psychology. This task is daunting: it requires not only a profound understanding of a number of advanced scientific disciplines but the ability to integrate several research areas into a cohesive model involving theoretical, subjective and experimental methodologies.
¹A term in music theory to describe a collection of pitches.


In order to provide a rigorous thesis and still be able to maintain a level of creativity we address three major fronts: a) the construction of a mathematical model and computer algorithms, b) the substantiation and derivation of a philosophy involving the human perception of music, and c) the aesthetics of computer generated Art. We build from the works of Dmitri Tymoczko [42,43], Cytowic and Wood [36,37], Stephen W. Smith [38], James Cooley and John Tukey [17], Bello, De Poli, Oppenheim [6,22,30], Michal Levy [24] and Gethner, Steinmetz & Verbeke [9], to name a few. Our thesis takes one small step toward the derivation of a model (mathematics, algorithms, software and artistic creativity) capable of transforming the physical shape of a sound into imagery divorced from cultural subjectivity.


2. Inspiration
2.1 Previous Work
Since the dawn of the electronic era, mathematicians, physicists, computer scientists and electrical engineers have been attacking the seemingly unsolvable problem of blindly characterizing a time series.¹ Whether we are parsing a doppler RADAR system, human speech or music we leverage much of the same mathematics and techniques. Our endeavour hinges on the ability to extract notes from a time series of raw energy impulses. In the 1970s MIT's Alan Oppenheim pioneered some of the first techniques in speech transcription and signals analysis. In 1977 the University of Michigan's Martin Piszczalski and Bernard Galler implemented one of the first computer algorithms to transcribe monophonic tones. Later many experts such as Juan Pablo Bello and Giovanni De Poli added various methodologies to improve transcription of monophonic and polyphonic instruments, high frequency detection, peak detection, isolation and so on. The field of polyphonic music transcription serves as a guide to deriving a mathematical model and methodology. A tremendous amount of work has been done in the area of automatic transcription but sadly, there is no magic equation and the various approaches come down to their individual trade-offs [6,18,21,22,28-30,34,38].
¹A series of energy impulses extracted from an analog signal (covered in Section 2.2).
Unfortunately (or fortunately, depending on how you look at it) we must switch focus rapidly in our research because we draw from so many disciplines at once. We turn our attention now to the inspiration of musician and author Michal Levy [24], who herself has a condition known as Synesthesia. In her beautiful, procedurally generated animations one can see choreographed imagery that mirrors the tempo and flow of a song. Michal Levy constructed several animations, including the title Giant Steps, designed to intuitively externalize her condition. Another famous contributor to music visuals is the composer and computer scientist Stephen Malinowski


who in the 1980s constructed a simplistic but effective visualizer which leverages encoded MIDI information to create injective animations. Strangely enough, Malinowski was inspired by his experiments with animated graphical scores in 1974 after taking LSD and listening to Bach [26]. This is not only interesting but substantial because, according to psychological research, LSD may induce synesthesia [36].
2.1.1 A Little Music Please?
It is well known that music is underpinned by a geometric structure [1,42,43]. For our research it is important to digest a small amount of music theory, especially when it comes to jargon. Engineers and scientists are known for the compulsion to name everything and musicians, as it turns out, are no different. One of the most commonly used terms is the term interval. An interval is nothing more than the distance between any two notes and should be no stranger to mathematicians as its meaning is consistent in music theory. However, musicians created confusing labels for each of the non-negative integers up to and including 12. A scientist could go insane trying to mnemonically associate the labels since the numeric value in the label has little to do with the actual interval. Table 2.1 describes the intervals and their names. Notice that seventh is actually a step of 10 or 11, and sixth is a step of 8 or 9. Wow!
Anyway, we must be aware of this nomenclature as it is vital to understanding much of the psychological research and music theory research regarding tonality. We turn now to the fundamental inspiration for our thesis, the work of the great music theorist and mathematician Dmitri Tymoczko. To be fair to Dr. Tymoczko, in his own words he states "I am not a mathematician" [43]; however his work in music geometry contradicts such a claim. Tymoczko's work is the glue that holds our suspicions in place. In the book A Geometry of Music, Tymoczko describes a scientific model for the behavior of music. His claims, many substantiated and some not, strongly suggest an objective characterization of melody and harmony, which boils down to five fundamental features: Conjunct Melodic Motion, Acoustic Consonance, Harmonic Consistency, Limited Macroharmony and Centricity.


Table 2.1: Music Intervals [25,43]
Step Name
0 unison
1 minor second
2 major second
3 minor third
4 major third
5 perfect fourth
6 diminished fifth
7 perfect fifth
8 minor sixth
9 major sixth
10 minor seventh
11 major seventh
12 octave
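Because the numeric value in a label has little to do with the underlying step count, the safest way to handle intervals in software is a direct lookup keyed on step; the following is a minimal C# sketch of Table 2.1 (the array name is ours, purely illustrative):

    // Interval names indexed by step count 0..12, per Table 2.1.
    static readonly string[] IntervalNames =
    {
        "unison", "minor second", "major second", "minor third", "major third",
        "perfect fourth", "diminished fifth", "perfect fifth", "minor sixth",
        "major sixth", "minor seventh", "major seventh", "octave"
    };
    // Example: IntervalNames[7] yields "perfect fifth".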
Each of these terms is sophisticated and difficult to understand without a background in music theory. We will attempt to define these terms intuitively while hopefully doing justice to Tymoczko's work. Conjunct Melodic Motion means that some harmony does not differ in its interval between notes by too large of an amount. This is substantiated by the fact that changes in frequency which are too small are undetectable, and changes in frequency which are too large are offensive [43]. Acoustic Consonance is the term used to describe that consonant harmonies are preferred to dissonant harmonies and usually appear at stable points in the song. Harmonic Consistency is perhaps the most important to us as it suggests a sequence of tones whose geometric structure is similar within some frame of a sound. Limited Macroharmony can be thought of as the external distance of a passage of music before a noticeable transition occurs. Centricity should be comfortable to most mathematicians and engineers as it describes an inertial reference frame or


centroid to the sequence of music. Applying the law of large numbers to music theory, one can almost envision centricity as the expected value of the music. Figure 2.1 provides one interpretation of the five features, excluding consonance. Each frame represents a macroharmony and within each frame are two chords (represented by red dots). Notice that the second chord in each frame is a linear combination of the first (scaled equally).
Figure 2.1: The Five Features of Music
We will leverage the ideas of Tymoczko in Section 4.4, where we attempt to garner a sense of behavior from a sequence of characterized tones in a time series. For now we continue onward by discussing the different models for music characterization. It is important to note that transcription is the primary source of our mathematical model but we are not attempting to transcribe music here. To bound the scope of this thesis we will be less concerned with the specific instrument, timbre or music notation than we are with the raw tones and chords present in any given second of a time series. Expanding upon both time fidelity and instrument identification may be part of future work. There are several models that are used when attacking the problem of music transcription; of them the most popular have been a) Bottom Up, where the data flow goes from raw time to characterized notes, b) Top Down, where one begins with a posteriori knowledge of the underlying signal, c) Connectionist, which acts like a human brain (or neural network) divided into cells, each a primitive unit


that processes in parallel and attempts to detect links, and d) Blackboard Systems, a very popular and sophisticated approach that allows for a scheduled opportunistic environment with a forward-esque flow of logic and a feedback mechanism based upon new information [21,22]. It is the Bottom Up approach we chose to leverage, largely due to the fact that we intend to process raw sound a priori, which is to say, independent of a posteriori knowledge. It is at this time we transition our discussion toward an area of study dedicated to the brain and perception; we speak of course of Psychology and though it is a small portion of our thesis, it is of major influence.
2.2 Signal Basics
Our research depends upon the behavior of a sound wave. Sound waves travel through the atmosphere and generally range from 25 Hz to 25 Khz [6,38], or in other words 25 cycles per second to 25 thousand cycles per second. As we age the range of human hearing decreases because our ear fibers become brittle over time and can no longer sense changes at such a high rate [38]. The human ear perceives sound by the changes in pressure generated by the frequency on both the up and down cycle of the wave [38,42]. The speed at which that pressure changes (e.g., the frequency) is the way in which a brain interprets information as sound. The faster the change (higher frequency) the higher the pitch, and vice versa. The relationship so described provides a conduit to decomposing the raw information into its basic parts and in turn algorithmically interpreting and processing information about sound. Most users generally listen to music in the form of a Compact Disc, MP3 player, or from a television or other stereo source; all of these systems use an encoding scheme called PCM. PCM stands for Pulse Code Modulation and is the preferred means of transmitting and storing digital sound information electronically [2,39]. Figure 2.2 illustrates a simple time domain signal.
If one observes the red line as a measurement of how intensely a sound is recorded over a period of time (going from left to right on our graph), then one can gain a good idea of how a sound wave is received.


Figure 2.2: (PCM) Time Series Graph
In Figure 2.2 the small blue dots are discrete points identified along the curve, in this case a 3 Hz analog wave. These sample points are where a microprocessor system, such as an analog to digital converter, would measure the sound wave height and store it for use. The number of times sound is sampled determines how accurately the digital copy represents the real sound. The Shannon-Nyquist Theorem [38] states that to accurately represent a signal in digital form one must sample at least two times the maximum frequency. A common sample rate found in mp3 files is 44.1 Khz. Since normal human hearing tends to run between 20 Hz and 20 Khz [38] this makes for a good sample rate because by the Nyquist rule we have 44100/2 ≈ 22 Khz as the maximum audible frequency. Such an encoding provides us with a well defined discretization of an analog sound wave. We now have enough information to break down the original sound.
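To make the encoding discussion concrete, the following is a minimal C# sketch of pulling PCM samples out of a file with the NAudio package referenced in Chapter 3; the file name is illustrative and the buffer handling is a simplification, not the thesis's appendix code:

    using NAudio.Wave; // the NAudio package referenced in Section 3.1

    class PcmReadSketch
    {
        static void Main()
        {
            using (var reader = new AudioFileReader("example.mp3"))
            {
                int rate = reader.WaveFormat.SampleRate;                   // e.g. 44100 Hz
                var buffer = new float[rate * reader.WaveFormat.Channels]; // ~1 second of samples
                int read;
                while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
                {
                    // buffer[0..read) now holds normalized PCM amplitudes in [-1, 1]
                }
            }
        }
    }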
2.3 Synesthesia
The psychological condition known as synesthesia involves what is known as a cross modality of senses [36], where the so-called Synesthete experiences a (usually) involuntary overlap between some combination of hearing, sight and taste. For example one may quite literally taste color, smell sound or more importantly see sound.


We do not offer a rigorous study in Psychology; rather, we intend to leverage elements of research, particularly in the visualization of sound, as a road map toward what may be a more scientific approach to visualizing music. Some compelling results lend credence to our thesis that there exists a universal interpretation of sound. The core of our conjecture is the idea that one can see the physical shape of music and experience visuals in a culturally neutral fashion. The research, as it turns out, seems to support such an idea. The work of Cytowic and Wood in the 1970s suggests a relationship between synesthetes and so-called normals (those without synesthesia) [36,37]. Their research demonstrates a likely connection between synesthetes and normals, which suggests that colors are intuitively associated with certain sounds. Zellner et al. suggest that a version called weak synesthesia is experienced by most people, as opposed to strong synesthesia, experienced only by Synesthetes [16]. In addition to color, Synesthetes typically visualize a shape, which is referred to as a photism [37]. The term photism is used often in psychological circles and refers to the geometries and color seen by synesthetes when hearing a particular frequency or melody. A photism is defined as a hallucinated patch of light [3]. Amongst the research of synesthetes, a photism is used to describe the stimuli when presented with a tone, chord or complex melody.


A note about the author
The concept of photism can be difficult to explain. When I was young I used to experience a stunningly choreographed array of color and shapes whenever I would hear any of my favorite music. Most commonly I would experience a vortex of spinning gradients that changed in brightness and hue synchronized to the tempo. Sometimes these vortices would change homomorphically into other polytopes. The experience was involuntary and intense for several years but began to slowly fade as I aged. I had not been convinced I had experienced Synesthesia until I read the works of Carol Bergfeld Mills et al. [5], where the exact conditions I experienced were reported by others. At the start of our research I could not help but feel unsurprised when learning which colors would commonly be chosen to represent various frequencies. I was equally unsurprised when presented with evidence of which shapes were most commonly selected. Now, I understand why. Although I no longer possess an involuntary response, I do often purposefully visualize similar photisms when hearing a pleasing macroharmony.
As it turns out the idea of visualizing sound is not new; in the 1930s Otto Ortmann mapped out several charts attempting to define ranges and tones and their corresponding color, leveraging the work of Erasmus Darwin (1790) and Isaac Newton (1704), who both predicted the existence of a pitch color scale [32,36]. It is on the association of tone and frequency that we focus much of our research. Colored hearing, as it is called, is believed to be the most common form [36] and most often presents when some tone is played that lasts for more than 3 seconds [5]. Throughout much of the published works on colored hearing there is a common theme, that is to say, without prior knowledge a substantial portion of synesthetes


experience a common scale of color ranging from 26% to 82% concurrence, and photisms whose size and scale show as high as 98% concurrence between synesthetes [5,8,16,31]. Conveniently, the generalized experience in color and frequency can be heuristically mapped to a simple algorithm. From the works of Konstantina Orlandatou [31] we have the ranges of 0–50 Hz as mostly black, 50–700 Hz as mostly white, and 700 Hz to 3 Khz as mostly yellow. Orlandatou also notes that there is a clear association between pure tone and singular coloring and vice versa. We see from Lawrence Marks that there is a quantifiable relationship between amplitude and dimension of a photism [27]. It has also been observed that noise is generally achromatic² [5,31]. We also see in multiple studies [16,31] that pure tones are often seen as yellow and red whereas a sawtooth tone is seen as green or brown. This can be associated with the harmonic steps in a melody. Finally, we see that there is a mood associated with some coloring, which may present a challenge if not for the work of Dmitri Tymoczko (Section 2.1), wherein the overall behavior of a macroharmony can be decoded with the use of some music geometry. It is from these works of Psychology and Synesthesia that we derive the following tables, which will act as our algorithm guide going forward. The contents of Tables 2.2 and 2.3 have been constructed by consolidating the works of [5,8,16,19,27,31,32,36,37]. These mappings represent direct psychological experiments with our own interpolations and statistical averaging of Synesthete responses.
²Being without color or black/white.


Table 2.2: (Synesthesia) Note Color Association
Base Frequency Map N(a)
Note RGB Color
C/C# blue
D/D# red
E yellow
F/F# brown
G/G# green
A/A# green
B black
Table 2.3: (Synesthesia Tone Color Mapping)
Additive Frequency Map F(a)
Frequency Range Note Color
0–50 (Hz) All black
50–700 (Hz) All white
700 (Hz)–22 (Khz) All yellow
Pattern Map P(a)
Frequency Pattern Color Effect
Harmonic Stair Step green


2.3.1 A Colored Hearing Theorem
We will now derive a few theorems allowing us to extract a numeric color value from a frequency.
Theorem 1 (The Roswell Theorem). There exists a proportionate mapping such that an increase in noise takes any color toward the gray color scale.
Proof: Recall the linear interpolation equation (1 − t)P₁ + tP₂ with P₁, P₂ vectors in ℝᵐ. The value of t ranges from 0 to 1, being a percentage of the total distance between vectors P₁ and P₂. Let P₁ be an ℝ³ vector of the form (r, g, b) where the values r, g, b range from 0 to 255, and let P₂ = (128, 128, 128) (the gray color). Let σ represent the total noise level of a signal. If we compute t = σ/max(σ) then we have a ratio of 0 to 1 over the range of the noise. If we substitute t back into the interpolation equation (1 − t)P₁ + t(128, 128, 128) we have a linear interpolation, which transitions any color toward gray as noise increases.
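As a quick illustration, Theorem 1 amounts to a component-wise linear interpolation toward gray; below is a minimal C# sketch (the method and parameter names are ours, not taken from the appendix source):

    using System;

    static class RoswellSketch
    {
        // Interpolates an RGB color toward gray (128,128,128) as noise increases (Theorem 1).
        public static (int R, int G, int B) TowardGray((int R, int G, int B) c, double noise, double maxNoise)
        {
            double t = maxNoise > 0 ? noise / maxNoise : 0.0; // ratio in [0,1] over the noise range
            int Mix(int v) => (int)Math.Round((1.0 - t) * v + t * 128.0);
            return (Mix(c.R), Mix(c.G), Mix(c.B));
        }
    }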
Figure 2.3: Hue, Saturation, Brightness Color Scale [23]


Theorem 2 (The Stripes Theorem). Define the binary operators ⊕ and ∘ to be additive color³ and color intensity operations, respectively. Any musical frequency a can be mapped to a color consistent with Colored Hearing using the equation

(r, g, b) = (1 − t)[H(a) ∘ (N(a) ⊕ F(a))] + t(128, 128, 128)

where

t = σ / max(|a|)    (2.1)

and σ is the noise level.
Proof: Let N(a) be a mapping of the fundamental frequency to an HSV⁴ color from Table 2.2 and F(a) be a mapping from any frequency range to an HSV color (h, s, v) in Table 2.3. When we mix colors with

(h, s, v) ⊕ (j, t, w) = ((h + j)/2 (mod 360), (s + t)/2 (mod 1 + ε), (v + w)/2 (mod 1 + ε))    (2.2)

according to the surface of the cone in Figure 2.3, we then scale the result of (2.2) according to the harmonic k = H(a) such that

k ∘ (h, s, v) = (h, (1 − k)/max(k), k/max(k)).    (2.3)

The result of (2.3) is a color combination of the observed note-color and general frequency-color matching that is brighter for higher frequencies and darker for lower. Assuming that ∘ maps to an RGB⁵ value, we plug the result of (2.3) into Theorem 1 and the output is a color that simulates a Synesthete's Colored Hearing response.
⁴HSV or HSB is a hue, saturation, brightness color scale where 0 ≤ saturation, brightness ≤ 1 and 0 ≤ hue < 360 [15].
⁵RGB is a red, green, blue color scale used commonly in computer graphics where 0 ≤ r, g, b ≤ 255 [35].
Figures 2.4, 2.5 and 2.6 illustrate color samples using the Stripes Theorem, which were generated over the frequency range 16 to 22.05 (Khz). In Figure 2.4 we see a darkened version of the blended colors over the range of 16 to 31 (Hz). We


Figure 2.4: Tigger Stripes, 16 to 31 (Hz) (No Noise)
Figure 2.5: Tigger Stripes, 16 to 22.05 (Khz) (No Noise)
Figure 2.6: Tigger Stripes, 16 to 22.05 (Khz), No Noise (left), 50% Noise (right)
then generate colors for 22 (Khz), shown in Figure 2.5, where we notice a clear transition from darker to lighter tones, which is consistent with our proof. Finally, in Figure 2.6 we add 50% noise to the frequency spectrum (right side), which causes a clear transition toward the gray scale consistent with Theorem 1.
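For concreteness, the two operators can be coded almost verbatim; the following minimal C# sketch implements the mixing (2.2) and harmonic scaling (2.3) steps on HSV triples, with ε guarding the modulus. The class and method names are ours, and the final HSV-to-RGB conversion (followed by Theorem 1's gray interpolation) is left to a standard conversion routine:

    using System;

    static class StripesSketch
    {
        const double Eps = 1e-9;

        // Equation 2.2: additive mixing of two HSV colors on the cone surface.
        public static (double H, double S, double V) Mix(
            (double H, double S, double V) a, (double H, double S, double V) b)
            => (((a.H + b.H) / 2.0) % 360.0,
                ((a.S + b.S) / 2.0) % (1.0 + Eps),
                ((a.V + b.V) / 2.0) % (1.0 + Eps));

        // Equation 2.3, as stated: scale a mixed color by harmonic k so that
        // higher harmonics come out brighter and lower harmonics darker.
        public static (double H, double S, double V) Scale(
            double k, double maxK, (double H, double S, double V) c)
            => (c.H, (1.0 - k) / maxK, k / maxK);
    }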


3. Proof of Concept
3.1 Discovery and Approach
It requires so many technical elements to solve the problem of visualizing sound, so much so that the question of how to begin poses a significant challenge. We must contend with the logistics of parsing and interpreting PCM, reading from various devices, the means with which we can display graphical information and all of the structures and utilities necessary to calculate and render. As with any journey, we must take a first step, and what better step to take than a simple end-to-end prototype that can read a sound file and generate some sort of mapped graphical imagery using only the time domain information. This prototype, or proof of concept if you will, allows us to learn a few things about our data. We construct a sound processing framework in the Microsoft Windows environment using C# .Net. This language was chosen primarily for its high performance capability and flexible syntax, which allows us to leverage operator overloading to more easily handle mathematical structures. We are also able to utilize the raw struct syntax, which provides on-stack memory allocation as opposed to dynamic memory, which incurs a significant performance decrease when dealing with random access. The SoundBus Framework, as we call it, is a processing framework that utilizes an animation plug-in system where each animation plug-in acts as an interface that can receive both sound data messages and requests to render their current content. This allows us to experiment with different algorithms without losing any previous work. We employ the Microsoft XNA graphics application program interface (API) that allows us to speak to the graphics processing unit (GPU). The NAudio sound processing package, which acts as an API connecting our system to the sound input device, frees us from having to implement a device driver or decompression algorithm. Our implementation is capable of seamlessly processing raw pulse data from a raw MP3, .wav file, or direct microphone input. At first, we intended on providing a one-to-one mapping of impulse to graphical element.
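The plug-in contract can be pictured as a small C# interface; the sketch below is hypothetical (these are not the appendix class names) but captures the two obligations described above:

    // Hypothetical shape of a SoundBus animation plug-in: it consumes streaming
    // PCM sample messages and is asked to render its current state on each paint interval.
    public interface IAnimationPlugin
    {
        void OnSamples(float[] pcm, int sampleRate); // sound data message
        void Render(double elapsedSeconds);          // render request from the display loop
    }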


There are several challenges when dealing with an attempt to synchronize the visualization of sound and imagery. At 44K impulses per second, a 100 Hz refresh and drawing one image per frame, the backlog grows arithmetically as β(t) = (44100)t − (100)t. Even if we increase to 100 images per frame, after 10 seconds we have a backlog of β(10) = (44100)(10) − (10000)(10) = 341,000 images to be drawn. As a consequence we can never keep up with the sound that is playing in real-time without creating tremendous clutter on screen or simply rendering an image the user never actually sees. Thus, we abandon the one-to-one rasterization¹ but not the one-to-one calculation. Our framework provides a stream of sound information to an interface designed to process individual samples at a time. Simultaneously there exists another interface mechanism that is called on a 30 FPS interval to refresh the computer display. In order to keep the different streams synchronized we employ a timer system that calculates the current temporal backlog and flushes data to the graphics calculation. The processing implementation system then chews off individual data blocks and continuously incorporates the data into a set of running parameters for our geometric figures. At any given time, the interface is asked to render itself in its current state. The end result is that we receive a fluid animation that generally mirrors the pace of the sound and neither gets too far behind, nor too far ahead if reading from a sound file.
3.1.1 A Time Domain Experiment
The images shown in the upcoming results section are created using the following approach: we primarily take advantage of statistical characteristics of a sound wave in the time domain. The implementation receives the PCM over time and parameterizes a set of simple dihedral geometric figures whose edges are drawn in stages over time based upon initial parameters. We begin with the set X = {x | x ∈ ℤ, −2ᵏ ≤ x ≤ 2ᵏ} that describes the amplitude data. To minimize clutter we limit the total number of animations on screen at any time to n ∈ ℤ⁺. In our time sampling we deal with
¹A term in computer graphics that describes converting memory elements to screen pixels [35].


hundreds of thousands of samples in just a few seconds. We must limit the elements generated so as to not overburden the graphics system and our CPU. Experimentally, we chose a hard limit of 200 items, which we shall adjust later as needed. Let S = {sₖ | 0 ≤ k ≤ n} represent our parameterized animation elements such that sₖ = (θ, φ, α, P, r, m, v) with 0 ≤ θ ≤ φ ≤ 2π, where θ, φ are the starting and ending angles, α the current angle of rotation, P ∈ ℤ³ the centroid, and r, m the radius and color respectively (the color here is the integer form of a bitwise combined RGB value). Finally, v represents the step that determines the number of vertices in the geometric figure. We then define a set of mappings fₚ : X → S that map time parameters to an animation element; f_d : S → ℝ³, that maps an animation element to the display (a 2 × 2 × 2 bounded region in ℝ³); and fₙ, where fₙ(x) = −1 + |x/max(x)| · 2, that normalizes data to screen bounds. Table 3.1 contains the initialization parameters that map elements in X to elements in S.
As a new amplitude is received, an initial state that represents the signal at that time is constructed by way of the parameters defined in Table 3.1. As mentioned in Section 3.1 there are two key stages consisting of a paint interval and time step. When a paint interval occurs the elements in S are rendered as a curve extrapolated from the set of rotations R = {i | i ∈ ℝ⁺, i₀ = θ, iₖ₊₁ = iₖ + (φ − θ)/60, i ≤ α}. The shape is then mapped to the display with a simple linear transformation f_d(sₖ, i) = (r·cos(i) + P(sₖ)ₓ, r·sin(i) + P(sₖ)_y, P(sₖ)_z); this essentially connects the vertices of some partially, or fully formed regular polygon on screen over time. Depending on the current rotational perspective we also paint a disc at the centroid of a geometric figure whose size is determined by r(α − θ)/rφ where r, φ ≠ 0, which produces a visual singularity type effect. Simultaneously, at each time step we increment the current angle α of each element sₖ by α = α + v. An animation reaches its life's end when α > φ, at which time it is purged. The size, color and vertex count of an animation element is a direct representation of the shape of the pulse waveform at the time it is created.


Table 3.1: Time Domain Initial Parameterizations
Parameter Value Description
x Current amplitude
xₖ₋₁ Previous amplitude
σ Signal to Noise Ratio
g Gain
θ |x/255 · 2π| Starting angle
φ θ + |xₖ₋₁/255 · 2π| Ending angle
α 0 Current angle
r x/(max(xₖ) · 2) Radius
v (φ − θ)/30 Rotational velocity
Color_Red fₙ(x) (mod 255) RGB Red Value
Color_Green xₖ₋₁ (mod 255) RGB Green Value
Color_Blue α · 255 (mod 255) RGB Blue Value
Pₓ fₙ(rand() + 2g − 1) Centroid X
P_y fₙ(rand() + 2g − 1) Centroid Y
P_z fₙ(rand() + 4g − 1) Centroid Z
As an additional visual element we also set a gradient tone for the background based upon the current signal strength, where Background RGB = (0, 0, fₙ((E[X]ₖ − E[X]ₖ₋₁)/E[X]ₖ) (mod 128)) with E[X] being the expected value. Note that our display rotates the entire view matrix about the y-axis (assuming y points north) very slowly in a counterclockwise direction. The rotation angle is associated with an average of a subset consisting of recent amplitudes in ratio to the maximum.
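In code, the initialization of Table 3.1 can be sketched as follows; this is our own condensed C# illustration (the struct, field names and the use of short.MaxValue as the maximum amplitude are shorthand assumptions, not the thesis implementation):

    using System;

    // One animation element s_k = (θ, φ, α, P, r, m, v), initialized per Table 3.1.
    struct AnimElement
    {
        public double Theta, Phi, Alpha, Radius, Velocity;

        public static AnimElement FromAmplitude(double x, double xPrev)
        {
            var s = new AnimElement();
            s.Theta = Math.Abs(x / 255.0 * 2.0 * Math.PI);             // starting angle
            s.Phi = s.Theta + Math.Abs(xPrev / 255.0 * 2.0 * Math.PI); // ending angle
            s.Alpha = 0.0;                                             // current rotation
            s.Radius = x / (short.MaxValue * 2.0);                     // radius vs max amplitude
            s.Velocity = (s.Phi - s.Theta) / 30.0;                     // rotational velocity
            return s;
        }

        // Each time step advances the rotation; the element is purged when α > φ.
        public bool Step() { Alpha += Velocity; return Alpha <= Phi; }
    }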


Figure 3.1: Example Set of Parameterized Geometric Figures
3.1.2 Initial Results
Our application was run against a handful of music files which, in this case, came from symphony music downloaded from the internet. To use the application one simply selects the input source, in this case an MP3 file, and then presses play.
Figure 3.2: Symphony Sound Dogs


Figure 3.3: Symphony (Beethoven 12th Symphony, Beethoven Violin Sonata, Schubert's Moment Musical)
Figure 3.4: Techno Electronica (Termite Serenity, Nao Tokui, She Nebula, Termite Neurology)
Figure 3.5: Various Composers (Debussy Clair de Lune, Mozart Eine Kleine, Mozart Sonata, Chopin Etude)


Figures 3.2, 3.3, 3.4 and 3.5 illustrate screen captures taken from our application while playing the specified music. If one observes the images, particularly Schubert's Sonata from Figure 3.3, one can see the formation of a conic structure composed of successive geometric figures. We conjecture we are seeing the physical shape of the waveform over some duration. This behavior manifests itself throughout most songs. Though mathematically we have shown that our visualizations are in fact a direct result of the shape and behavior of the raw time series, it is very difficult to qualify that we are seeing any behavior that mirrors the underlying tonality or melody. The imagery is captivating and interspersed with brief moments of melodic mimicry and synchronization but it is not a sufficient demonstration of our thesis. We did however accomplish the initial goal of constructing a software framework for moving forward. Portions of the processing source code can be found in Appendices A.0.10, A.0.11 and A.0.12.
3.1.3 Approach
It is time to leverage our research and continue trying to extract what makes the music behave the way it does. Thus, we may attempt to incorporate such behavior into our visuals. In order to do this, we devise a plan for our overall model, illustrated as a road map in Figure 3.6. The plan in Figure 3.6 allows us to handle each stage of the transformation with the level of rigor we deem necessary or possible within the scope of this thesis. We offer a modular approach to the processing pipeline insofar as we break our algorithm into several parts. Each stage yields a clear output that acts as input to the next stage. We do this with the knowledge that we can both improve each stage independently and add tuning and parameter adjustment for known shortcomings. We do not abandon a cohesive mathematical model across modules, nor do we assume that inputs/outputs are mutually exclusive. We merely strive to break apart the challenges and allow independent research and improvement. The idea being, when one component of the algorithm pipeline improves, others do as well.


[Figure 3.6 shows the processing pipeline: Sound Wave Input → Digital Pulse Information → RF Transform → Frequencies → Tones/Chords → Dihedral Group Analysis Engine → Melody Analysis Engine → Graphical Display.]
Figure 3.6: Research approach
Garbage in, garbage out, and vice versa, so to speak. The real beauty is that we can find partial solutions to an algorithm and still move on to the next algorithm. This is very important because there are no perfect solutions with detection and characterization, discussed in Sections 4.1 and 4.2.
3.1.4 Animation Time Budget
As noted earlier, we must be able to process data in realtime and this must be done quickly enough so as not to lag too far behind the music. We propose an initial time budget consisting of 1.5 seconds from time sample to visualization. The time budget acts as a guide on how to tune performance and accuracy of an algorithm. For example, we may sacrifice accuracy in a calculation that is done rarely but is very slow. The concept of the time budget is not new; the Microsoft XNA graphics environment provides a game clock where paths in the code can make choices to skip actions on the current cycle, or double up if lots of time is available. Our time budget, shown in Table 3.2, is a guesstimate based upon experimentation from Section 3.1 and research regarding the Fourier transform. The time budget is a soft requirement that allows us to think of the entire processing pipeline as a cohesive function so we do not lose sight of where we are in time. It is easy to drift off when focusing on one area and forget that it is a small aggregate of a larger calculation.


Ultimately, we expect to see large deviations from our initial guess but we must begin somewhere.
Table 3.2: Animation Time Budget
Step Allotted Time (milliseconds)
Frequency Detection 600
Note Characterization 100
Chord Characterization 200
Geometry Analysis 100
Melody Analysis 100
Graphics Processing/Rendering 200
Total 1,300 (1.3 seconds)


4. Research and Development
4.1 Fourier Analysis and Frequency Detection
To detect, isolate and characterize the contents of a music signal we must be able to extract the tones from a time series. This implies that we require frequency information from the time data. The basis for this assumption is a simple concept proposed by Bello [22] that implies we must be able to determine two major properties of any time distribution. First, we identify the significant frequencies within the sample distribution, specifically those associated with musical tones. Second, we identify the event time of each frequency. Bello proposes three key values (pitch, onset, and duration) which we will utilize implicitly later on. For the time being, we constrain our analysis to a one second interval and use the Discrete Time Fourier Transform (DTFT) [6,38] to extract frequency information from a time series. We choose our time interval to be one second based upon Synesthesia research stating that sub-second intervals of tone are unlikely to invoke a response and longer intervals do not change the response [33]. This also decreases our mathematical complexity and implementation complexity. The model for extracting tones includes all frequencies of the form

fₙ(a) = a · 2ⁿ where a ∈ ℝ⁺, n ∈ ℤ⁺    (4.1)

where every value of n produces a harmonic of the fundamental frequency a. A list of all the fundamental frequencies is shown in Table 4.1. We refer to the fundamental for a particular note as f₀, but we must introduce a few new forms of notation. The following is more computer science than mathematics and is how we will reference a named note as a function: f₀('A') = 27.5, f₁('A#') = 58.28 and so on. We will also use the more common notation for a harmonic frequency found in music texts, which is ⟨Note⟩⟨Harmonic⟩ (e.g., A4, which implies 27.5 · 2⁴).¹
¹A4 = f₄('A') is the fourth harmonic of the set of frequencies fₖ('A') = {440 · 2^(k/12) | −48 ≤ k ≤ 39} [22,38].


Table 4.1: Fundamental Frequencies [7,38]
Note Frequency (Hz) Wavelength
C₀ 16.35 2109.89
C#₀ 17.32 1991.47
D₀ 18.35 1879.69
D#₀ 19.45 1774.20
E₀ 20.60 1674.62
F₀ 21.83 1580.63
F#₀ 23.12 1491.91
G₀ 24.50 1408.18
G#₀ 25.96 1329.14
A₀ 27.50 1254.55
A#₀ 29.14 1184.13
B₀ 30.87 1117.67


Definition 1 (Music Fundamental Set). Let F be the set of fundamental frequencies where F = {f₀('C'), f₀('C#'), …, f₀('B')} as shown in Table 4.1.
Definition 2 (Musical Harmonic Set). Let M be the set of all musical harmonic frequencies where M = {f₀('C') · 2ᵏ, f₀('C#') · 2ᵏ, …, f₀('B') · 2ᵏ, ∀k ≥ 0}.
Using our definitions² and knowing the specific ranges we are targeting, we can design our analysis in such a way as to extract a specific subset of the time domain. We will be dealing with unmodulated data (i.e., there is no carrier wave or shift keying) and assume a maximum sample rate of 44,100 Hz [2,38].
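Definitions 1 and 2 translate directly into code; here is a minimal C# sketch that enumerates M up to the Nyquist limit (the class and method names are ours, and the twelve f₀ values come from Table 4.1):

    using System;
    using System.Collections.Generic;

    static class MusicSets
    {
        // Fundamental frequencies f0 for C0..B0 (Table 4.1).
        public static readonly double[] F =
            { 16.35, 17.32, 18.35, 19.45, 20.60, 21.83, 23.12, 24.50, 25.96, 27.50, 29.14, 30.87 };

        // The musical harmonic set M: every fundamental doubled k times, up to the Nyquist limit.
        public static List<double> Harmonics(double nyquist = 22050.0)
        {
            var m = new List<double>();
            foreach (double f0 in F)
                for (double f = f0; f <= nyquist; f *= 2.0) // f0 · 2^k
                    m.Add(f);
            return m;
        }
    }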
4.1.1 Understanding the DFT
Mathematically speaking, what is a wave? To answer this we must first examine the cosine function. Back in Figure 2.2 we saw a 3 (Hz) wave. If we inspect the cosine function over some period of time t we have the mapping f : ℝ → ℝ where x = f(t) = a·cos(t) with {x | −1 ≤ x ≤ 1}, and the peak and trough of the function are a maximum and minimum of a, illustrated in Figure 4.1 with a = 1. Remember the cosine function rises and falls symmetrically over the abscissa and peak to peak measurements are congruent. Now examine Figure 4.2, which illustrates a very simple example of how a series of cosine values being added can produce a new waveform.
Suppose that instead of f(t) = cos(t) you had t = 2kπn, or f(n) = cos(2kπn), where k is some constant (the frequency) and n is some real number that iterates over all possible values of the function for that desired frequency. For the purposes of Figure 4.2, k would be equal to unity, thus yielding x = cos(2 · 1 · πn). Interestingly enough we have added three waves together, each wave being a single cycle/frequency of amplitudes 2, 3, 5 respectively. In general 2, 3 and 5 could be measurements of voltage, power or some other ratio of change between two values such as Decibels, where a = 10 log₁₀(x/δx) [38].
²To assist the reader we try to maintain consistency and definition mnemonics. Notice that F is the fundamental frequency set and M is all musical frequencies. Whenever possible, we stick with consistent variable names for frequencies, iterants, sets, etc.


Figure 4.1: Cosine Function x = cos(t)
Figure 4.2: x = 2cos(2πt) + 3cos(2πt) = 5cos(2πt)
For the moment we ignore intensity and assume our power levels are at unit. Figure 4.3 is a pictographic representation of a random wave. Note that this wave is not accurate in terms of its structure but for intuition only. This image depicts how two waves of different frequencies are able to be added together to form a new wave just as in the previous example. Using Figure 4.2 as a starting point one might see that we can reverse engineer the presence of any original wave within another. How do we do this? By dividing out each frequency we perceive to be present in the original time series, as illustrated in Figure 4.4.
Thus, take the set X = {cos(2πn) + cos(4πn), 0 ≤ n < ∞}. How would we determine if a sinusoid of frequency 2 (Hz) lives within this wave? Essentially, we would want to ask the wave at every point how much the function f(n) = cos(2 · 2πn) matches, in what amounts to a cross correlation between the time series and the complex function.


Figure 4.3: Random Waveform of More Than One Frequency (Note: not an accurate graph)
Figure 4.4: Dividing Frequencies, a + bi ∈ ℂ
Remember, the complex function iterates over all possible values of a wave at a particular frequency, so comparing it to the original time series X at the same intervals results in a correlation response. However, we do not subtract; we divide by our search wave at each interval and sum the results. This generates what is in effect a correlation value whose magnitude is a measure of the presence of our desired frequency. For example, if we were looking for 2 (Hz) signals in the original wave we could perform the following:

Presence(2 Hz) = Σ_{all n} xₙ / cos(2 · 2πn)    (4.2)

Recall Euler's formula e^{ix} = cos(x) + i·sin(x) where i is the imaginary unit. Take note that if we were to divide by the cosine only we'd have our original Equation 4.2. However, Figure 4.5 depicts the complex plane and the behavior of Euler's formula. The function Z = C·e^{2πikn} traces a curve in the complex plane, which results in a perfect circle about the origin of radius R. Within the complex plane itself, it is


not apparent how k affects the wave, thus we extend along a 4th axis with k = f₀t. We then use an orthogonal projection onto some affine plane parallel with the imaginary axis. One can see that the image under the projection is the 2D sinusoidal waveform similar to Equation 4.2, the difference being that now we have encoded phase information into the result of our division.
Figure 4.5: Geometry of Complex Frequency, C = Constant Amplitude/Radius, k = Frequency, n = Real valued coefficient.
From this model it follows naturally to substitute complex division for our real division, and the resulting mapping is known as the Discrete Fourier Transform (DFT). Then

Xₖ = Σ_{n=0}^{N−1} xₙ · e^{−i2πkn/N}    (4.3)

where Xₖ is a set of complex numbers whose magnitudes signify the presence of a frequency k in the original signal. Additional references for the various flavors of the DFT and FFT can be found in [4,6,11,17,38,40,41].
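For intuition, Equation 4.3 is a direct double loop, O(N²) in the sample count; below is a minimal C# sketch using System.Numerics (ours, not the optimized appendix code):

    using System;
    using System.Numerics;

    static class DftSketch
    {
        // Direct DFT of Equation 4.3: X_k = Σ_n x_n · e^{−i2πkn/N}.
        public static Complex[] Transform(double[] x)
        {
            int N = x.Length;
            var X = new Complex[N];
            for (int k = 0; k < N; k++)
                for (int n = 0; n < N; n++)
                    X[k] += x[n] * Complex.Exp(new Complex(0.0, -2.0 * Math.PI * k * n / N));
            return X;
        }
    }

The magnitude |Xₖ| then measures the presence of frequency k, exactly as described above.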


4.1.2 A Parallelized Fourier Transform
The Fast Fourier Transform is predicated on a number theoretic approach to factorizing the DFT into smaller pieces. In general this allows us to improve the computational complexity by extracting factors from an inner summand and performing those multiplications a single time, thus reducing our flops from O(N²) to 2N·log₂N [17]. Although there are several libraries that have full implementations of the FFT, we attempt to derive our own mathematical model here, in part allowing us to explore the behavior of the computation, but also because we assume we may need to fine tune the computation to address specifics of our detection algorithm. To maximize performance on modern hardware we take advantage of this idea on two fronts: a) by removing factors from the inner summand and b) by creating a computation that can be parallelized. When dealing with the Fourier transform it is common to define a constant W_N = e^{−i2π/N}, which allows us to deal with a compact form of the DFT as Xₖ = Σ_{n=0}^{N−1} xₙ · W_N^{nk}. Taking ideas from the work by James Cooley and John Tukey [17] and others [4,10,41] we derive a simpler version of the Fast Fourier Transform (FFT) in Theorem 3.
Theorem 3 (A Parallelizable Fast Fourier Transform).

f_b = f(a, N₁, N₂, k) = Σ_{a=0}^{N₂−1} x_{a+bN₂} W_N^{ak}

X_k = (1/N) Σ_{b=0}^{N₁−1} f_b W_N^{bN₂k}   (4.4)
Proof: Let N, N₁, N₂ ∈ ℤ. We obtain N₁, N₂ by factoring N such that N = N₁N₂. Using the Nth roots of unity W_N, let n = a + bN₂ with integers a, b. By the division algorithm, we have mappings in a ∈ {0, 1, 2, ..., N₂ − 1} and b ∈ {0, 1, 2, ..., N₁ − 1}. The key is to notice that a is cyclic in N₂ and b is cyclic in N₁ [17]. This is essentially the same as a two dimensional expansion of the single dimensioned value n. Observe, when we break apart the DFT using our two dimensional index scheme we have
X_k = Σ_{b=0}^{N₁−1} Σ_{a=0}^{N₂−1} x_{a+bN₂} W_N^{(a+bN₂)k}

If we apply some basic algebra and factor we end up with W_N^{(a+bN₂)k} = W_N^{ak} W_N^{bN₂k}, which ultimately yields

X_k = Σ_{b=0}^{N₁−1} Σ_{a=0}^{N₂−1} x_{a+bN₂} W_N^{ak} W_N^{bN₂k}
Notice, the only changes in the inner summand are a and k, which means we can extract a function
f_b = f(a, N₁, N₂, k) = Σ_{a=0}^{N₂−1} x_{a+bN₂} W_N^{ak}   (4.5)
If we plug our function back in and normalize the result by the total sample count 1/N we get

X_k = (1/N) Σ_{b=0}^{N₁−1} f_b W_N^{bN₂k}   (4.6)

Our derived FFT is approximately 7N² + 7N₁ flops with this factorization and is computationally more complex than the standard DFT. If we let each evaluation of W_N^{nk} approximate 6 flops, assuming Euler's equation e^{ix} = cos(x) + i sin(x) [48], the original DFT ranges over N elements and N frequencies, which implies (6 + 1)N · N = 7N². Observe that Equation 4.5 is approximately 6 + 1 flops ranging over N₂ and 6 + 1 flops ranging over the outer summand N₁, which yields N₁(7N₂) + 7N₁ from the outer summand. Given N search frequencies we have N(N₁(7N₂)) + 7N₁ ≈ 7N² + 7N₁ total flops.


Although the increase in flops seems to be a computational loss, we have an overall gain: we have extracted a memory contiguous inner summand with no external dependencies. This calculation allows us to provide equal sized contiguous blocks of RAM to independent threads (provided we factor evenly). Contiguous blocks of independent data per thread minimize cache contention and decrease the overall overhead associated with locking and context switching [20]. Algorithm 1 approaches the parallelization of Equation 4.6 by creating a process/thread to execute N₂ multiplications and additions each and returning the partial summand, which we then add to a synchronized accumulation variable.


Algorithm 1 (Parallel Fourier Transform).
Shared: searchRf {Search Frequencies}, X {PCM}, c ← 0
Private: N ← Length(X), {N₁, N₂} ← Factor(N)
For i ← 0 To Length(searchRf) − 1
    c ← 0
    k ← searchRf[i]
    For b ← 0 To N₁ − 1 Parallel
        Private t ← 0
        Consume c into t
        Produce c ← t + f_b · e^{−i2πbN₂k/N}
    End Parallel
    Barrier
    X[k] ← c · 1/N
Next
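A minimal C# rendering of Algorithm 1 using the Task Parallel Library is sketched below. This is our own illustrative code, not the thesis implementation from Appendix A.0.15; the lock stands in for the synchronized consume/produce accumulator, and we assume N factors evenly as N₁N₂.

using System;
using System.Numerics;
using System.Threading.Tasks;

static class ParallelFourierSketch
{
    // Evaluates X_k of Equation 4.6 for one search frequency k.
    // Each parallel iteration computes f_b (Equation 4.5) over its own
    // contiguous block x[b*N2 .. b*N2 + N2 - 1], then folds the rotated
    // partial sum into a synchronized accumulator.
    public static Complex Transform(double[] x, int n1, int n2, double k)
    {
        int n = x.Length;              // assumed: n == n1 * n2
        var gate = new object();       // guards the shared accumulator c
        Complex c = Complex.Zero;

        Parallel.For(0, n1, b =>
        {
            Complex fb = Complex.Zero; // inner summand, no external dependencies
            for (int a = 0; a < n2; a++)
                fb += x[a + b * n2] *
                      Complex.Exp(new Complex(0.0, -2.0 * Math.PI * a * k / n));
            Complex partial = fb *
                Complex.Exp(new Complex(0.0, -2.0 * Math.PI * b * n2 * k / n));
            lock (gate) { c += partial; }
        });

        return c / n;                  // normalize by the sample count
    }
}

One call evaluates a single search frequency; the outer loop over searchRf from Algorithm 1 would simply call Transform once per desired k.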
4.1.3 A Synthetic Test
Before we empirically analyze the performance, we will verify that our parallelized Fourier model correctly extracts a frequency distribution. First, we construct a signal synthesizer utility that generates an artificial signal at the desired frequency values F = {f₁, f₂, ..., f_k}. Then, using Equation 4.1 and Equation 4.7, we can


interlace those frequencies into a time series. Given a ∈ ℝ and k, n ∈ ℤ,

x_n = Σ_{all k} a · cos(2πF_k n / N)   (4.7)
where a is the amplitude, n is the current sample and N is the sample rate. Figure 4.6 shows our target signal, fabricated with only a single fundamental f₀ = 27.5 (Hz) and 12 harmonics. The source code for the Fourier transform and Synthesizer can be found in Appendix A.0.15 and A.0.14.
Figure 4.6: Synthesized Signal (Time Domain) at f₀ · 2^k = 27.5 · 2^k (Hz), {0 ≤ k ≤ 11}
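A sketch of the synthesizer idea in Equation 4.7, under our own naming (the thesis version lives in Appendix A.0.14):

using System;

static class SynthSketch
{
    // Equation 4.7: x_n = sum over k of a * cos(2 pi F_k n / N),
    // where N is the sample rate.
    public static double[] Synthesize(double[] rf, double amp, int sampleRate, int seconds)
    {
        var x = new double[sampleRate * seconds];
        for (int n = 0; n < x.Length; n++)
            foreach (double f in rf)
                x[n] += amp * Math.Cos(2.0 * Math.PI * f * n / sampleRate);
        return x;
    }
}

For the signal in Figure 4.6, one would pass rf = {27.5 · 2^k, 0 ≤ k ≤ 11} at, for example, a 44,100 (Hz) sample rate.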
We compare our new Fourier model in Equation 4.6 with the standard DFT using
the signal in Figure 4.6 to determine if the results are equivalent. Figure 4.7 shows
amplitude measurements for each known frequency are the same between the DFT


and the new model. Confident that our algorithms are producing the same peaks we now contrast their overall performance. We executed each algorithm 5 times and averaged their speed using a high precision software timer. The results are shown in Table 4.2.
Figure 4.7: Comparison of Standard DFT to Parallelized FFT
Table 4.2: DFT vs FFT Performance
Algorithm Time (ms)
Parallelized FFT 72.2
Standard DFT 192.4
Table 4.2 was computed with f₀ = 27.5 (Hz) and 12 harmonics on an Intel 3.5 GHz i7 with 16 GB SDRAM on Windows 7 Professional 64-bit. We compute the overall performance with Speedup = T_sequential/T_parallel [20], which yields 192.4/72.2 ≈ 2.66; thus our algorithm is ~166% faster than the standard DFT. This will be helpful when trying to extract a large number of frequencies in a timely fashion.
Before further analysis, we must include another calculation in our Fourier model in order to deal with issues that arise during the sub-sampling of a time distribution. We apply a technique known as windowing, which minimizes an effect called spectral leakage [6, 22, 30]. The leakage is a response to the transform being applied to a time series in partial chunks, in our case 1 second intervals. When we sample a portion of a time series it has been observed that high frequency aliasing may appear at the seams of the sample space [6, 30]. To compensate, the window function allows us to partially repair this leakage by smoothing the transition. This technique takes many possible forms and we have chosen the Hanning or Hann window. Figure 4.8 illustrates the graph of a Hanning window, which is essentially the haversine function. Although there are many different types of windowing functions, this one is known to be effective in musical transcription [6, 30]. Equation 4.8 shows the calculation adjusted for our factorization, and when we incorporate u(a, b) into Equation 4.5 we have Equation 4.9.
Figure 4.8: Hanning Window


u(a, b) = (1/2)(1 − cos(2π(a + bN₂)/(N − 1)))   (4.8)

f_b = Σ_{a=0}^{N₂−1} u(a, b) · x_{a+bN₂} W_N^{ak},   X_k = (1/N) Σ_{b=0}^{N₁−1} f_b W_N^{bN₂k}   (4.9)
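Equation 4.8 reduces to a one-line weight function that is multiplied into each term of the inner summand. A minimal sketch, with names of our own choosing:

using System;

static class HannSketch
{
    // Equation 4.8: Hann weight for the absolute sample index a + b*N2
    // inside an N-sample frame.
    public static double Weight(int a, int b, int n2, int n)
    {
        return 0.5 * (1.0 - Math.Cos(2.0 * Math.PI * (a + b * n2) / (n - 1)));
    }
}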
It follows that we must determine if our algorithm will be effective against the full spectrum of tones f₀('C') · 2^k through f₀('B') · 2^k. We include the Hanning window in all the following calculations. In order to verify accuracy and performance we ran Equation 4.9 against a set of synthetic signals, which included a minimum of 12 harmonic steps for every frequency up to and including the Nyquist. The results are shown in Table 4.3.
We are approaching a key threshold in our frequency detection performance. Recall from Section 3.1.4 that our time budget allows for 500 ms transform time. Instead of jumping to a very complicated musical score, we will continue to incrementally increase the complexity of our signal and evaluate our analysis as we go. Another signal pattern is shown in Figure 4.9, and the resultant FFT values for the specified frequencies appear in Table 4.4.
An interesting result is that the DFT and FFT now differ slightly in their real and imaginary components. Initially we assumed this was caused by numerical precision issues in our algorithm, and as it turns out we were correct. In Section 4.2 we discovered certain frequency ranges were not being detected properly because we had mixed the frequency search variable type between integer and floating point, causing the e^{−i2π(n/N)k} computation to return rounded results. After correcting the problem the output is exact between the DFT and our parallelized transform.


Table 4.3: FFT Performance Extracting the Full Spectrum of Tones
Sample Rate Num Freqs Ave Time (ms)
8000 96 156.2
16000 108 219.2
22050 120 422.6
32000 120 452
37800 132 569.2
44056 132 624
44100 132 640.4
47250 132 696
48000 132 698.2
50000 132 719.8
50400 132 724.4
88200 144 1334
Figure 4.9: Synthesized Signal {2049,8000,16000,22031} (Hz) Over 1 Second
4.2 Tone Detection and Characterization


Table 4.4: FFT Versus DFT Accuracy
Rf (Hz) DFT 2-norm FFT 2-norm
2049 2.500003 2.488039
8000 2.499994 2.488039
16000 2.499994 2.488036
22031 2.49998 2.488338
With our raw frequency detection technique in place we must determine an efficient and accurate way of characterizing the musical signal in terms of its musical notes (e.g., A, A#, etc.). We do not expect to find a perfect technique, especially when it comes to complex scores, noise and/or percussion instruments generating anti-tonal sound. We also have to contend with discrete samples of a continuous wave. It is understood that we will encounter frequency aliasing, frequency loss and detection ambiguity [6, 22, 29, 30, 34, 38]. The following detection algorithm began by borrowing a technique proposed by Jehan [18], which leverages a scaled, homogeneous, mean squared error of the spectral graph. During our experimentation we were able to simplify the response detection for our purposes of only extracting dominant tones.
4.2.1 A Detector Predicate
Provided with an amplitude in relation to a particular frequency we must be able to estimate the musical note; however, there is an additional challenge. Recall the DFT requires us to impose the mathematical floor of real values, so we must accommodate situations where a frequency such as 27.5 (Hz) exists in the original signal but our analysis produces a peak at 27 (Hz), or potentially 26 through 28 (Hz), or some similar combination. This could be troublesome going forward; however, all our frequencies differ by at least .5 (Hz), which means we can truncate each frequency to the nearest integer without too much trouble. Given a complex valued frequency distribution X and a ∈ ℂ we can detect a frequency peak using Equation 4.10.


S(a) = |a| / max(|X|)

D(a) = True if S(a) ≥ t, False if S(a) < t   (4.10)
The strength of a signal is defined by the expected value of the distribution, as opposed to noise, which is defined to be the standard deviation [38]. This concept is generally used to detect a signal within the noise floor [6, 38]. We approach it somewhat differently here because our methodology is to only go after frequencies that we want. We do not transform the entire space of frequencies from 1 to 1/2 Nyquist, thus our impulse response is constrained to the domain of the ~144 frequencies we care about. We match the impulse response of each known frequency against the maximum impulse of the known frequencies, and we trigger a boolean response of True whenever a frequency exceeds the signal strength by some tolerance 0 < t < 1, which can be tuned later.
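Equation 4.10 amounts to a one-line predicate. A hedged C# sketch (names ours), assuming the spectrum array holds only the ~144 frequencies of interest:

using System;
using System.Linq;
using System.Numerics;

static class DetectorSketch
{
    // Equation 4.10: S(a) = |a| / max(|X|); D(a) fires when the
    // normalized strength reaches the tunable tolerance t in (0, 1).
    public static bool Detect(Complex a, Complex[] spectrum, double t)
    {
        double max = spectrum.Max(z => z.Magnitude);
        return max > 0.0 && a.Magnitude / max >= t;
    }
}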
4.2.2 Musical Note Characterization
We require another tool in our toolbox for determining which note is played after a successful detection. After some experimentation we observe that Equation 4.1 bounces the fundamental frequency over a step function toward its harmonic. From this observation we construct the following theorem.


Theorem 4 (The Tigger Theorem). Every element in M can be mapped to a distinct integer of the form

Γ = ⌊100(f / 2^{⌊log₂(f)⌋} − 1)⌋   (4.11)

Proof: Given the equation

T(f) = f / 2^{⌊log₂(f)⌋}   (4.12)

Since 2^{⌊log₂(f)⌋} = 2^{k+c} for some integer c, it implies k + c = ⌊log₂(f)⌋, which implies (4.12) maps any frequency to its fundamental divided by another c factors of 2. If we compute the values for each fundamental frequency we get T = {1.021875, 1.0825, 1.146875, 1.215625, 1.2875, 1.364375, 1.445, 1.53125, 1.6225, 1.71875, 1.82125, 1.929375}, ordering the set ascending by the corresponding fundamental. When we subtract 1 from each element we get {.021875, .0825, .146875, .215625, .2875, .364375, .445, .53125, .6225, .71875, .82125, .929375}. When we multiply by 100 and take the mathematical floor we have the set Q = {2, 8, 14, 21, 28, 36, 44, 53, 62, 71, 82, 92}. By inspection every element in Q is unique, therefore (4.12) maps every element in M to a distinct integer.
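Equation 4.11 reduces to two lines of code. A sketch, with names of our own choosing:

using System;

static class TiggerSketch
{
    // Equation 4.11: strip the largest contained power of two so the
    // frequency lands in [1, 2), then quantize into an integer bucket.
    public static int Tigger(double f)
    {
        double reduced = f / Math.Pow(2.0, Math.Floor(Math.Log(f, 2.0)));
        return (int)Math.Floor(100.0 * (reduced - 1.0));
    }
}

For instance, Tigger(27.5), Tigger(55) and Tigger(110) all return 71, the entry for A in the set Q of the proof, so every harmonic collapses onto its fundamental's bucket.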
Definition 3 (Tigger Harmonic Set). Given the Tigger Theorem mapping T : M → ℤ, define τ to be the set of all Tigger Harmonic values such that τ = {T(m), m ∈ M}.
Now that we have a detection and characterization technique we devise an algorithm which allows us to process any audio input stream and guess the notes within that stream. We construct our algorithm as follows: Equation 4.6 extracts a portion of a frequency distribution of a musical time series, Equation 4.10 allows us to detect the presence of a frequency, and Theorem 4 allows us to map that detected frequency to a musical note. The entire approach is defined in Algorithm 2, and Listings 4.1 and


4.2 show the main structures and methods used by the algorithm.
Algorithm 2 (Note Characterizer).
Private X ← { Sample Inputs }, SearchRf ← { f₁, f₂, ... }, DetectedNotes ← ()
Private N ← Length(SearchRf), i ← 0
X ← FFT(X, SearchRf)
For k ← 0 To N − 1
    If D(X[k]) Then
        DetectedNotes[i] ← T(SearchRf[k])
        i ← i + 1
    End If
Next
Return DetectedNotes


Listing 4.1: Characterization Structures

// Immutable structure that holds info about
// a fundamental tone.
struct MusicalNote {
    // Note enumeration value
    public eMusicNote myNote;
    // The fundamental Rf for this note
    public float myRf;
    // The Tigger mapped value.
    public float myTiggerRf;
}

// Mutable structure that holds information
// about an observable tone.
struct MusicalDetect {
    // The starting second the note was seen.
    public long myTimeOn;
    // The observed frequency
    public float myRf;
    // The observed amplitude
    public float myAmp;
    // The note detected.
    public MusicalNote myNote;
    // The duration the tone was played in milliseconds
    public float myDuration;
    // The harmonic of the fundamental in myNote.
    public int myHarmonic;
}


Listing 4.2: Characterization Methods

// Determines the fundamental note from any frequency.
// f := The frequency for which to guess the note.
MusicalNote Characterize(float f);

// Determines which notes exist within the given time
// series starting from offset.
// X := Set of time series samples (PCM)
// sampleRate := The samples per second in X
// offset := Where in X to begin guessing.
MusicalDetect[] GuessNotes(Z2[] X, int sampleRate, int offset);
It is wise at this point to experiment with the algorithm and determine its effectiveness. Figure 4.10 depicts the time plot of a more complicated (albeit slightly unrealistic) signal generated with the synthesizer. The signal is composed of three tones A, C, G# at values of (27.5, 1000, 2), (17.32, 5000, 1), (25.96, 7000, 2), giving frequency, power and time (seconds) respectively. We execute Algorithm 2 on this signal and demonstrate the results in Table 4.5.
Table 4.5: Initial Note Guessing Results
Time Found Expected Amp
0 A A 248.0868
1 A A 248.0868
2 C C 1242.433
3 G G# 1736.93
4 G G# 1741.317


Figure 4.10: Synthesized Tones @ 44.1 kHz, {A, C, G#} Over 5 Seconds
The results of our detection and characterization are promising; however, they are obviously not exact. Recall that the FFT must work with discrete frequency values, thus providing a challenge to the accuracy of the Tigger Theorem. Figure 4.11 illustrates a comparison of the rational valued harmonics versus the discrete harmonics in the 44,100 (Hz) Nyquist range. The vertical axis is the image under the Tigger mapping and the horizontal axis is the frequency (denoted by the corresponding musical note).
Figure 4.11: Comparison of Real vs Discrete Tigger Theorem (Note vs T(a))


The results are surprisingly similar; however, we notice clear deviations that are most prominent at the fundamentals. We shall attempt to remedy this error by adding a new characterization step. We require a new set of integer values directly mapped to the fundamental frequencies and another set directly mapped to the harmonics.
Definition 4 (Fundamental Integer Set). Define F_I to be the set of integers bijective to F such that F_I = {⌊f⌋, f ∈ F}.

Definition 5 (Harmonic Integer Set). Define M_I to be the set of integers bijective to M such that M_I = {⌊m⌋, m ∈ M}.
Theorem 5 (The Discrete Tigger Theorem). Any fundamental frequency can be mapped to a unique integer.

Proof: We compute by exhaustion the set F_I = {16, 17, 18, 19, 20, 21, 23, 24, 25, 27, 29, 30}. By observation all elements of F_I are unique.
Theorems 4 and 5 allow us to derive a slightly more accurate approach in the numerical environment. Given any musical frequency value a ∈ M_I we attempt our characterization by determining if the frequency is an element of F_I, using the corresponding note if a match is found. If no match is found in the fundamentals we compute the Tigger Harmonic and search for a match in the Tigger Harmonic Set τ so as to minimize |T(a) − τ_k|. We execute this new technique on the same signal and display the results in Table 4.6. The results are perfectly accurate for our very simple signal. The source code for the note detection/characterization can be found in Appendix A.0.7 in the function GuessNotes().
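Pulling Definitions 4 and 5 and Theorems 4 and 5 together, the two-stage lookup can be sketched as follows. The sets are transcribed from the proofs, ordered ascending from C to B; the class and method names are our own, not the Appendix A.0.7 code.

using System;

static class CharacterizeSketch
{
    static readonly string[] Names =
        { "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B" };
    // F_I from Theorem 5 and the Tigger Harmonic Set from Theorem 4.
    static readonly int[] FI  = { 16, 17, 18, 19, 20, 21, 23, 24, 25, 27, 29, 30 };
    static readonly int[] Tau = { 2, 8, 14, 21, 28, 36, 44, 53, 62, 71, 82, 92 };

    public static string Characterize(double f)
    {
        // Stage 1: direct hit on a truncated fundamental.
        int idx = Array.IndexOf(FI, (int)Math.Floor(f));
        if (idx >= 0) return Names[idx];

        // Stage 2: nearest Tigger harmonic, minimizing |T(a) - tau_k|.
        double reduced = f / Math.Pow(2.0, Math.Floor(Math.Log(f, 2.0)));
        int t = (int)Math.Floor(100.0 * (reduced - 1.0));
        int best = 0;
        for (int k = 1; k < Tau.Length; k++)
            if (Math.Abs(t - Tau[k]) < Math.Abs(t - Tau[best])) best = k;
        return Names[best];
    }
}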


Table 4.6: Final Note Guessing Results
Time Found Expected Amp
0 A A 248.0868
1 A A 248.0868
2 C C 1242.433
3 G# G# 1736.93
4 G# G# 1741.317
4.2.3 Rigorous Characterization Analysis
We have successfully fine tuned our model to perfectly detect the sequence A, C, G#, so we proceed to perform more rigorous analysis. We start by synthesizing a sound that appends every harmonic for every note into a time series and execute Algorithm 2 on the time series. Figure 4.12 clearly illustrates the results.
The algorithm appears to break down in the upper harmonics, which has been observed by Bello [22] and others. A high frequency detection method is supplied by Bello, which could possibly be applied at a later date. For the time being, the 11th and 12th harmonics remain 45% accurate while the rest of the spectrum is 100% accurate. As a final test we synthesize signals which consist of random tones at random harmonics for random intervals, creating a more complex sound of approximately 5–14 seconds. Figure 4.13 illustrates one example of the random sound used in this analysis. We keep track of the ground truth information and use it to measure the expected output against the guessing algorithm. At first we generate only a few signals and run them through the detector. The output can be seen in Table 4.7.


Table 4.7: Accuracy of Random Signals

Run 1:
Time Found Expected Amp
0 G G 983.947
1 G G 983.947
2 F# F# 2487.615
3 A# A# 963.4857
4 G# G# 2223.319
5 G# G# 2223.319
6 F F 1707.491
7 D D 1086.08
8 G G 1142.743
9 G G 2419.794
10 D D 2255.06
11 B B 2432.167
12 B B 2432.167

Run 2:
Time Found Expected Amp
0 F F 2219.488
1 C C 2048.204
2 C# C# 2557.707
3 G B 10.61324
4 A A 2507.143
5 A A 2507.143
6 C C 1920.38
7 G G 2351.047
8 G G 2351.047
9 Undetected F# 5470.793
10 Undetected F# 5470.793

Run 3:
Time Found Expected Amp
0 F F 1927.708
1 F F 1927.708
2 E E 1222.492
3 E E 1222.492

Run 4:
Time Found Expected Amp
0 A# A# 1157.51
1 F# F# 972.2834
2 D# D# 564.4082
3 A# A# 513.2103
4 C# C# 1556.932


Figure 4.12: Accuracy Plot (Note vs Harmonic vs Percent Accuracy (0–100%))
Figure 4.13: Synthesized Random Sounds @ 44.1 kHz
As expected, the guesses are fairly accurate, but we need more information to determine how accurate, so we proceed to automate a very rigorous test over hundreds of random signals. We tally the results into three areas: Figure 4.14 shows the success as a percentage of accuracy over time, Table 4.8 demonstrates the overall performance, and Table 4.9 shows us the tones and ranges that were undetected.
The execution time is well below what was expected, since ~8 seconds of sound only takes ~1.6 seconds to guess the tones. This leaves us a substantial amount of


Table 4.8: Detection & Characterization Results (Initial Metrics)
Total Runs Ave Sample Time Ave Guess Time Overall Accuracy
200 8.435 (s) 1675.49 (ms) 86.60344%
Table 4.9: Detection & Characterization Results (Undetected)
Undetected Note Times Frequency Amplitude
C 17 35471.36–35471.36 (Hz) 2557.409–9612.205
C 41 28160–56320 (Hz) 1008.208–10325.92
C 17 26583.04–53166.08 (Hz) 1367.82–10405.38
C 22 22353.92–44707.84 (Hz) 1348.063–10926.15
C 21 23674.88–47349.76 (Hz) 1836.552–10948.71
C 30 31610.88–63221.76 (Hz) 1796.666–10658.98
C 10 33484.8–33484.8 (Hz) 1145.468–7371.489
C 29 25088–50176 (Hz) 1211.848–10403.89
C 24 29839.36–59678.72 (Hz) 1711.161–10979.81
C 9 37580.8–37580.8 (Hz) 3221.58–10981.38
C 10 42188.8–42188.8 (Hz) 1913.405–10745.18
C 8 39833.6–39833.6 (Hz) 1283.639–10054.6


Figure 4.14: Detection & Characterization Results (Run vs % Accuracy)
processing slack for melody analysis. The accuracy of ~86% is fairly good but does not seem to follow from our 100% accuracy in Table 4.6. After careful observation of Table 4.9 we discover a phenomenon: all undetected instances of C are above the Nyquist frequency of 22.050 (kHz). Recall we are sampling at 44.1 (kHz), which means that 22.050 (kHz) is the maximum detectable frequency. As it turns out our tone generator has a very simple but significant defect: at octaves above 12 it creates frequencies above the Nyquist range. We correct the issue by limiting the synthesized frequencies to the Nyquist range and re-run our analysis. The results are excellent. Table 4.10 and Figure 4.15 show that we are now at 100% accuracy, running roughly five times faster than real time. As previously stated, no algorithm will be perfect, thus we expect to see a margin of error when processing real music, but for the time being we are confident enough to move forward.


Table 4.10: Detection & Characterization Corrected Results (Final Metrics)
Total Runs Ave Sample Time Ave Guess Time Overall Accuracy
200 8.855 (s) 1814.64 (ms) 100%
Figure 4.15: Detection & Characterization Results Corrected (Run vs % Accuracy)
4.3 Chord Detection and Characterization
A musical chord can be difficult to clearly define, especially when we diverge from western music and include the idea of inharmonic or dissonant tones. Informally, a chord is a set of notes played at the same time, but to limit confusion and enhance scientific rigor we adopt Tymoczko's algebraic definition of a musical object, that is, an ordered sequence of pitches [44] whose elements are vertices of an element in the D₁₂ group³. To precisely detect and characterize a chord is very difficult. Our tone characterization provides us with a set of musical notes, but we must determine the starting time and duration of groups of notes which form a chord. This is especially
³The dihedral group on 12 vertices. Discussed in detail in Section 4.4.


tricky when dealing with sub-band⁴ onset of multiple tones combined with low latency. Remember from Section 2.1 that a synesthetic response is triggered by a tone being played for at least 3 seconds. We wish to elicit this same response in normals, and we conjecture that a preferred response in normals is more likely when mirroring the stimuli of synesthetes. The reason is that all humans share common neurological components in auditory and visual cognition, and most people experience some kind of synesthesia [16, 37, 38]. Nevertheless, at this point one would prefer a sub-second knowledge of tone onset to aid in chord identification. For example, suppose we had a two second interval and our characterization algorithm produces the notes {A, C, G#} and {A, D, E}. We don't know at what times within the second any of the tones started, only that they exist somewhere within the second. Nothing prohibits a note from playing for 25% of the first second and 50% of the next second, and our detection would not specify which portion of the second it began or ended in. We shall add a high fidelity time model to our list of future work and press onward with our 1 second fidelity.
4.3.1 A Chord Detection Algorithm
In order to identify a chord we employ a straightforward technique, but first we must discuss some terminology. The term residual, as used in this section, will refer to any unison tone not immediately paired with an existing chord. We use the term minimum duration to represent the minimum time (in milliseconds) a note is played. Figures 4.16, 4.18, 4.21, 4.19, and 4.17 show the Use Cases⁵ for our algorithm. It should be noted that our algorithm is agnostic to time fidelity, so future work will employ the same algorithm even if we adjust sub-band frequency extraction. We break our cases into New Notes, Match, Refresh Chord, Residuals, Chord Expiration
⁴We use this term in this context to refer to space in the sampling less than the current Nyquist sample rate.
⁵In Software Engineering a Use Case defines the concept of operation of an actor and the interaction with a particular interface, machine or domain.


Use Case (New Notes:)
Assumption: None.
1 A set of new notes are characterized.
2 If no Match: is found, run Residuals:.
3 If Match: is found, run Refresh Chord:.
4 Increment step time to the maximum of all notes' onset times.
5 Run Chord Expiration:.
Figure 4.16: Use Case (New Note)
and Chord Kill. Each case is a sequential portion of the grander algorithm, with New Note being the entry point and Match being a utility for comparison of notes. We use the class structure in Listing 4.3 to manage the chord construction. We call this class the ChordDetector and it acts as a state machine that behaves according to the algorithms described by Use Cases 4.16, 4.18, 4.21, 4.19, and 4.17. The chord detector class accepts one to many tones in the form of a MusicalDetect array (Listing 4.1). ChordDetector makes no assumptions about the incoming tones and attempts to merge items by time and duration into appropriate chords or unison tones. The source code for the chord detector can be found in Appendix A.0.9 and A.0.8.


Use Case (Match:)
Assumption: Let n be any existing note and m be the incoming note, T(·) is the time onset and D(·) is the last known duration of a note.
1 If f₀(n) == f₀(m), and if f_k(n) == f_k(m) for some k, and if T(n) ≤ T(m) ≤ T(n) + D(n), we have a match.
2 Otherwise no match.
2 Otherwise no match.
Figure 4.17: Use Case (Match)
Use Case (Refresh Chord:)
Assumption Matching note found.
1 If a matching note is found in an existing chord, increment its duration by the minimum duration.
2 Else if a note with the same onset time is found add note to chord.
Figure 4.18: Use Case (Refresh)


Use Case (Residuals:)
Assumption: No matches to the note.
1 Start a new chord.
2 Add note to chord.
Figure 4.19: Use Case (Residual)
Use Case (Chord Expiration:)
Assumption: Let n be any existing note, T(·) is the time onset and D(·) is the last known duration of a note. Step Time is a counter that holds each interval of minimum duration that has passed.
1 If T(n) + D(n) < (Step Time) for any note, remove that note.
2 If all notes expired Kill Chord:.
Figure 4.20: Use Case (Expiration)
Use Case (Kill Chord:)
Assumption: All notes expired in chord.
1 Remove the chord from the dictionary.
Figure 4.21: Use Case (Kill)


Listing 4.3: Chord Detector Class

// A chord structure that holds notes played at the same time.
class Chord {
    // List of notes in this chord.
    List<MusicalDetect> myNotes;
}

// Manages a set of chords detected in real time.
class ChordDetector {
    // The current time step in units of myMinDuration,
    // incremented on each call to NewNotes.
    private long myTimeStep = 0;

    // The list of currently held chords.
    private List<Chord> myChords;

    // Add new notes to be detected and added to a chord
    // or create new chords.
    public void NewNotes(MusicalDetect[] newNotes);

    // Request all currently held chord structures.
    public Chord[] GetChords();
}
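As a small illustration of the Match use case (Figure 4.17), the predicate could look like the following sketch. It uses the MusicalDetect fields of Listing 4.1 and assumes onset and duration are kept in the same time units; it is not the Appendix A.0.9 code.

// Sketch of Use Case (Match): an incoming detection m matches an
// existing note n when the tones agree and m's onset falls inside
// n's active window [T(n), T(n) + D(n)].
static bool Matches(MusicalDetect n, MusicalDetect m)
{
    bool sameTone = n.myNote.myRf == m.myNote.myRf
                 && n.myHarmonic == m.myHarmonic;
    bool overlaps = n.myTimeOn <= m.myTimeOn
                 && m.myTimeOn <= n.myTimeOn + (long)n.myDuration;
    return sameTone && overlaps;
}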
Fortunately, it is easier to verify this algorithm because the data space is smaller and the algorithm is in fact deterministic. Table 4.11 demonstrates an input series of notes manually constructed to try to fool our algorithm. Notice that from time 1 to time 2 our algorithm is not fooled and properly removes tones A2 and G#3. It also compresses the duplicate B4 into the same note and properly disposes of C2 from time 2 to time 3. If we experience issues later in our implementation we will revisit a more rigorous


Table 4.11: Chord Detector Basic Test
Time: 0 | 1 | 2 | 3 | 4
Notes: C2 A2 G#3 | C2 A2 G#3 | C2 B4 B4 | B4 A4 G#4 | B4 A4 G#4
Expected: {C2,A2,G#3} | {C2,A2,G#3} | {C2,B4} | {B4,A4,G#4} | {B4,A4,G#4}
Output: {C2,A2,G#3} | {C2,A2,G#3} | {C2,B4} | {B4,A4,G#4} | {B4,A4,G#4}
test of the chord characterizer.
4.4 Melody Analysis
In this section we explore the analysis of the music as a cohesive set of chord progressions, wherein a chord may consist of a single note. We treat each chord as a geometric musical object and begin the melody analysis using the five features of music from Section 2.1. Our melody mapping is based upon the notions that music has some kind of center, that musical objects which move in small increments and are consonant are more pleasing, and that similarity of chord structures is expected to appear often.
We must quickly mention the algebraic groups, specifically the dihedral group. The dihedral group is the group of symmetries of a regular polygon on n vertices [12]. To bound the scope of our thesis we do not offer a rigorous education in group theory, but we do offer a simple intuitive explanation of the dihedral group. Suppose you have a planar graph with four vertices. You embed the vertices in the 2D cartesian plane such that all vertices are symmetric and equidistant from (0,0). If we arbitrarily label each vertex as in Figure 4.22 we have the dihedral four group, often referred to as D₄. The first image of Figure 4.22 shows position 0, which we deem the identity. The second image is a clockwise rotation about the origin of 90 degrees. If we continue to superimpose the vertices in every permutation and label each configuration, those labels combined with a binary operation form the group. It is called an algebraic


Figure 4.22: D₄ Group Example
Figure 4.23: D₁₂ Pitch Class
group because it can be shown to possess closure⁶, associativity, inverse and identity of the binary operation and its elements [13]. In the case of the symmetric group we use the ∘ operator to denote the binary operation. For example, r0 ∘ r1 = r1 = r1 ∘ r0, giving us the identity in r0. Similarly, r1 ∘ r1 = r2 and r1 ∘ r3 = r1 ∘ r1⁻¹ = r0. Because we return to the identity element, r3 is the inverse of r1. This can be thought of as taking the value 90° + 270° ≡ 0 (mod 360°). One final way to envision a group inverse and identity is using the familiar real numbers. Notice that the inverse of the value 2 is 1/2, since 2 · 2⁻¹ = 1, whereas the identity of 2 is 2 · 1 = 1 · 2 = 2. The algebraic dihedral group simply uses a new fancy label for familiar concepts such as inverses and identities. It is within these group structures that we determine the behavior of the music.
We analyze the melody according to the Pitch class, a term in music theory referring to the D₁₂ group with musical notes, shown in Figure 4.23 [45] (throughout this section we will interchange the terms pitch class and D₁₂). Using the five tenets of music we construct a pattern analysis designed to capture the flow of the melody. Our Chord Detector can extract enough information to assign a value of the D₁₂ group
6 Closure of a group implies that the binary operation on two elements results in another element within the same group.


to each note of a chord, which forms a subgroup of the group. There are many changes to a subgroup that can occur, but two key changes are the distance preserving functions transposition and inversion. Transposition and inversion are analogous to the geometric operations of translation and reflection [46]. The smaller the translation value the harder it is to detect the change [46], and vice versa. Inversion, according to Tymoczko, has a similarity property wherein inversely related chords sound similar. We will use these ideas to help guide our mappings.
4.4.1 A Generalized Parameterized Visualizer
Much like our proof of concept in Section 3.1 we want to produce similar output on screen but using more rigorous musical information. Remember from Table 3.1 that the initial implementation is directly linked to the time series. In order to facilitate the use of the five features of tonality we must break the time series association, refactor our code and generate a robust and re-usable animation widget. Constructing various animation components can be very time consuming and complicated. In future work we intend to experiment with a myriad of visual behaviors, but for our current scope we will re-use the current work. This means we want to be able to plug in a new animation without having to re-wire visual calculations or depend on visuals tied closely to the type of data. We created a class structure and interface designed to help us de-couple the renderings from the analysis logic. To that end, we define a new space.


Definition 6 (Value Space). A value space is a vector in ℚ^m whose elements are values ranging from 0 to 1, ordered by the importance of the value from lowest to highest.

Definition 7 (Color Space). A color space is a set of integer RGB values ordered from lowest to highest importance.

Definition 8 (Focal Space). A focal space is a vector in ℚ^m whose elements are values from 0 to 1, suggesting a reference point or centroid of activity somewhere within the value space.
Definitions 6,7, and 8 allow us to discuss some geometry with very specific values, without having to know specifically what that geometry will be. When we combine all these spaces together we have our visual space, which can be used to parameterize the drawing or animation engine.
Definition 9 (Visual Space). A visual space is an object that contains a value space, focal space and color space, along with:

Spectacle: a value from 0 to 1, with 1 being more fancy and 0 being dull.
Maximum elements: a value that specifies a hard limit on the number of generated elements.

We create a new interface called IVisualizer meant to be implemented by an animation engine that draws things on screen. We need a way to tell the visualizer how to draw according to our music, so we provide a class structure called VisualSpace that conforms to our definitions. The visual space provides intuitive information that
We create a new interface called IVisualizer meant to be implemented by an animation engine that draws things on screen. We need a way to tell the visualizer how to draw according to our music, so we provide a class structure called VisualSpace that conforms to our definitions. The visual space provides intuitive information that


Table 4.12: Visual Space Axioms
Axiom | Description
a | All values are considered to be homogeneous to one another and to all data going forward.
b | Whenever possible, values should range from 0 to 1.
c | Unless not supplied, all supplied colors are to be used, and derived colors are to be gradients of supplied colors.
a knowledgeable algorithm can compute ahead of time and offer the animator. The visual space decouples the animation implementation from engineering units or the type of data. There is an implicit agreement between the visual space and animation engine, which includes the tenets of Table 4.12.
As we progress we may add more parameters to the visual space, but for now we move on to the melody and how we intend to extract usable values that can be fed to the visual space. We attempt to describe the heuristics without getting too far into the source code, but the full melody analysis source can be found in Appendix A.0.13. The source code for the IVisualizer and our initial animation engine (derived from the time domain animation) can be found in Appendix A.0.19, A.0.17 and A.0.18.
4.4.2 Mmmmmm, The Musical Melody Mathematical Modularity Movement Manager
If you are not chuckling you may not be ready to digest this approach. The technique we propose requires us to leverage everything we've considered up to this point. We approach transformation of melody using Tymoczko's five properties as a guide and exploit transposition and inversion to help create reasonable values. From Section 4.4.1 we need to parameterize the visual space in a way that makes sense. Unfortunately we must bombard the reader with several more definitions and theorems; however, each definition is a critical component in calculating the visual space.


Theorem 6 (Angry Tigger Theorem). Given a chord C with N tones each having frequency a_k, we can approximate a value of consonance χ ranging from 0 to 1, with 1 being most consonant:

x = (1/(N² − N)) Σ_{j=0}^{N−1} Σ_{i=0}^{N−1} |a_j − a_i|

χ = 1 if x < ε₁/2 or x > ε₂; χ = (x − ε₁/2)/(ε₂ − ε₁/2) otherwise   (4.13)

Proof: It has been observed that two notes being played which are very close together in frequency, but not exact, cause a dissonant tone [38, 43]. Given the frequencies of a chord {a₁, a₂, ..., a_N} we compute the sum of |a_i − a_j| for all i, j < N. We then take the average by multiplying by 1/(N² − N), subtracting N in the denominator to exclude items compared with themselves (which are necessarily zero), and assign this value to x. Let ε₁ be the difference in frequency between the A and A# notes, and ε₂ be the difference between the A and B notes. Let P₀ = ε₁/2 and P₁ = ε₂ such that x = (1 − t)P₀ + tP₁. Note: we divide ε₁ by 2 to ensure we have a value that is very close, but not exactly equal. If x is less than ε₁/2 or x is greater than ε₂ we deem it consonant and return 1; otherwise we have a linear interpolation between P₀ and P₁ as a function of t, thus x = (1 − t)(ε₁/2) + tε₂ ⇒ t = (x − ε₁/2)/(ε₂ − ε₁/2), which yields t as a function of x. Since t is a value from 0 to 1, where 0 corresponds to ε₁/2 and 1 corresponds to ε₂, we have a measure of consonance from 0 to 1.


Definition 10 (Magnitude of a Chord). Given chord C = {c₁, c₂, ..., c_n} where each c is a named note, the magnitude of a chord is defined to be the average frequency scaled by the average harmonic value

v = (1/N²) Σ_{j=1}^{N} Σ_{k=1}^{N} f(c_j) h(c_k)   (4.14)

Definition 11 (Centricity of a Chord). The centricity of a chord C = {c₁, c₂, ...} is given by

The Harmonic Centricity: θ_H = (1/N) Σ_{k=1}^{N} h(c_k)   (4.15)

The Fundamental Centricity: θ_F = (1/N) Σ_{k=1}^{N} f(c_k)   (4.16)
Definition 11 describes centricity with respect to a single chord. We must also maintain the overall centricity of the melody across all chords as they progress, with respect to both harmonic value and pitch. We will refer to these values as Θ_H and Θ_F. We have the ability to measure things about a chord, but we need the ability to analyze change, meaning additive change in regards to octave and pitch class. Technically speaking, the computer system does not support group operations of D₁₂ since they are non-numeric/symbolic. To solve this we assign a unique integer value to each vertex as shown in Figure 4.23. This yields the (Z₁₂, +) group, which is isomorphic to the cyclic subgroup ⟨r₁⟩ = {r₀, r₁, r₂, ..., r₁₁}, which is all the rotations of D₁₂. We leverage the fact that any finite cyclic group of order n is isomorphic to (Z_n, +) [14]; thus we can deal with translations of our notes as positive integers modulo 12.
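For example, a transposition in (Z₁₂, +) is just element-wise addition modulo 12. A small sketch, assuming the note-to-integer assignment of Figure 4.23 with C mapped to 0:

using System.Linq;

static class PitchClassSketch
{
    // Transpose a chord by 'steps' semitones inside (Z12, +); the
    // double-mod keeps negative steps in range.
    public static int[] Transpose(int[] chord, int steps)
    {
        return chord.Select(v => ((v + steps) % 12 + 12) % 12).ToArray();
    }
}

Transpose(new[] { 0, 4, 7 }, 2) yields { 2, 6, 9 }: the same triad shape rotated two vertices around the pitch class circle.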
Definition 12 (Change of a Chord). The change of a chord is a value from 0 to 1 where 1 means a large change in tonal structure. Let C, D be two chords whose elements are from Z₁₂, sorted in ascending order. Given m = max(|C|, |D|) where |·| is the order of the set, we define X = {C, 0₁, ..., 0_k, D, 0₁, ..., 0_l}, where k = m − |C| and l = m − |D|, as the joined notes of C and D, each part zero-filled to order m. Let Y = {|X_t − X_{t+m}|} with 0 ≤ t < m be the differences between respective notes of C and D, which works because we have zero filled for uneven chords. If σ is the standard deviation and E the expected value, we compute the change of a chord as

Δ = σ(Y)/E(Y),   E(Y) ≠ 0   (4.17)
Definition 13 (Spectacle of a Chord). Given successive chords C and D, whose tones are mapped to Z₁₂, the spectacle of a chord is a measure from 0 to 1 where

S_{k+1} = ((N − 1) · S_k + |C_k − D_k|/144) / N   (4.18)
Although mathematically straightforward, Definition 12 can be tricky to envision. We are taking the differences between notes in two chords to determine to what extent they have changed uniformly. If the differences between the notes of any two chords are uniform (i.e., a transposition) we have a value of 1, and something less than 1 otherwise. When we combine everything we've discussed so far we can begin to see the formation of the view space of a chord. Based upon the previous definitions and theorems, we propose computations allowing us to transform a series of chords, and


subsequently their behaviors, into the visual space. We call this Tigger's Roar.
Definition 14 (Tigger's Roar). Given a series of chords processed in time order, the view space of a chord is defined to be

Values: (v₁, v₂, v₃) = (h, h + Δ, δh)
Focus: (c₁, c₂, c₃) = (Θ_H, Θ_F, Δ)
Colors: (RGB₁, RGB₂, RGB₃) = The Stripes Theorem
Spectacle: Definition 13
Elements: E = 30 + (1 − χ) · 50 + Spectacle · 50
The notation in Definition 14 may be confusing. It is assumed all calculations are done with respect to the appropriate centricity and the most recent chord in the sequence. Definition 14 is really an amalgamation of our work, combining the properties of music, synesthetes' response, Fourier analysis, note and chord detection, and changes to melody over time. It is our final calculation and there is not much at this point that can be proven mathematically. Figure 4.24 illustrates the visual space computed using Definition 14 against 100 seconds of symphony music. It appears we have, at least in part, achieved our goal of a view space that represents the changes of the melody. As desired, the primary focus and primary value demonstrate a strong correlation, which means we have captured harmonic consistency. It would also seem we have addressed centricity in the focus space because the secondary and tertiary foci are stable and consistent, which follows the theory of conjunct melodic motion.


Figure 4.24: Visual Space Example
It is encouraging to see that the third element of the value space spikes only a few times. Recall that this is the chord magnitude scaled by a dissonance factor so we would not expect to see this value grow often. Unfortunately, it is impossible to truly deduce how the animation engine will behave from this graph.


5. Results
5.1 Experimentation
At this point we can only speculate as to how well our data will reflect the music. We must press onward, integrate the view space algorithm into our original time domain engine, and visually examine the results. Figures 5.1 and 5.2 illustrate output using the melody-parameterized view space from Section 4.4. We will leave a more rigorous study and discussion of the various animation techniques for future work; for the time being one can see our current implementation in Appendix A.0.17. The imagery is stunning and we observe consistent connections between tone change and coloring, as well as a coarse association between the geometry and melody. Unfortunately, it is impossible to capture the behavior of a fluid animation sequence in a document.
Figure 5.1: Beethoven Minuet in G; Figure 5.2: Techno Electronica (She Nebula)
Our thesis addresses the construction of a processing framework capable of detecting frequencies, characterizing tones and generating a parameterized geometry from the music. At this point we have only scratched the surface in terms of the ability to
visualize melody but our prototype demonstrates interesting behavior. For the time


being we would like feedback on our progress, so we devise a survey and supply it to a small number of individuals. We created an application and framework that plays several songs to a listener. Each song is followed by a questionnaire whose results are tallied and emailed to us upon completion. The questions are based upon a weighted system where we define either positive or negative results from the response. The areas of discovery are as follows: a) neutralize any bias toward the music genre, b) neutralize a bias toward the aesthetics, c) heavily weight positive associations of mood and synchronization. To that end we devised the questions and answers in Table 5.1.
Table 5.1: Questions and Answers
Importance | Question
2 | Do you like this type of music?
3 | Were the visuals aesthetically pleasing?
8 | How much do you feel the visuals matched the genre of the song?
8 | Were the visuals in sync with the music?
10 | Do you feel the visuals captured the mood of the music?
10 | Did you see and hear enough to answer questions?
5 | Did the visuals keep your attention?

Weight | Answer
1 | Unsure
5 | Strongly Agree
4 | Agree
3 | Disagree
2 | Strongly Disagree
To calculate the score per module, where each module is associated with one song, we let Q = {q₁, q₂, ..., q_k} be the importance of each question based on Table 5.1 and A = {a₁, a₂, ..., a_k} be the answers to each question. We compute the best possible answer based upon the following: the subject strongly dislikes the music type; strongly dislikes the visuals; strongly believes the visuals matched the type of music; strongly found the visuals to be synchronized with the music; strongly believes the visuals captured the mood; strongly feels they saw enough to make a decision; and was strongly focused on the experiment, which yields
Definition 15 (Best Score). B = {2, 2, 5, 5, 5, 5, 5}


This implies the worst possible answer to be based upon the following: the subject strongly likes the music type; strongly likes the visuals; strongly believes the visuals did not match the type of music; strongly believes the visuals were out of sync; strongly believes the visuals did not capture the mood; strongly believes they did not see enough to make a decision; and strongly believes they were distracted, which yields
Definition 16 (Worst Score). W = {5,5, 2, 2, 2, 2, 2}
Equation 5.1 calculates the score as a weighted sum of the answers, where each answer contributes points in proportion to its question's importance.

Let an inverse score be s_i = ((5 − a_i)/5) · q_i

Let a direct score be t_i = (a_i/5) · q_i   (5.1)

Then we compute

score(S) = s₁ + s₂ + t₃ + t₄ + t₅ + t₆ + t₇

grade = score(S) / score(B)
Since the best possible grade is score(B) = 45, we divide the user's score from Equation 5.1 by 45 to get a grade for a song. The source code for processing the survey results can be found in A.0.20. In future work this foundation allows us to continue our research as the implementation grows more sophisticated.
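A compact sketch of the scoring in Equation 5.1, computing score(B) directly from Definition 15 rather than hardcoding the constant (names and layout are ours, not the Appendix A.0.20 code):

using System;

static class SurveySketch
{
    static readonly int[] Q    = { 2, 3, 8, 8, 10, 10, 5 };   // importances, Table 5.1
    static readonly int[] Best = { 2, 2, 5, 5, 5, 5, 5 };     // Definition 15

    // Equation 5.1: the first two questions are inversely scored so that
    // liking the genre or the visuals does not inflate the grade.
    static double Score(int[] a)
    {
        double s = 0.0;
        for (int i = 0; i < Q.Length; i++)
            s += (i < 2 ? (5.0 - a[i]) / 5.0 : a[i] / 5.0) * Q[i];
        return s;
    }

    public static double Grade(int[] answers) => Score(answers) / Score(Best);
}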
5.2 Survey Results
The survey consisted of 11 people ranging from age 9 to 75 with both male and
female participants. Figure 5.3 shows that Techno Electronica and Symphony music
scored the highest. This is not at all surprising being that both were commonly used
during development. The averages in Figure 5.4 seem low, but are much better than



Figure 5.3: Survey Results by User (Music Genre vs Grade %)
Figure 5.4: Survey Results, Average Grade by Genre
we had expected. When we compare the scores in Figure 5.4 with Figure 5.5 the dramatic shift within the same genre underlines the challenge we face with regard to subjectivity. It would be premature to draw any solid conclusions from this experiment, however we can state that we have a quantifiable measure of success and clearer understanding of external perception.


Figure 5.5: Survey Results, Top and Bottom 5 Scores
5.3 Conclusions and Future Work
As we have mentioned, time fidelity poses a real challenge to our ability to accurately model sound. Not only do we intend to evaluate our detection accuracy with realistic data, we intend to increase the time fidelity and extract sub-second peaks to improve characterization. Because we have observed that the Fourier transform is sensitive to the sample size, we must research adjusting all calculations. It follows that we will need a more effective detection; we may possibly use the technique proposed by Jehan [18]. We would also like to consolidate the entire model (end-to-end) in a nicely compacted algebraic factorization; the idea is to find a closed-form computation, which gets us closer to being able to prove an isomorphism. The issue of melody is very complex, and it is still unclear how important it will be in our effort, but we shall continue to research this. We intend to continue focusing on the geometry of music and on mathematical representations of melody. Although we have captured a


good amount thus far, we want to continue researching what it means to be mathematically uplifting or depressing in terms of a pitch class operation. We also intend to expand upon our initial work with transposition and inversion and unlock a more sophisticated relationship within the musical chord progressions. Most importantly, now that we have a solid framework to expand upon, we plan on experimenting with many forms of geometry and animation to include fractals, fluid dynamics and other visuals that may be more conducive to intuitive representation.
5.3.1 Tangential Applications
There are many directions we can take this research and many of them are practical. One of the most uplifting is the ability to allow a hearing impaired person to see what their environment sounds like, not only for entertainment purposes but for safety and comprehension. Suppose that a deaf person has a dog and that person has never heard the animal's voice. Our research may allow them to associate unique patterns of imagery with the mood or timbre of the animal, allowing them to discern anger, happiness, concern or what have you. This may also be extended to include common sounds around the household. Imagine that a microwave ding goes off, or the oven timer, or the washer/dryer. More importantly, imagine that someone breaks open the front door or a window, or is yelling for help. A large panel on various walls of the house, or even a hand held device, could alert the user to the sounds of the environment. With the proper mathematics the images would be consistent and therefore distinctly identifiable. You could say it's like braille for the eyes!
5.3.2 The Lawnmower Filter
During our analysis we identified a few mathematical techniques which warrant further research and may prove useful going forward. The first technique involves instant filtering of unwanted frequency information using the matrix expansion of an exponential series. Recall the well known series e^x = 1 + x + x²/2! + ..., cos(x) = 1 − x²/2! + x⁴/4! − ..., and sin(x) = x − x³/3! + x⁵/5! − .... Recall Euler's equation e^{ix} = cos(x) + i sin(x). If we replace x with a matrix A and insert it into the exponential series we have e^A = I + A + A²/2! + .... Using Euler's equation we have e^{iA} = cos(A) + i sin(A) and we can solve for the cosine and sine using the series expansion and some convergence technique. Now suppose that A is a diagonal matrix whose elements A_{nn} are of the form {−2πf_{c₁}, −2πf_{c₂}, ...}. If we substitute the matrix form of the exponential function and multiply the exponent by the imaginary unit we have a simultaneous Fourier transform that checks for multiple frequencies at once: Y = Σ_{n=0}^{N−1} x_n e^{iA·n}. The risk here is that our response to multiple frequencies is bound together and likely to be inaccurate with respect to the current sample. Let's reverse our thinking from detection to filtering. We have the nth roots of unity but not necessarily a valid coefficient. If we select some magnitude p such that
X_k = Σ_{n=0}^{N−1} x_n · p · ‖e^{iA·n}‖_p   (5.2)
we may just be able to delete unwanted noise or other frequencies from the signal enough to see some desired pattern.
5.3.3 An Instrument Fingerprint
One topic we have not yet breached in our research is the ability to identify specific instruments. It is unclear how much this will affect our overall goal, but some of the Synesthesia research demonstrates specific colors and shapes chosen based upon the musical instrument [32]. We propose one possible technique to identify the signature, or fingerprint if you will, of an instrument. Our approach is quite simple: we anchor a value at the first point in the function where the slope is positive. We draw a curve (line) to each successive point from the anchor, making little triangles, and compute the angle off the x-axis for each triangle. We do this until the slope goes negative. If we add the results together for some portion of the entire wave, we have a value for each positively monotonic segment of the function. Figure 5.6 illustrates a crude


depiction of the geometry. The hope is that each instrument can theoretically be modeled by some function f(x) whose derivative identifies a unique signature to that instrument.
Figure 5.6: Fingerprint Technique
One possible solution might be to integrate over each segment of positive slope and add all the angles together. Let

v(x) = 0 if f′(x) ≤ 0; v(x) = 1 if f′(x) > 0

then the function

F = ∫_{v(x) ≠ 0} tan⁻¹(‖f′(x)‖) dx   (5.3)
Equation 5.3 provides a continuous model of this approach that aggregates the front side slope of some series of impulses over a desired region of the wave. We simply derive a discrete version of this function, combine it with Equation 5.2 to reduce noise, and voilà: a unique value that describes the instrument. If we take the expected value over several samples and create a range of tolerance we have min/max bounds for detecting the instrument.
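One possible discretization of Equation 5.3, replacing f′(x) with first differences of the samples, might look as follows. This is purely a sketch of the idea, not a validated implementation, and the names are ours.

using System;

static class FingerprintSketch
{
    // Discrete Equation 5.3: over every rising step of the waveform,
    // accumulate atan(|slope|), i.e. the angle of that step off the x-axis.
    public static double Fingerprint(double[] x)
    {
        double sum = 0.0;
        for (int n = 1; n < x.Length; n++)
        {
            double slope = x[n] - x[n - 1];
            if (slope > 0.0) sum += Math.Atan(slope);
        }
        return sum;
    }
}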


5.3.4 Conclusions
Although we made significant progress, our research is far from over. We have learned a number of things about the nature of processing sound and the challenges of precision and timeliness. We successfully melded the neurological response of colored hearing with the computer system's ability to generate imagery. We demonstrated a mathematical mapping from sound to pictograph and successfully implemented a flexible framework capable of processing any sound in real time. We have demonstrated a substantial improvement in capturing the melody in the frequency domain versus the time domain. We have proposed and implemented a unique technique for musical frequency detection and note characterization. We have exposed our implementation to human subjects and seen encouraging results. All in all, our progress opens the door to a larger world and lays the foundation for more possibilities in extending human sensory perception.


REFERENCES
[1] Alissa S. Crans, Thomas M. Fiore, and Ramon Satyendra. Musical actions of dihedral groups. The Mathematical Association of America, June–July 2009.
[2] MIDI Manufacturers Association. History of midi. http://www.midi.org/aboutmidi/tut_history.php, 2015.
[3] Jan Dirk Blom. A Dictionary of Hallucinations, page 73. Springer Sci-ence+Business Media, 2010.
[4] C. Sidney Burras. Index mappings for multidimensional formulation of the DFT and convolution. IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-25 (3): 239-241, 1977.
[5] Carol Bergfeld Mills, Edith Howell Boteler, and Glenda K. Larcombe. Seeing things in my head: A synesthete's images for music and notes. Perception, volume 32, pages 1359–1376, 2003.
[6] Curtis Roads, Stephen Travis Pope, Aldo Piccialli, and Giovanni De Poli. Musical Signal Processing. Swets & Zeitlinger B.V., Lisse, Netherlands, 1997.
[7] Michigan Tech Department of Physics. Musical signal frequencies. http://www. phy.mtu.edu/~suits/notefreqs.html, 2015.
[8] Zohar Eitan and Inbar Rothschild. How music touches: Musical parameters and listeners' audio-tactile metaphorical mappings. Psychology of Music, 39(4), pages 449–467, 2010.
[9] Shannon Steinmetz Ellen Gethner and Joseph Verbeke. A view of music. In Douglas McKenna Kelly Delp, Craig S. Kaplan and Reza Sarhangi, editors, Proceedings of Bridges 2015: Mathematics, Music, Art, Architecture, Culture, pages 289-294, Phoenix, Arizona, 2015. Tessellations Publishing. Available online at http://archive.bridgesmathart.org/2015/bridges2015-289.html.
[10] Giovanni Aloisio, G.C. Fox, Jai Sam Kim, and Nicola Veneziani. A concurrent implementation of the prime factor algorithm on hypercube. IEEE Transactions on Signal Processing, 39(1), 1991.
[11] I.J. Good. The interaction algorithm and practical fourier analysis. Journal of the Royal Statistical Society. Series B, 20(2):361-372, 1958.
[12] Thomas W. Hungerford. Abstract Algebra An Introduction, page 176. Brooks/-Cole, 2014.


[13] Thomas W. Hungerford. Abstract Algebra An Introduction, pages 169-179. Brooks/Cole, 2014.
[14] Thomas W. Hungerford. Abstract algebra an introduction, page 219. Brooks/-Cole, 2014.
[15] Johannes Itten. The art of color, page 34. Wiley & Sons INC, 1973.
[16] J. Michael Barbiere, Ana Vidal, and Debra A. Zellner. The color of music: correspondence through emotion. Empirical Studies of the Arts, Vol. 25(2), pages 193–208, 2007.
[17] James W. Cooley and John W. Tukey. An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19(90):297–301, Apr. 1965.
[18] Tristan Jehan. Musical signal parameter estimation. Master's thesis, IFSIC, Universite de Rennes, France, and Center for New Music and Audio Technologies (CNMAT), University of California, Berkeley, USA, 1997.
[19] George H. Joblove and Donald Greenberg. Color spaces for computer graphics. SIGGRAPH 5th Annual, pages 20-25, 1978.
[20] Harry F. Jordan and Gita Alaghband. Fundamentals of parallel processing, page 160. Pearson Education, 2003.
[21] Juan Pablo Bello, Giuliano Monti, and Mark Sandler. An implementation of automatic transcription of monophonic music with a blackboard system. Irish Signals and Systems Conference (ISSC 2000), 2000.
[22] Juan Pablo Bello, Giuliano Monti, and Mark Sandler. Techniques for automatic music transcription. 2000.
[23] Guillaume Leparmentier. Manipulating colors in .net. http://www. codeproject.com/Articles/19045/Manipulating-colors-in-NET-Part, 2016.
[24] Michal Levy. Giant steps, http://www.michalevy.com/giant-steps/index. html, 2015.
[25] David Lewin. Generalized musical intervals and transformations, pages 9-11. Oxford University Press, 1 edition, 2011.
[26] Stephen Malinowski. The music animation machine. http://www. kunstderfuge.com/theory/malinowski.htm, 2016.
[27] Lawrence E. Marks. On associations of light and sound, the mediationof brightness, pitch and loudness. The American Journal of Psychology, Vol. 87, No 1/2, pages 173-188, Nov 2016.
79


[28] Paul Masri and Andrew Bateman. Improved modeling of attack transients in music analysis-resynthesis. pages 100-103, 1996.
[29] James A. Moorer. On the transcription of musical sound. Computer Music Journal, 1 (4) :3238, 1977.
[30] Alan V. Oppenheim. Speech spectographs using the fast fourier transform. IEEE Spectrum, 7(8):57-62, Aug 1970.
[31] Konstantina Orlandatou. Sound characteristics which affect attributes of the synaesthetic visual experience. Musicae Scientiae, Vol. 19(4), page 389401, 2015.
[32] Otto Ortmann. Theories of synesthesia in the light of a case of color-hearing. Human Biology, 5(2): 155-211, May 1933.
[33] Otto Ortmann. Theroies of synesthesia in the light of a case of color-hearing. Human Biology, 5(2): 176, May 1933.
[34] Martin Piszczalski and Bernard A. Galler. Automatic music transcription. Computer Music Journal, 1 (4):24 31, Nov. 1977.
[35] Cornel Pokorny. Computer Graphics An Object-Oriented Approach To The Art And Science. Franklin, Beedel and Associates Incorporated, 1 edition, 1994.
[36] Frank B. Wood Richard E. Cytowic. Synesthesia i. a review of major theories and thier brain basis, pages 36-49, 1982.
[37] Frank B. Wood Richard E. Cytowic. Synesthesia ii. psychophysical relations in the synesthesia of geometrically shaped taste and colored hearing. Ninth Annual Meeting of the International Neuropsychological Society, Brain and Cognition, pages 36-49, 1982.
[38] Stephen W. Smith. The Scientists and Engineers Guide to Digital Signal Processing. California Technical Publishing, 1997.
[39] Praveen Sripada. Mp3 decoder in theory and practice. Masters thesis, Blekinge Institude of Technology, March 2006.
[40] Clive Temperton. Implementation of a self-sorting in-place prime factor FFT algorithm. Journal of Computational Physics, 58(4):283-299, 1985.
[41] Clive Temperton. A generalized prime factor FFT algorithm for n = 2p3q5r. SIAM, 13(3):676-686, May 1992.
[42] Dimitri Tymoczko. The geometry of muscial chords. Science, 313(0036-8075):72, July 2006.
[43] Dimitri Tymoczko. A Geometry of Music. Oxford University Press, 1 edition, 2011.
80


[44] Dimitri Tymoczko. A geometry of music, pages 35-36. Oxford University Press, f edition, 2011.
[45] Dimitri Tymoczko. A geometry of music, pages 28-32. Oxford University Press, 1 edition, 2011.
[46] Dimitri Tymoczko. A geometry of music, pages 33-34. Oxford University Press, 1 edition, 2011.
[47] C. van Campen. The Hidden Sense: Synesthesia in Art and Science. Leonardo (Series) (Cambridge, Mass.). MIT Press, 2008.
[48] Dennis G. Zill and Patrick D. Shanahan. A First Cource in Complex Analysis with Applications, page 35. Jones and Bartlett, 2009.
81


APPENDIX A. Source Code
The entire application framework comprises several thousand lines of source code, covering everything from UI components to logging utilities. We offer here the subset of classes and algorithms most pertinent to our thesis.


A.0.5 Musical Detect Class
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Toolkit.Media.DSP {
    /// <summary>
    /// Provides a musical detection structure that defines the
    /// detection of some musical note and all its details.
    /// </summary>
    public struct MusicalDetect {
        public static readonly MusicalDetect Empty =
            new MusicalDetect(MusicalNote.Empty, 0f, 0f, 0);

        #region Private
        private long myTimeOn;
        private long myDuration;
        private float myRf;
        private float myAmp;
        private MusicalNote myNote;
        private int myHarmonic;
        #endregion

        /// <summary>
        /// Create the detection of a note.
        /// </summary>
        /// <param name="note">The note found</param>
        /// <param name="rf">The frequency it was found at</param>
        /// <param name="amp">The intensity</param>
        /// <param name="timeOn">The first epoch since start of play it was detected, in milliseconds</param>
        public MusicalDetect(MusicalNote note, float rf, float amp, long timeOn) {
            myNote = note;
            myRf = rf;
            myAmp = amp;
            myTimeOn = timeOn;
            // Infer the harmonic index from the ratio of measured frequency to fundamental.
            myHarmonic = (int)Math.Ceiling(Math.Log(rf / note.Fundamental, 2));
            myDuration = 0;
        }

        /// <summary>
        /// Create the detection of a note.
        /// </summary>
        /// <param name="note">The note found</param>
        /// <param name="rf">The frequency it was found at</param>
        /// <param name="amp">The intensity</param>
        /// <param name="timeOn">The first epoch since start of play it was detected, in milliseconds</param>
        /// <param name="duration">The duration this note lasts in milliseconds</param>
        public MusicalDetect(MusicalNote note, float rf, float amp, long timeOn, long duration)
            : this(note, rf, amp, timeOn) {
            myDuration = duration;
        }

        #region Properties
        /// <summary>
        /// Get the first observed time of this detection from epoch since
        /// stream start in milliseconds.
        /// </summary>
        public long TimeOn { get { return myTimeOn; } set { myTimeOn = value; } }

        /// <summary>
        /// Get the duration of this note in milliseconds.
        /// </summary>
        public long Duration { get { return myDuration; } set { myDuration = value; } }

        /// <summary>
        /// Get the original sampled rf value.
        /// </summary>
        public float Rf { get { return myRf; } }

        /// <summary>
        /// Get the impulse value.
        /// </summary>
        public float Amp { get { return myAmp; } }

        /// <summary>
        /// Get the note this matches.
        /// </summary>
        public MusicalNote Note { get { return myNote; } }

        /// <summary>
        /// Get/Set the optional harmonic if known.
        /// </summary>
        public int Harmonic { get { return myHarmonic; } set { myHarmonic = value; } }
        #endregion

        /// <summary>
        /// Is this a non detect.
        /// </summary>
        public bool IsEmpty {
            get { return myNote.IsUnknown && myTimeOn == 0 && myRf == 0 && myAmp == 0; }
        }

        #region Object Overrides
        public override bool Equals(object obj) {
            MusicalDetect det = (MusicalDetect)obj;
            return myTimeOn == det.myTimeOn && myRf == det.myRf &&
                   myAmp == det.myAmp && myNote.Equals(det.myNote);
        }

        public override int GetHashCode() {
            return ToString().GetHashCode();
        }

        public override string ToString() {
            return "(" + myNote.Note + " " + Harmonic + " " + Rf + ") On: " + myTimeOn + " Amp: " + myAmp;
        }
        #endregion
    }
}
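To illustrate, the first constructor infers the harmonic index from the ratio of the measured frequency to the note's fundamental, so a 440 Hz detection bound to the note A (fundamental 27.5 Hz) resolves to ceil(log2(440/27.5)) = ceil(log2 16) = 4, the fourth octave harmonic. A minimal usage sketch of ours (not part of the original listing), assuming the A0n note constant defined in the Music utility class of Section A.0.7 and hypothetical detector values:

// Hypothetical detector output: 440 Hz at intensity 0.8, first seen 1500 ms into the stream.
MusicalDetect det = new MusicalDetect(Music.A0n, 440f, 0.8f, 1500);
Console.WriteLine(det.Harmonic); // 4, i.e. ceil(log2(440 / 27.5))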


A.0.6 Musical Note Class
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Toolkit.Media.DSP {
    /// <summary>
    /// Immutable structure that holds information about a musical note.
    /// </summary>
    public struct MusicalNote {
        #region Fields
        public static readonly MusicalNote Empty = new MusicalNote(eMusicNote.Unknown, 0, 0);
        private eMusicNote myNote;
        private float myRf;
        private float myTiggerHarmonic;
        #endregion

        public MusicalNote(MusicalNote inOther)
            : this(inOther.myNote, inOther.myTiggerHarmonic, inOther.myRf) { }

        /// <summary>
        /// Create the musical note.
        /// </summary>
        /// <param name="inNote">The note enumeration</param>
        /// <param name="inTiggerRf">The tigger mapping</param>
        /// <param name="inRf">The corresponding fundamental frequency</param>
        public MusicalNote(eMusicNote inNote, float inTiggerRf, float inRf) {
            myNote = inNote;
            myRf = inRf;
            myTiggerHarmonic = inTiggerRf;
        }

        #region Properties
        /// <summary>
        /// Get the raw note name.
        /// </summary>
        public eMusicNote Note { get { return myNote; } }

        /// <summary>
        /// Get the fundamental frequency for this note.
        /// </summary>
        public float Fundamental { get { return myRf; } }

        /// <summary>
        /// Get the calculated Tigger Rf.
        /// </summary>
        public float TiggerHarmonic { get { return myTiggerHarmonic; } }
        #endregion

        #region Utility and Transforms
        /// <summary>
        /// Generate step harmonics above this tone.
        /// </summary>
        public MusicalDetect Harmonic(int step) {
            // Each octave step doubles the frequency.
            float newRf = (float)(myRf * Math.Pow(2.0, (float)step));
            MusicalDetect md = new MusicalDetect(this, newRf, 1, 0);
            md.Harmonic = step;
            return md;
        }
        #endregion

        #region Object Overrides
        public bool IsUnknown { get { return myNote == eMusicNote.Unknown && myRf == 0; } }

        public override string ToString() {
            return "(" + myNote + ":" + myRf + ":" + myTiggerHarmonic + ")";
        }

        public override bool Equals(object obj) {
            MusicalNote mn = (MusicalNote)obj;
            return myNote == mn.myNote && myRf == mn.myRf &&
                   myTiggerHarmonic == mn.myTiggerHarmonic;
        }

        public override int GetHashCode() {
            return ToString().GetHashCode();
        }
        #endregion
    }
}
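Harmonic(step) runs the same relationship in the opposite direction: from a known note it synthesizes the MusicalDetect expected step octaves up, which could be used to build harmonic templates for matching against spectrum peaks. A minimal sketch of ours, again assuming the A0n constant (27.5 Hz) from the Music utility class of Section A.0.7:

// Expected 4th harmonic of A0: 27.5 Hz * 2^4 = 440 Hz.
MusicalDetect expected = Music.A0n.Harmonic(4);
Console.WriteLine(expected.Rf);       // 440
Console.WriteLine(expected.Harmonic); // 4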


A.0.7 Music Utility Class
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing;
using Core.Drawing;
using Core.Mathematics;
using Core.Mathematics.Algebra;
using Core.Mathematics.Stats;
using Core.Mathematics.Analysis;
using System.Diagnostics;

namespace Toolkit.Media.DSP {
    /// <summary>
    /// Provides a table of fundamental frequencies for musical tones.
    /// </summary>
    public class Music {
        private static readonly Random ourRandom = new Random();

        #region Performance Testing Constants
        public static bool UseWeakDetect = false;
        /// <summary>
        /// Always has the runtime (ms) of the most recent call to Guess.
        /// </summary>
        public static long GuessTime = 0;
        #endregion

        #region Private Constants
        /// <summary>
        /// Map of unique integers to musical notes.
        /// </summary>
        private static readonly Dictionary<int, MusicalNote> ourTiggerMap;
        /// <summary>
        /// Map of integer fundamental frequencies to the note.
        /// </summary>
        private static readonly Dictionary<int, MusicalNote> ourIntegerFund;
        /// <summary>
        /// Base color value to a tigger value.
        /// </summary>
        private static readonly Dictionary<int, int> ourTiggerColorMap;
        /// <summary>
        /// Consonance delta. (Difference between C/C#)
        /// </summary>
        private static readonly float DISONANCE_1 = 100f;
        private static readonly float DISONANCE_2 = 150f;
        #endregion

        #region Private Static Util
        /// <summary>
        /// Create the full harmonic set for the given sample rate.
        /// </summary>
        private static float[] CreateHarmonic(int sampleRate) {
            // Create all fundamentals and harmonics below the Nyquist limit.
            List<float> rfs = new List<float>();
            int max = (int)Math.Ceiling(Math.Log(sampleRate / 2.0f / Music.Min, 2.0f));
            // For each note generate each harmonic below half the sample rate.
            for (int r = 0; r < Music.Notes.Length; r++) {
                for (int k = 0; k < max; k++) {
                    float rf = (float)(Music.Notes[r].Fundamental * Math.Pow(2.0f, k));
                    if (rf < (sampleRate / 2.0f))
                        rfs.Add(rf);
                }
            }
            return rfs.ToArray();
        }
        #endregion

        #region Static Constructor
        static Music() {
            // Map each note's tigger integer to the note itself.
            ourTiggerMap = new Dictionary<int, MusicalNote>();
            for (int i = 0; i < Notes.Length; i++)
                ourTiggerMap.Add((int)Notes[i].TiggerHarmonic, Notes[i]);
            // Create fundamentals.
            Fundamentals = new float[Notes.Length];
            for (int i = 0; i < Notes.Length; i++)
                Fundamentals[i] = Notes[i].Fundamental;
            // Create harmonics for each known sample rate.
            Harmonics = new Dictionary<int, float[]>();
            for (int i = 0; i < KnownSampleRates.Rates.Length; i++) {
                float[] vals = CreateHarmonic((int)KnownSampleRates.Rates[i]);
                Harmonics.Add((int)KnownSampleRates.Rates[i], vals);
            }
            // Create integer fundamentals.
            ourIntegerFund = new Dictionary<int, MusicalNote>();
            for (int i = 0; i < Notes.Length; i++)
                ourIntegerFund.Add((int)Notes[i].Fundamental, Notes[i]);
            // Create tigger color map (per the colored-hearing tables of Chapter 2).
            ourTiggerColorMap = new Dictionary<int, int>();
            ourTiggerColorMap.Add(TiggerHarmonic(C0), Color.Blue.ToArgb());
            ourTiggerColorMap.Add(TiggerHarmonic(Cs0), Color.Blue.ToArgb());
            ourTiggerColorMap.Add(TiggerHarmonic(D0), Color.Red.ToArgb());
            ourTiggerColorMap.Add(TiggerHarmonic(Ds0), Color.Red.ToArgb());
            ourTiggerColorMap.Add(TiggerHarmonic(E0), Color.Yellow.ToArgb());
            ourTiggerColorMap.Add(TiggerHarmonic(F0), Color.Brown.ToArgb());
            ourTiggerColorMap.Add(TiggerHarmonic(Fs0), Color.Brown.ToArgb());
            ourTiggerColorMap.Add(TiggerHarmonic(G0), Color.Green.ToArgb());
            ourTiggerColorMap.Add(TiggerHarmonic(Gs0), Color.Green.ToArgb());
            ourTiggerColorMap.Add(TiggerHarmonic(A0), Color.Green.ToArgb());
            ourTiggerColorMap.Add(TiggerHarmonic(As0), Color.Green.ToArgb());
            ourTiggerColorMap.Add(TiggerHarmonic(B0), Color.Black.ToArgb());
        }
        #endregion

        #region Public Constants
        // Fundamental frequencies (Hz) of the zeroth octave, per Table 4.1.
        public const float C0 = 16.35f;
        public const float Cs0 = 17.32f;
        public const float D0 = 18.35f;
        public const float Ds0 = 19.45f;
        public const float E0 = 20.60f;
        public const float F0 = 21.83f;
        public const float Fs0 = 23.12f;
        public const float G0 = 24.50f;
        public const float Gs0 = 25.96f;
        public const float A0 = 27.50f;
        public const float As0 = 29.14f;
        public const float B0 = 30.87f;

        /// <summary>
        /// Maximum fundamental Rf in the table.
        /// </summary>
        public const float Max = B0;
        /// <summary>
        /// Minimum fundamental Rf in the table.
        /// </summary>
        public const float Min = C0;

        public static readonly MusicalNote C0n = new MusicalNote(eMusicNote.C, Music.TiggerHarmonic(C0), C0);
        public static readonly MusicalNote Cs0n = new MusicalNote(eMusicNote.Cs, Music.TiggerHarmonic(Cs0), Cs0);
        public static readonly MusicalNote D0n = new MusicalNote(eMusicNote.D, Music.TiggerHarmonic(D0), D0);
        public static readonly MusicalNote Ds0n = new MusicalNote(eMusicNote.Ds, Music.TiggerHarmonic(Ds0), Ds0);
        public static readonly MusicalNote E0n = new MusicalNote(eMusicNote.E, Music.TiggerHarmonic(E0), E0);
        public static readonly MusicalNote F0n = new MusicalNote(eMusicNote.F, Music.TiggerHarmonic(F0), F0);
        public static readonly MusicalNote Fs0n = new MusicalNote(eMusicNote.Fs, Music.TiggerHarmonic(Fs0), Fs0);
        public static readonly MusicalNote G0n = new MusicalNote(eMusicNote.G, Music.TiggerHarmonic(G0), G0);
        public static readonly MusicalNote Gs0n = new MusicalNote(eMusicNote.Gs, Music.TiggerHarmonic(Gs0), Gs0);
        public static readonly MusicalNote A0n = new MusicalNote(eMusicNote.A, Music.TiggerHarmonic(A0), A0);
        public static readonly MusicalNote As0n = new MusicalNote(eMusicNote.As, Music.TiggerHarmonic(As0), As0);
        public static readonly MusicalNote B0n = new MusicalNote(eMusicNote.B, Music.TiggerHarmonic(B0), B0);

        /// <summary>
        /// List of all Rfs for musical tones starting with C -> B.
        /// </summary>
        public static readonly MusicalNote[] Notes = {
            C0n, Cs0n, D0n, Ds0n, E0n, F0n, Fs0n, G0n, Gs0n, A0n, As0n, B0n,
        };

        /// <summary>
        /// Has the list of fundamental frequencies in order from C to B.
        /// </summary>
        public static readonly float[] Fundamentals;

        /// <summary>
        /// A map from a sample rate to float arrays, each of which is the
        /// set of fundamentals and harmonics in order.
        /// Key := The sample rate. Value := The frequencies.
        /// </summary>
        public static readonly Dictionary<int, float[]> Harmonics;
        #endregion

        #region Characterization Methods
        /// <summary>
        /// Execute the Tigger harmonic theorem which maps a frequency to a unique integer.
        /// </summary>
        public static int TiggerHarmonic(float f) {
            return (int)(100.0f * (f / Math.Pow(2, Math.Floor(Math.Log(f, 2))) - 1.0f));
        }

        /// <summary>
        /// Use the Tigger harmonic to determine what note corresponds to the given frequency.
        /// </summary>
        public static MusicalNote Characterize(float f) {
            int th = TiggerHarmonic(f);
            if (ourTiggerMap.ContainsKey(th))
                return ourTiggerMap[th];
            if (UseWeakDetect) {
                if (ourIntegerFund.ContainsKey((int)f))
                    return ourIntegerFund[(int)f];
            }
            // Otherwise pick the note whose tigger value is nearest.
            float sm = float.MaxValue;
            int smI = -1;
            for (int i = 0; i < Notes.Length; i++) {
                float t = Math.Abs(Notes[i].TiggerHarmonic - th);
                if (t < sm) { sm = t; smI = i; }
            }
            return Notes[smI];
        }

        /// <summary>
        /// Determine the presence of which notes exist in the given input stream of
        /// samples until end of stream is reached.
        /// </summary>
        /// <param name="X">The sequence of sample values</param>
        /// <param name="offset">The starting offset to execute in X</param>
        /// <param name="sampleRate">The sample rate</param>
        // (The body of this detection method is not reproduced in this excerpt
        // of the listing.)
        #endregion
    }
}
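The TiggerHarmonic method above divides out the octave term 2^floor(log2 f) before scaling, so every harmonic f0 * 2^k of a note collapses to the same integer while distinct pitch classes land on distinct integers. The following short, self-contained sketch of ours (written for this appendix, not part of the framework) evaluates the same formula on the Table 4.1 fundamentals:

using System;

class TiggerDemo {
    // Same mapping as Music.TiggerHarmonic above: strip the octave,
    // keep the scaled position within it as an integer.
    static int Tigger(float f) {
        return (int)(100.0f * (f / Math.Pow(2, Math.Floor(Math.Log(f, 2))) - 1.0f));
    }

    static void Main() {
        // Every octave of A (27.5, 55, 110, 220, 440, 880 Hz) maps to 71.
        for (int k = 0; k <= 5; k++) {
            float f = 27.5f * (1 << k);
            Console.WriteLine("A{0} ({1} Hz) -> {2}", k, f, Tigger(f));
        }
        // Neighboring pitch classes receive distinct integers.
        Console.WriteLine("C0  (16.35 Hz) -> {0}", Tigger(16.35f)); // 2
        Console.WriteLine("C#0 (17.32 Hz) -> {0}", Tigger(17.32f)); // 8
    }
}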


// Fragment of the survey scoring utilities (Section A.0.20, Survey Results).
// TopperLower retains a bounded list of the highest (or lowest) question scores.
public TopperLower(bool doTop, int cnt) {
    myTop = doTop;
    myCnt = cnt;
}

public void Add(SurveyScore ss) {
    foreach (QuestionScore q in ss) {
        // Fill the list until it reaches capacity.
        if (Count < myCnt) { base.Add(q); continue; }
        // Find and replace a low/high score.
        for (int i = 0; i < Count; i++) {
            if (myTop) {
                if (this[i].Score < q.Score) {
                    this[i] = q;
                    break;
                }
            }
            else if (this[i].Score > q.Score) {
                this[i] = q;
                break;
            }
        }
    }
}
#endregion
}
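As a usage sketch (hedged: SurveyScore, QuestionScore, and the list base class come from the surrounding survey framework, and allResponses is a hypothetical collection of survey responses), gathering the top five scores of Figure 5.5 might look like:

// Keep the five highest question scores seen across all responses.
TopperLower top5 = new TopperLower(true, 5); // true := track the top rather than the bottom
foreach (SurveyScore response in allResponses)
    top5.Add(response);

Note that the replacement scan overwrites the first stored score the newcomer beats rather than the minimum, so the result only approximates a strict top-N; that is presumably adequate for the small survey reported in Section 5.2.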
141



PAGE 1

SONICIMAGERY:AVIEWOFMUSICVIAMATHEMATICALCOMPUTER SCIENCEANDSIGNALPROCESSING by SHANNONSTEINMETZ BachelorofScience,MSU,1999 Athesissubmittedtothe FacultyoftheGraduateSchoolofthe UniversityofColoradoinpartialfulllment oftherequirementsforthedegreeof MasterofIntegratedSciences IntegratedSciences 2016

PAGE 2

ThisthesisfortheMasterofIntegratedSciencesdegreeby ShannonSteinmetz hasbeenapprovedforthe IntegratedSciencesProgram by EllenGethner,Chair GitaAlaghaband VarisCarey April18,2016 ii

PAGE 3

Steinmetz,ShannonMIS,IntegratedSciences SonicImagery:AViewofMusicviaMathematicalComputerScienceandSignal Processing ThesisdirectedbyAssociateProfessorEllenGethner ABSTRACT Forcenturieshumanshavestrivedtovisualize.Fromcavepaintingstomodern artworks,wearebeingsofbeautyandexpression.AconditionknownasSynesthesia providessomewiththeabilitytoseesoundasitoccurs.Weproposeamathematical computerscienceandsoftwarefoundationcapableoftransformingasound apriori intoavisualrepresentation.Weexploreandexploittechniquesinsignalprocessing, Fourieranalysis,grouptheoryandmusictheoryandattachthisworktoapsychologicalfoundationforcoloringandvisuals.Weproposeanewtheoremfortonedetection, aparallelizedFFTandprovideanalgorithmforchorddetection.Weprovideanextensiblesoftwarearchitectureandimplementationandcompiletheresultsofasmall survey. Theformandcontentofthisabstractareapproved.Irecommenditspublication. Approved:EllenGethner iii

PAGE 4

DEDICATION ThisworkisdedicatedtomylovingfamilyDiane,Kerlin,Brandie,Lathan,Olive andWesleySteinmetz,HarryLordeno,Syrina,KJ,JavierandmylittlebuddiesPaco andRambowhosathoursonendbeingignoredwhileItappedawayatmycomputer andscribbledonmywhiteboard.Iwouldalsoliketodedicatethisworktomygreat friendsCharlyRandallandBrianParkerwhogavemecondence,inspirationand ideasthroughoutmylifeaswellasMikeIiams,withoutwhom,I'dneverhavebeen giventheopportunitiesthatleadmedownthispath. iv

PAGE 5

ACKNOWLEDGMENT IwouldliketothanktheUniversityofColoradoandallthewonderfulfolksthat providedusopportunitiestoshareandgrowourresearch.Thisworkcouldnothave beenpossiblewithoutgreatmentors.IwouldliketothankDr.EllenGethnerfor herideas,inspirationandforbeingthesinglegreatestcontributortomyacademic experience.IwouldliketothankDr.MartinHuberwhoshowedmepatienceand understandingduringtoughtimesandoeredmethisincredibleopportunity.WithoutthesewonderfulscholarsIliterallywouldnothavemadeit.Iwouldalsoliketo thankJasonFiskforencouragingmetogrowandJimMuller,Dr.BobLindeman, DocStoner,ScottCambell,Dr.BlaneJohnsonandJeCaulderforbeingrolemodels andmentorsforsomanyyears. v

PAGE 6

TABLEOFCONTENTS Tables........................................ix Figures.......................................x Chapter 1.Introduction...................................1 2.Inspiration....................................3 2.1PreviousWork..............................3 2.1.1ALittleMusicPlease?......................4 2.2SignalBasics...............................7 2.3Synesthesia................................8 2.3.1AColoredHearingTheorem...................13 3.ProofofConcept................................16 3.1DiscoveryandApproach........................16 3.1.1ATimeDomainExperiment..................17 3.1.2InitialResults...........................20 3.1.3Approach.............................22 3.1.4AnimationTimeBudget.....................23 4.ResearchandDevelopment...........................25 4.1FourierAnalysisandFrequencyDetection...............25 4.1.1UnderstandingtheDFT.....................27 4.1.2AParallelizedFourierTransform................31 4.1.3ASyntheticTest.........................34 4.2ToneDetectionandCharacterization.................40 4.2.1ADetectorPredicate.......................40 4.2.2MusicalNoteCharacterization.................41 4.2.3RigorousCharacterizationAnalysis...............48 4.3ChordDetectionandCharacterization.................53 vi

PAGE 7

4.3.1AChordDetectionAlgorithm..................54 4.4MelodyAnalysis.............................59 4.4.1AGeneralizedParameterizedVisualizer............61 4.4.2Mmmmmm,TheMusicalMelodyMathematicalModularity MovementManager.......................63 5.Results......................................69 5.1Experimentation.............................69 5.2SurveyResults..............................71 5.3ConclusionsandFutureWork.....................73 5.3.1TangentialApplications.....................74 5.3.2TheLawnmowerFilter......................74 5.3.3AnInstrumentFingerprint...................75 5.3.4Conclusions............................77 References ......................................78 Appendix A.SourceCode...................................82 A.0.5MusicalDetectClass.......................83 A.0.6MusicalNoteClass........................85 A.0.7MusicUtilityClass........................87 A.0.8ChordClass............................95 A.0.9ChordDetectionClass......................97 A.0.10SoundProcessingBusClass...................100 A.0.11MediaUtilitiesClass.......................108 A.0.12PCMInfoClass.........................113 A.0.13MelodyAnalysis.........................116 A.0.14FourierTransformClass.....................119 A.0.15SynthesizerClass.........................122 vii

PAGE 8

A.0.16ComplexNumberClass.....................125 A.0.17VortexVisual...........................130 A.0.18VisualizerInterface........................134 A.0.19VisualizerSpace.........................135 A.0.20SurveyResults..........................137 viii

PAGE 9

TABLES Table 2.1MusicIntervals[25,43]...........................5 2.2SynesthesiaNoteColorAssociation...................12 2.3SynesthesiaToneColorMapping.....................12 3.1TimeDomainInitialParameterizations..................19 3.2AnimationTimeBudget..........................24 4.1FundamentalFrequencies[7,38].......................26 4.2DFTvsFFTPerformance.........................36 4.3FFTPerformanceExtractingtheFullSpectrumofTones........39 4.4FFTVersusDFTAccuracy.........................40 4.5InitialNoteGuessingResults........................45 4.6FinalNoteGuessingResults........................48 4.7AccuracyofRandomSignals........................49 4.8Detection&CharacterizationResultsInitialMetrics..........51 4.9Detection&CharacterizationResultsUndetected...........51 4.10Detection&CharacterizationCorrectedResultsFinalMetrics....53 4.11ChordDetectorBasicTest.........................59 4.12VisualSpaceAxioms............................63 5.1QuestionsandAnswers...........................70 ix

PAGE 10

FIGURES Figure 2.1TheFiveFeaturesofMusic.........................6 2.2 PCMTimeSeriesGraph ...........................8 2.3Hue,Saturation,BrightnessColorScale[23]................13 2.4TiggerStripes,16to31HzNoNoise..................15 2.5TiggerStripes,16to22.05KhzNoNoise................15 2.6TiggerStripes,16to22.05Khz,NoNoiseleft,50%Noiseright..15 3.1ExampleSetofParameterizedGeometricFigures.............20 3.2 Symphony-SoundDogs ............................20 3.3 SymphonyBeethoven12'thSymphony,BeethovenViolinSonata,Schubert'sMoment Musical ....................................21 3.4 TechnoElectronica-TermiteSerenity,NaoTokui,Shenebula,TermiteNeurology ..21 3.5 VariousComposers-DebussayClairdeLune,MozartEineKleine,MozartSonata,Chopin Etude .....................................21 3.6Researchapproach..............................23 4.1CosineFunction x = cos t .........................28 4.2 x =2 cos t +3 cos t =5 cos t ..................28 4.3RandomWaveformofMoreThanOneFrequency Note:Notanaccurategraph ..29 4.4DividingFrequencies, a + bi 2 C ......................29 4.5GeometryofComplexFrequency, C =ConstantAmplitude/Radius, k = Frequency, n =Realvaluedcoecient...................30 4.6SynthesizedSignalat27 : 5 2 k f 0 k 11 g ................35 4.7ComparisonofStandardDFTtoParallelizedFFT............36 4.8HanningWindow..............................37 4.9SynthesizedSignal f 2049,8000,16000,22031 g HzOver1Second....39 4.10SynthesizedTones@44.1Khz, f A,C,G# g Over5Seconds........46 x

PAGE 11

4.11ComparisonofRealvsDiscrete TiggerTheorem Notevs T a .....46 4.12AccuracyPlotNotevsHarmonicvsPercentAccuracy-100%...50 4.13SynthesizedRandomSounds@44.1Khz..................50 4.14Detection&CharacterizationResultsRunvs%Accuracy.......52 4.15Detection&CharacterizationResultsCorrectedRunvs%Accuracy.53 4.16UseCaseNewNote............................55 4.17UseCaseMatch..............................56 4.18UseCaseRefresh.............................56 4.19UseCaseResidual.............................57 4.20UseCaseExpiration............................57 4.21UseCaseKill................................57 4.22 D 4 GroupExample.............................60 4.23 D 12 PitchClass...............................60 4.24VisualSpaceExample............................68 5.1BeethovenMinuetinG...........................69 5.2TechnoElectronicaSheNebula......................69 5.3SurveyResultsbyUserMusicGenrevsGrade%............72 5.4SurveyResults,AverageGradebyGenre..................72 5.5SurveyResults,TopandBottom5Scores.................73 5.6FingerprintTechnique............................76 xi

PAGE 12

1.Introduction Forcenturieshumanshavestrivedtovisualize.Fromcavepaintingstomodern artworks,wearebeingsofbeautyandexpression.Withintheverynatureofour languageistheunderlyingdesiretoexpresswhatwefeelintermsofpictographic imagery.TheEnglishlanguageisladledwithtermssuchas"letmesee,"or"seewhat Imean,"andrarelydowegivetheunderlyingmeaningoftermsasecondthought. Whengivennewinformationintheclassroomweoftendesireapictureoftheconcept tosolidifyunderstanding.Whenwehearthewords csquaredequalsasquaredplus bsquared" theyportraylittleintuitionbutwhenshownarighttrianglesomething clicks.Thereisoftenachasmbetweenrepresentationandintuitionconstantlybeing lledbynewtechnologyandideas.Itiswithinthischasmwebeginourclimb. Ourthesisisinspiredbytheideaof Synesthesia ,whichisdenedasthecross modalityofsenses[37,47]andweaimtodeviseamathematicalcomputerscience capableoftransformingthephysicalshapeofasoundintoanintuitiverepresentation,agnosticofcultureorbackground.Forexample,imagineamusicianwithan instrumentconnectedtohisorhercomputerandasthemusicianplayss/hesees amazingpatterns,shapesandcolorscongruent"totheharmonyinreal-timethat representstheactualmood"ofthemelody.Similarly,onemayselectasongfrom anmp3,mp4,or.wavleandplaythemusicintoanapplicationcapableofrendering sonorities 1 astheyemerge.OurresearchleveragesFourierAnalysis,SignalProcessing Detection/Characterization,ComputerGraphics/Animation,GroupTheory,Musical Geometry,MusicTheoryandPsychology.Thistaskisdaunting,itrequiresnotonly aprofoundunderstandingofanumberofadvancedscienticdisciplinesbuttheabilitytointegrateseveralresearchareasintoacohesivemodelinvolvingtheoretical, subjectiveandexperimentalmethodologies. 1 Aterminmusictheorytodescribeacollectionofpitches. 1

PAGE 13

Inordertoprovidearigorousthesisandstillbeabletomaintainalevelofcreativityweaddressthreemajorfronts: a theconstructionofamathematicalmodeland computeralgorithms b thesubstantiationandderivationofaphilosophyinvolving thehumanperceptionofmusic c theaestheticsofcomputergenerated Art .Building fromtheworksofDmitriTymoczko[42,43],CytowicandWood[36,37],StephenW. Smith[38],JamesCoolyandJohnTukey[17],Bello,DePoli,Oppenheim[6,22,30], MichalLevy[24]andGethner,Steinmetz&Verbeke[9]tonameafew.Ourthesis takesonesmallsteptowardthederivationofamodelmathematics,algorithms,softwareandartisticcreativitycapableoftransformingthephysicalshape"ofasound intoimagerydivorcedfromculturalsubjectivity. 2

PAGE 14

2.Inspiration 2.1PreviousWork Sincethedawnoftheelectronicera,mathematicians,physicists,computerscientistsandelectricalengineershavebeenattackingtheseeminglyunsolvableproblem ofblindlycharacterizingatimeseries 1 .WhetherweareparsingadopplerRADAR system,humanspeechormusicweleveragemuchofthesamemathematicsand techniques.Ourendeavourhingesontheabilitytoextractnotesfromatimeseries ofrawenergyimpulses.Inthe1970'sMIT'sAlanOpppenheimpioneeredsomeof rsttechniquesinspeechtranscriptionandsignalsanalysis.In1977theUniversity ofMichigan'sMartinPiszczalskiandBernardGallerimplementedoneoftherst computeralgorithmstotranscribemonophonictones.Latermanyexpertssuchas JuanPabloBelloandGiovanniDePoliaddedvariousmethodologiestoimprovetranscriptionofmonophonicandpolyphonicinstruments,highfrequencydetection,peak detection,isolationandsoon.Theeldof polyphonicmusictranscription servesas aguidetoderivingamathematicalmodelandmethodology.Atremendousamount ofworkhasbeendoneintheareaofautomatictranscriptionbutsadly,thereis nomagicequationandthevariousapproachescomedowntotheirindividualtrade os[6,18,21,22,28{30,34,38]. Unfortunately,orfortunatelydependingonhowyoulookatitwemustswitch focusrapidlyinourresearchbecausewedrawfromsomanydisciplinesatonce.We turnourattentionnowtotheinspirationofmusicianandauthor MichalLevy [24] whosuersherselffromaconditionknownasSynesthesia.Inherbeautiful,procedurallygeneratedanimationsonecanseechoreographedimagerythatmirrorsthetempo andowofasong.MichalLevyconstructedseveralanimationstoincludethetitle GiantSteps" designedtointuitivelyexternalizehercondition.Anotherfamouscontributortomusicvisualsisthecomposerandcomputerscientist StephenMalinowski 1 AseriesofenergyimpulsesextractedfromananalogsignalcoveredinSection2.2. 3

PAGE 15

whointhe1980'sconstructedasimplisticbuteectivevisualizerwhichleveragesencodedMIDIinformationtocreateinjectiveanimations.StrangelyenoughMalinowski wasinspiredbyhisexperimentswithanimatedgraphicalscoresin1974aftertaking LSDandlisteningtoBach[26].Thisisnotonlyinterestingbutsubstantialbecause accordingtoPsychologicalresearchLSDmayinducesynesthesia[36]. 2.1.1ALittleMusicPlease? Itiswellknownthatmusicisunderpinnedbyageometricstructure[1,42,43]. Forourresearchitisimportanttodigestasmallamountofmusictheory,especially whenitcomestojargon.Engineersandscientistsareknownforthecompulsionto nameeverythingandmusicians,asitturnsout,arenodierent.Oneofthemost commonlyusedtermsistheterm interval .Anintervalisnothingmorethanthe distancebetweenanytwonotesandshouldbenostrangertomathematiciansasits meaningisconsistentinmusictheory.However,musicianscreatedconfusinglabels foreachofthenon-negativeintegersuptoandincluding12.Ascientistcouldgo insanetryingtomnemonicallyassociatethelabelssincethenumericvalueinthe labelhaslittletodowiththeactualinterval.Table2.1describestheintervalsand theirnames.Noticethatseventhisactuallyastepof10or11,andsixthisastepof 8or9.Wow! Anyway,wemustbeawareofthisnomenclatureasitisvitaltounderstanding muchofthepsychologicalresearchandmusictheoryresearchregardingtonality. Weturnnowtothefundamentalinspirationforourthesis,theworkofthegreat musictheoristandmathematicianDimitriTymzcko.TobefairtoDr.Tymzcko, inhisownwordshestates"Iamnotamathematician,[43]"howeverhisworkin musicgeometrycontradictssuchaclaim.Tymoczko'sworkisthegluethatholds oursuspicionsinplace.Inthebook"AGeometryofMusic",Tymzckodescribes ascienticmodelforthebehaviorofmusic.Hisclaims,manysubstantiatedand somenot,stronglysuggestsanobjectivecharacterizationofmelodyandharmony, 4

PAGE 16

Table2.1: MusicIntervals[25,43] Step Name 0 unison 1 minorsecond 2 majorsecond 3 minorthird 4 majorthird 5 prefectfourth 6 diminishedfth 7 perfectfth 8 minorsixth 9 majorsixth 10 minorseventh 11 majorseventh 12 octave whichboilsdowntovefundamentalfeatures: ConjunctMelodicMotion Acoustic Consonance HarmonicConsistency LimitedMacroharmony and Centricity .Each ofthesetermsissophisticatedanddiculttounderstandwithoutabackground inmusictheory.Wewillattempttodenethesetermsintuitivelywhilehopefully doingjusticetoTymzcko'swork. ConjunctMelodicMotion meansthatsome harmonydoesnotdierinitsintervalbetweennotesbytoolargeofanamount. Thisissubstantiatedbythefactthatchangesinfrequency,whicharetoosmall, areundetectableandchangesinfrequency,whicharetoolargeareoensive[43]. AcousticConsonance isthetermusedtodescribethatconsonantharmoniesare preferredtodissonantharmoniesandusuallyappearatstable"pointsinthesong. HarmonicConsistency isperhapsthemostimportanttousasthissuggesta sequenceoftoneswhosegeometricstructureissimilarwithinsomeframeofasound. LimitedMacroharmony canbethoughtofastheexternaldistanceofapassage ofmusicbeforeanoticeabletransitionoccurs. Centricity shouldbecomfortable tomostmathematiciansandengineersasitdescribesaninertialreferenceframeor 5

PAGE 17

centroidtothesequenceofmusic.Applyingthelawoflargenumberstomusictheory, onecanalmostenvisioncentricityastheexpectedvalueofthemusic.Figure2.1 providesoneinterpretationofthevefeatures,excludingconsonance.Eachframe representsamacroharmonyandwithineachframearetwochordsrepresentedby reddots.Noticethatthesecondchordinbothframesarealinearcombinationof therstscaledequally. Figure2.1: TheFiveFeaturesofMusic WewillleveragetheideasofTymoczkoinSection4.4whereweattempttogarner asenseofbehaviorfromasequenceofcharacterizedtonesinatimeseries.Fornow wecontinueonwardbydiscussingthedierentmodelsformusiccharacterization.It isimportanttonotethattranscriptionistheprimarysourceofourmathematical modelbutwearenotattemptingtotranscribemusichere.Toboundthescopeof thisthesiswewillbelessconcernedwiththespecicinstrument,timbreormusic notationthanwearewiththerawtonesandchordspresentinanygivensecondof atimeseries.Expandinguponbothtimedelityandinstrumentidenticationmay bepartoffuturework.Thereareseveralmodelsthatareusedwhenattackingthe problemofmusictranscription,ofthemthemostpopularhavebeen aBottomUp wherethedataowgoesfromrawtimetocharacterizednotes bTopDown whereone beginswith aposteriori knowledgeoftheunderlyingsignal cConnectionst ,which actslikeahumanbrainorneuralnetworkdividedintocellseachaprimitiveunit 6

PAGE 18

thatprocessesinparallelandattemptstodetectlinks dBlackboardSystems ,which isaverypopularandsophisticatedsystemthatallowsforascheduledopportunistic environmentthathasforward-esqueowoflogicwithafeedbackmechanismbased uponnewinformation[22][21].Itisthe BottomUp approachwechosetoleverage, largelyduetothefactthatweintendtoprocessrawsound apriori ,whichistosay, independentof aposteriori knowledge.Itisatthistimewetransitionourdiscussion towardanareaofstudydedicatedtothebrainandperception;wespeakofcourseof Psychologyandthoughitisasmallportionofourthesis,itisofmajorinuence. 2.2SignalBasics Ourresearchdependsuponthebehaviorofasoundwave.Soundwavestravel throughtheatmosphereandgenerallyrangefrom25Hzto25Khz[6,38]orinother words25cyclespersecondto25thousandcyclespersecond.Asweagetherange ofhumanhearingdecreasesbecauseourearbersbecomebrittleovertimeandcan nolongersensechangesatsuchahighrate[38].Thehumanearperceivessoundby thechangesinpressuregeneratedbythefrequencyonboththeupanddowncycle ofthewave[38,42].Thespeedatwhichthatpressurechanges e.g.,thefrequency isthewayinwhichabraininterpretsinformationassound.Thefasterthechange higherfrequency thehigherthepitchandviceversa.Therelationshipsodescribed providesaconduittodecomposingtherawinformationintoitsbasicpartsandin turnalgorithmicallyinterpretingandprocessinginformationaboutsound.Mostusers generallylistentomusicintheformofaCompactDisc,MP3Player,orfromatelevisionorotherstereosource;allofthesesystemsuseanencodingschemecalled PCM PCM standsfor P ulse C ode M odulationandisthepreferredmeansoftransmitting andstoringdigitalsoundinformationelectronically[2,39].Figure2.2illustratesa simpletimedomainsignal. Ifoneobservestheredlineasameasurementofhowintenseasoundisrecorded overaperiodoftimegoingfromlefttorightonourgraphthenonecangainagood 7

PAGE 19

Figure2.2: PCMTimeSeriesGraph ideaofhowasoundwaveisreceived.InFigure2.2thesmallbluedotsarediscrete pointsidentiedalongthecurve,inthiscasea3Hzanalogwave.These Sample pointsarewhereamicroprocessorsystem,suchasananalogtodigitalconverter, wouldmeasurethesoundwaveheightandstoreitforuse.Thenumberoftimessound issampleddetermineshowaccuratelythedigitalcopyrepresentstherealsound.The Shannon-NyquistTheorem [38]statesthattoaccuratelyrepresentasignalindigital formonemustsampleatleasttwotimesthemaximumfrequency.Acommonsample ratefoundinmp3lesis44.1Khz.Sincenormalhumanhearingtendstorunbetween 20Hzand20Khz[38]thismakesforagoodsampleratebecausebytheNyquistrulewe have1 = 2 44100 22 Khz beingthemaximumaudiblefrequency.Suchanencoding providesuswithawelldeneddiscretizationofananalogsoundwave.Wenowhave enoughinformationtobreakdowntheoriginalsound. 2.3Synesthesia Thepsychologicalconditionknownas synesthesia involveswhatisknownasa crossmodalityofsenses[36],wheretheso-called Synesthete experiencesausually involuntaryoverlapbetweensomecombinationofhearing,sightandtaste.Forexampleonemayquiteliterallytastecolor,smellsoundormoreimportantly seesound 8

PAGE 20

WedonotoerarigorousstudyinPsychology,moreoverweintendtoleverageelementsofresearch,particularityinthevisualizationofsound,asaroadmaptoward whatmaybeamorescienticapproachtovisualizingmusic.Somecompellingresultslendcredencetoourthesisthatthereexistsauniversalinterpretationofsound. Thecoreofourconjectureistheideathatonecanseethephysicalshape"ofmusicandexperiencevisualsinaculturallyneutralfashion.Theresearch,asitturns out,seemstosupportsuchanidea.TheworkofCytowicandWoodinthe1970's suggestsarelationshipbetweensynesthetesandso-called normals thosewithout synesthesia[36,37].Theirresearchdemonstratesalikelyconnectionbetweensynesthetesandnormals,whichsuggeststhatcolorsareintuitivelyassociatedwithcertain sounds.Zellneretal.suggeststhataversioncalled weaksynesthesia isexperienced bymostpeopleasopposedto strongsynesthesia onlyexperiencedbySynesthetes[16]. Inadditiontocolor,Synesthetestypicallyvisualizeashape,whichisreferredtoasa photism [37].Thetermphotismisusedofteninthepsychologicalcirclesandrefers tothegeometriesandcolorseenbysynestheteswhenhearingaparticularfrequency ormelody.Aphotismisdenedasahallucinatedpatchoflight"[3].Amongstthe researchofsynesthetes,aphotismisusedtodescribethestimuliwhenpresentedwith atone,chordorcomplexmelody. 9

PAGE 21

abbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbc d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d Anoteabouttheauthor Theconceptofphotismcanbediculttoexplain.WhenIwasyoung Iusedtoexperienceastunninglychoreographedarrayofcolorandshapes wheneverIwouldhearanyofmyfavoritemusic.MostcommonlyIwould experienceavortexofspinninggradientsthatchangedinbrightnessand huesynchronizedtothetempo.Sometimesthesevorticeswouldchangehomomorphicallyintootherpolytopes.Theexperiencewasinvoluntaryand intenseforseveralyearsbutbegantoslowlyfadeasIaged.Ihadnotbeen convincedIhadexperiencedSynesthesiauntilIreadtheworksofCarol BergfeldMillsetal.[5]wheretheexactconditionsIexperiencedwerereportedbyothers.AtthestartofourresearchIcouldnothelpbutfeel unsurprisedwhenlearningwhichcolorswouldcommonlybechosentorepresentvariousfrequencies.Iwasequallyunsurprisedwhenpresentedwith evidenceofwhichshapesweremostcommonlyselected.Now,Iunderstand why.Although,Inolongerpossessaninvoluntaryresponse,Idooftenpurposefullyvisualizesimilarphotismswhenhearingapleasingmacroharmony. e e e e e e e e e e e e e e e e e e e e e e e e e e e e e e e e e e e e e fggggggggggggggggggggggggggggggggggggggh Asitturnsouttheideaofvisualizingsoundisnotnew,inthe1930'sOttoOrtmannmappedoutseveralchartsattemptingtodenerangesandtonesandtheir correspondingcolorleveragingtheworkofErasmusDarwinandIsaacNewton whobothpredictedtheexistenceofapitchcolorscale[32,36].Wefocusmuch ofourresearchontheassociationoftoneandfrequencythatwefocusmuchofour research. Coloredhearing asitiscalled,isbelievedtobethemostcommon[36]and mostoftenpresentswhenplayedsometonethatlastsformorethan3seconds[5]. Throughoutmuchofthepublishedworksoncoloredhearingthereisacommontheme, thatistosay,withoutpriorknowledgeasubstantialportionofsynesthetesexperi10

PAGE 22

enceacommonscaleofcolorrangingfrom26%to82%concurrenceandphotisms whosesizeandscalesashighas98%concurrencebetweensynesthetes[5,8,16,31]. Conveniently,thegeneralizedexperienceincolorandfrequencycanbeheuristically mappedtoasimplealgorithm.FromtheworksofKonstantinaOrlandatou[31]we havetherangesof0-50Hzasmostlyblack,50-700Hzasmostlywhite,and700Hz to3Khzismostlyyellow.Orlandatoualsonotesthatthereisaclearassociationbetweenpuretoneandsingularcoloringandviceversa.WeseefromLawrenceMarks thatthereisaquantiablerelationshipbetweenamplitudeanddimensionofaphotism[27].Ithasalsobeenobservedthatnoiseisgenerallyachromatic 2 [5,31].We alsoseeinmultiplestudies[16,31]thatpuretonesareoftenseenasyellowandred whereasasawtoothtoneisseenasgreenorbrown.Thiscanbeassociatedwith theharmonicstepsinamelody.Finally,weseethatthereisa mood associatedwith somecoloring,whichmaypresentachallengeifnotfortheworkofDimitriTymoczko Section2.1whereintheoverallbehaviorofamacroharmonycanbedecodedwith theuseofsomemusicgeometry.ItisfromtheseworksofPsychologyandSynesthesiathatwederivethefollowingtables,whichwillactasouralgorithmguidegoing forward.ThecontentsofTables2.2and2.3havebeenconstructedbyconsolidating theworksof[5,8,16,19,27,31,32,36,37].ThesemappingsrepresentdirectpsychologicalexperimentswithourowninterpolationsandstatisticalaveragingofSynesthete responses. 2 Beingwithoutcolororblack/white. 11

PAGE 23

Table2.2: SynesthesiaNoteColorAssociation BaseFrequencyMap N a Note RGBColor C/C# blue D/D# red E yellow F/F# brown G/G# green A/A# green B black Table2.3: SynesthesiaToneColorMapping AdditiveFrequencyMapFaPatternMapPa FrequencyRange Note Color 0-50Hz All black 50-700Hz All white 700-22KHz All yellow FrequencyPattern ColorEect HarmonicStairStep green 12

PAGE 24

2.3.1AColoredHearingTheorem Wewillnowderiveafewtheoremsallowingustoextractanumericcolorvalue fromafrequency. Theorem1 TheRoswellTheorem Thereexistsaproportionatemappingsuchthat anincreaseinnoisetakesanycolortowardthegraycolorscale. Proof: Recallthelinearinterpolationequation )]TJ/F20 11.9552 Tf 9.84 0 Td [(t P 1 + tP 2 with P 1 ;P 2 vectors in R m .Thevalueof t rangesfrom0to1beingapercentageofthetotaldistance betweenvectors P 1 and P 2 .Let P 1 bean R 3 vectoroftheform r;g;b wherethe values r;g;b rangefrom0to255and P 2 = ; 128 ; 128thegraycolor.Let representthetotalnoiselevelofasignal.Ifwecompute t = j j max j j thenwehavea ratioof0to1overtherangeofthenoise.Ifwesubstitute t backintotheinterpolation equation )]TJ/F20 11.9552 Tf 11.492 0 Td [(t P 1 + t ; 128 ; 128wehavealinearinterpolation,whichtransitions anycolortowardgrayasnoiseincreases. Figure2.3: Hue,Saturation,BrightnessColorScale[23] 13

PAGE 25

Theorem2 TheStripesTheorem Denethebinaryoperators and tobe additivecolor 3 and colorintensity operations,respectively.Anymusicalfrequency a can bemappedtoacolorconsistentwithColoredHearing"usingtheequation r;g;b = )]TJ/F20 11.9552 Tf 11.956 0 Td [(t [ H a N a F a ]+ t ; 128 ; 128 where t = j j max j j and isthenoiselevel. .1 Proof: Let H a beamappingofthefundamentalfrequencytoanHSV 4 color fromTable2.2and F a beamappingfromanyfrequencyrangetoanHSVcolor h;s;v inTable2.3.Whenwemixcolorswith h;s;v j;t;w = h + j = 2 mod 360 ; s + t = 2 mod 1+ ; v + w = 2 mod 1+ .2 accordingtothesurfaceoftheconeinFigure2.3,thenscaletheresultof.2 accordingtoharmonic k = H a suchthat k h;s;v = h; )]TJ/F20 11.9552 Tf 11.955 0 Td [(k =max k ;k=max k : .3 Theresultof2.3isacolorcombinationoftheobservednote-colorandgeneral frequency-colormatchingthatisbrighterforhigherfrequenciesanddarkerforlower. Assumingthat mapstoanRGB 5 value,weplugtheresultof.3intoTheorem1 andtheoutputisacolorthatsimulatesaSynesthetes ColoredHearing response. Figures2.4,2.5and2.6illustratecolorsamplesusingtheStripesTheorem, whichweregeneratedoverthefrequencyrange16to22.05Khz.InFigure2.6we seeadarkenedversionoftheblendedcolorsovertherangesof16to31Hz.We 4 HSVorHSBisahue,saturation,brightnesscolorscalewhere0 saturation;brightness 1 and0 hue 360[15]. 5 RGBisared,green,bluecolorscaleusedcommonlyincomputergraphicswhere0 r;g;b 255 [35]. 14

PAGE 26

Figure2.4: TiggerStripes,16to31HzNoNoise Figure2.5: TiggerStripes,16to22.05KhzNoNoise Figure2.6: TiggerStripes,16to22.05Khz,NoNoiseleft,50%Noiseright thengeneratecolorsfor 22KhzshowninFigure2.5wherewenoticeaclear transitionfromdarkertolightertones,whichisconsistentwithourproof.Finally,in Figure2.6weadd50%noisetothefrequencyspectrumrightside,whichcausesa cleartransitiontowardthegrayscaleconsistentwithTheorem1. 15

PAGE 27

3.ProofofConcept 3.1DiscoveryandApproach Itrequiressomanytechnicalelementstosolvetheproblemofvisualizingsound, somuchsothatthequestionofhowtobeginposesasignicantchallenge.Wemust contendwiththelogisticsofparsingandinterpretingPCM,readingfromvarious devices,themeanswithwhichwecandisplaygraphicalinformationandallofthe structuresandutilitiesnecessarytocalculateandrender.Aswithanyjourney,we musttakearststepandwhatbettersteptotakethanasimpleend-to-endprototypethatcanreadasoundleandgeneratesomesortofmappedgraphicalimagery usingonlythetimedomaininformation.Thisprototype,orproofofconceptifyou will,allowsustolearnafewthingsaboutourdata.Weconstructasoundprocessing frameworkintheMicrosoftWindowsenvironmentusingC#.Net.Thislanguagewas chosenprimarilyforit'shighperformancecapabilityandexiblesyntaxwhichallows ustoleverageoperatoroverloadingtomoreeasilyhandlemathematicalstructures. Wearealsoabletoutilizetheraw struct syntaxthatprovidesonstackmemoryallocationasopposedtodynamicmemorythatprovidessignicantperformancedecrease whendealingwithrandomaccess.The SoundBusFramework ,aswecallit,isaprocessingframeworkthatutilizesananimationplug-insystemwhereeachanimation plug-inactsasaninterfacethatcanreceivebothsounddatamessagesandrequeststo rendertheircurrentcontent.Thisallowsustoexperimentwithdierentalgorithms withoutlosinganypreviouswork.Weemploythe MicrosoftXNA asagraphicsapplicationprograminterface API thatallowsustospeaktothegraphicsprocessing unitGPU.The NAudio soundprocessingpackagethatactsanAPIconnectingour systemtothesoundinputdevice,freesusfromhavingtoimplementadevicedriver ordecompressionalgorithm.Ourimplementationiscapableofseamlesslyprocessing rawpulsedatafromarawMP3,.wavle,ordirectmicrophoneinput.Atrst,we intendedonprovidingaone-to-onemappingofimpulsetographicalelement.There 16

PAGE 28

areseveralchallengeswhendealingwithanattempttosynchronizethevisualization ofsoundandimagery.At44Kimpulsespersecond,a100Hzrefreshanddrawingone imageperframethebackloggrowsarithmeticallyas t = t + )]TJ/F15 11.9552 Tf 9.299 0 Td [(10000 t Evenifweincreaseto100imagesperframeafter10secondswehaveabacklogof =341000imagestobedrawn.Asaconsequencewecanneverkeepupwith thesoundthatisplayinginreal-timewithoutcreatingtremendousclutteronscreen orsimplyrenderinganimagetheuserneveractuallysees.Thus,weabandonthe one-to-onerasterization 1 butnottheone-to-onecalculation.Ourframeworkprovides astreamofsoundinformationtoaninterfacedesignedtoprocessindividualsamples atatime.Simultaneouslythereexistsanotherinterfacemechanismthatiscalled ona30FPSintervaltorefreshthecomputerdisplay.Inordertokeepthedierent streamssynchronizedweemployatimersystemthatcalculatesthecurrenttemporal backlogandushesdatatothegraphicscalculation.Theprocessingimplementation systemthenchewsoindividualdatablocksandcontinuouslyincorporatesthedata intoasetofrunningparametersforourgeometricgures.Atanygiventime,the interfaceisaskedtorenderitselfinits currentstate .Theendresultisthatwereceive auidanimationthatgenerallymirrorsthepaceofthesoundandneithergetstoo farbehind,nortoofaraheadifreadingfromasoundle. 3.1.1ATimeDomainExperiment Theimagesshownintheupcomingresultssectionarecreatedusingthefollowing approach:weprimarilytakeadvantageofstatisticalcharacteristicsofasoundwavein thetimedomain.TheimplementationreceivesthePCMovertimeandparameterizes asetofsimpledihedralgeometricgureswhoseedgesaredrawninstagesovertime baseduponinitialparameters.Webeginwiththeset X = f x j x 2 Z ; )]TJ/F15 11.9552 Tf 9.299 0 Td [(2 k x 2 k g thatdescribestheamplitudedata.Tominimizeclutterwelimitthetotalnumber ofanimationsonscreenatanytimeto n 2 Z + .Inourtimesamplingwedealwith 1 Atermincomputergraphicsthatdescribesconvertingmemoryelementstoscreenpixels[35]. 17

PAGE 29

hundredsofthousandsofsamplesinjustafewseconds.Wemustlimittheelements generatedsoastonotoverburdenthegraphicssystemandourCPU.Experimentally, wechoseahardlimitof200itemswhichweshalladjustlaterasneeded.Let S = f s k j 0
PAGE 30

Table3.1: TimeDomainInitialParameterizations Parameter Value Description x Currentamplitude x k )]TJ/F18 7.9701 Tf 6.586 0 Td [(1 Previousamplitude SignaltoNoiseRatio g Gain j x= 255 2 j Startingangle + j x k )]TJ/F18 7.9701 Tf 6.586 0 Td [(1 = 255 2 j Endingangle Currentangle r x= max x k 2 Radius v )]TJ/F20 11.9552 Tf 11.955 0 Td [( = 30 Rotationalvelocity ColorRed f n x mod 255 RGBRedValue ColorGreen x k )]TJ/F18 7.9701 Tf 6.586 0 Td [(1 mod 255 RGBGreenValue ColorBlue 255 mod 255 RGBBlueValue P x f n rand +2 g )]TJ/F15 11.9552 Tf 11.955 0 Td [(1 CentroidX P y f n rand +2 g )]TJ/F15 11.9552 Tf 11.955 0 Td [(1 CentroidY P z f n rand +4 g )]TJ/F15 11.9552 Tf 11.955 0 Td [(1 CentroidZ thetimeitiscreated.Asanadditionalvisualelementwealsosetagradienttonefor thebackgroundbaseduponthecurrentsignalstrengthwhere BackgroundRGB = ; 0 ;f n E [ X ] k )]TJ/F20 11.9552 Tf 12.067 0 Td [(E [ X ] k )]TJ/F18 7.9701 Tf 6.586 0 Td [(1 =E [ X ] k mod 128with E [ X ]beingtheexpectedvalue. Notethatourdisplayrotatestheentireviewmatrixaboutthe y -axisassuming y pointsnorthveryslowlyinacounterclockwisedirection.Therotationangleis associatedwithanaverageofasubsetconsistingofrecentamplitudesinratiotothe maximum. 19

PAGE 31

Figure3.1: ExampleSetofParameterizedGeometricFigures 3.1.2InitialResults Ourapplicationwasrunagainstahandfulofmusicleswhich,inthiscasecame fromsymphonymusicdownloadedfromtheinternet.Tousetheapplicationone simplyselectstheinputsource,inthiscaseanMP3le,andthenpressesplay. Figure3.2: Symphony-SoundDogs 20

PAGE 32

Figure3.3: SymphonyBeethoven12'thSymphony,BeethovenViolinSonata,Schubert'sMoment Musical Figure3.4: TechnoElectronica-TermiteSerenity,NaoTokui,Shenebula,TermiteNeurology Figure3.5: VariousComposers-DebussayClairdeLune,MozartEineKleine,Mozart Sonata,ChopinEtude 21

PAGE 33

Figures3.2,3.3,3.4and3.5illustratescreencapturestakenfromourapplication whileplayingthespeciedmusic.Ifoneobservestheimages,particularlySchubert's SonatafromFigure3.3onecanseetheformationofaconicstructurecomposedof successivegeometricgures.Weconjectureweareseeingthephysicalshapeofthe waveformoversomeduration.Thisbehaviormanifestsitselfthroughoutmostsongs. Thoughmathematicallywehaveshownthatourvisualizationsareinfactadirect resultofthe shape and behavior oftherawtimeseriesitisverydiculttoqualify thatweareseeinganybehaviorthatmirrorstheunderlyingtonalityormelody.The imageryiscaptivatingandinterspersedwithbriefmomentsofmelodicmimicryand synchronizationbutitisnotasucientdemonstrationofourthesis.Wedidhowever accomplishtheinitialgoalofconstructingasoftwareframeworkformovingforward. PortionsoftheprocessingsourcecodecanbefoundinappendicesA.0.10,A.0.11and A.0.12. 3.1.3Approach Itistimetoleverageourresearchandcontinuetryingtoextractwhatmakesthe music behave thewayitdoes.Thus,wemayattempttoincorporatesuchbehavior intoourvisuals.Inordertodothis,wedeviseaplanforouroverallmodelillustrated asaroadmapinFigure3.6.TheplaninFigure3.6allowsustohandleeachstage ofthetransformationwiththelevelofrigorwedeemnecessaryorpossiblewithin thescopeofthisthesis.Weoeramodularapproachtotheprocessingpipeline insofaraswebreakouralgorithmintoseveralparts.Eachstageyieldsaclearoutput thatactsasinputtothenextstage.Wedothiswiththeknowledgethatwecan bothimproveeachstageindependentlyandaddtuningandparameteradjustment forknownshortcomings.Wedonotabandonacohesivemathematicalmodelacross modules,nordoweassumethatinputs/outputsaremutuallyexclusive.Wemerely strivetobreakapartthechallengesandallowindependentresearchandimprovement. Theideabeing,whenonecomponentofthealgorithmpipelineimproves,othersdo 22

PAGE 34

Figure3.6: Researchapproach aswell.Garbagein,garbageoutandviceversa,sotospeak.Therealbeautyisthat wecanndpartialsolutionstoanalgorithmandstillmoveontothenextalgorithm. Thisisveryimportantbecausetherearenoperfectsolutionswithdetectionand characterization,discussedinSections4.1and4.2. 3.1.4AnimationTimeBudget Asnotedearlier,wemustbeabletoprocessdatainrealtimeandthismustbe donequicklyenoughsoastonottolagtoofarbehindthemusic.Weproposeaninitial timebudget consistingof1.5secondsfromtimesampletovisualization.Thetime budgetactsasaguideonhowtotuneperformanceandaccuracyofanalgorithm.For example,wemaysacriceaccuracyinacalculationthattakesminutesandisdone rarelybutisveryslow.Theconceptofthetimebudgetisnotnew;theMicrosoft XNAgraphicsenvironmentprovidesagameclockwherepathsinthecodecanmake choicestoskipactionsonthecurrentcycle,ordoubleupiflotsoftimeisavailable. Ourtimebudget,showninTable3.2,isaguesstimatebaseduponexperimentation fromSection3.1andresearchregardingtheFouriertransform.The timebudget isa softrequirementthatallowsustothinkoftheentireprocessingpipelineasacohesive functionsowedonotlosesightofwhereweareintime.Itiseasytodriftowhen focusingononeareaandforgetthatitisasmallaggregateofalargercalculation. 23

PAGE 35

Ultimately,weexpecttoseelargedeviationsfromourinitialguessbutwemustbegin somewhere. Table3.2: AnimationTimeBudget Step AllottedTimemilliseconds FrequencyDetection 600 NoteCharacterization 100 ChordCharacterization 200 GeometryAnalysis 100 MelodyAnalysis 100 GraphicsProcessing/Rendering 200 Total 1.3Seconds 24

PAGE 36

4.ResearchandDevelopment 4.1FourierAnalysisandFrequencyDetection Todetect,isolateandcharacterizethecontentsofamusicsignalwemustbe ableextractthetonesfromatimeseries.Thisimpliesthatwerequirefrequency informationfromthetimedata.Thebasisforthisassumptionisfromasimpleconcept proposedbyBello[22]thatimplieswemustbeabletodeterminetwomajorproperties ofanytimedistribution.First,weidentifythe signicant frequencieswithinthe sampledistribution,specicallythoseassociatedwithmusicaltones.Second,we identifytheeventtimeofeachfrequency.Belloproposesthreekeyvalues pitch, onset,andduration which,wewillutilizeimplicitlylateron.Forthetimebeing, weconstrainouranalysistoaonesecondintervalandusetheDiscreteTimeFourier Transform DTFT [6,38]toextractfrequencyinformationfromatimeseries.We chooseourtimeintervaltobeonesecondwhich,isbaseduponSynesthesiaresearch statingthatsub-secondintervalsoftoneareunlikelytoinvokearesponseandlonger intervalsdonotchangetheresponse[33].Thisalsodecreasesourmathematical complexityandimplementationcomplexity.Themodelforextractingtonesthat includesallfrequenciesoftheform f n a = a n where a 2 R + ;n 2 Z + .1 iseveryvalueof n thatproducesaharmonicofthefundamentalfrequency a .Alist ofallthefundamentalfrequenciesisshowninTable4.1.Werefertothefundamental foraparticularnoteas f 0 butwemustintroduceafewnewformsofnotation.The followingismorecomputersciencethanmathematicalandishowwewillreferencea namednote"asafunction f 0 0 A 0 =27 : 5 ;f 1 0 A # 0 =58 : 28andsoon.Wewillalso usethemorecommonnotationforaharmonicfrequencycommonlyfoundinmusic textswhichis ,eg: A 4whichimplies27 : 5 2 4 1 1 A 4 = f 4 0 A 0 isthefourthharmonicofthesetoffrequencies f k 0 A 0 = f 440 k= 12 j)]TJ/F8 9.9626 Tf 17.035 0 Td [(48 k 39 g [22][38]. 25

PAGE 37

Table4.1: FundamentalFrequencies[7,38] Note FrequencyHz Wavelength C 0 16.35 2109.89 C # 0 17.32 1991.47 D 0 18.35 1879.69 D # 0 19.45 1774.20 E 0 20.60 1674.62 F 0 21.83 1580.63 F # 0 23.12 1491.91 G 0 24.50 1408.18 G # 0 25.96 1329.14 A 0 27.50 1254.55 A # 0 29.14 1184.13 B 0 30.87 1117.67 26

PAGE 38

Denition1 MusicFundamentalSet Let F bethesetoffundamentalfrequencies where F = f f 0 0 C 0 ;f 0 0 C # 0 ;:::;f 0 0 B 0 g asshowninTable4.1. Denition2 MusicalHarmonicSet Let M bethesetofallmusicalharmonic frequencieswhere M = f f 0 0 C 0 2 k ;f 0 0 C # 0 2 k ;:::;f 0 0 B 0 2 k ; 8 k 0 g Usingourdenitions 2 andknowingthespecicrangeswearetargeting,wecan designouranalysisinsuchawayastoextractaspecicsubsetofthetimedomain. Wewillbedealingwithunmodulateddata ie:thereisnocarrierwaveorshift keying andassumeamaximumsamplerateof44,100Hz[2][38]. 4.1.1UnderstandingtheDFT Mathematicallyspeaking,whatisawave?Toanswerthiswemustrstexamine thecosinefunction.BackinFigure2.2wesawa3Hzwave.Ifweinspectthe cosinefunctionoversomeperiodoftime t youhavethemapping f : R R where x = f t = acos t with f x j)]TJ/F15 11.9552 Tf 18.357 0 Td [(1 x 1 g andthepeakandtroughofthefunction areamaximumandminimumof a ,illustratedinFigure4.1with a =1.Remember thecosinefunctionrisesandfallssymmetricallyovertheabscissaandpeaktopeak measurementsarecongruent.NowexamineFigure4.2,whichillustratesaverysimple exampleofhowaseriesofcosinevaluesbeingaddedcanproduceanewwaveform. Supposethatinsteadof f t = cos t youhad t =2 kn or f n = cos kn where k issomeconstant thefrequency and n issomerealnumberthatiterates overallpossiblevaluesofthefunctionforthatdesiredfrequency.Forthepurposes ofFigure4.2, k wouldbeequaltounitythusyielding x = cos 1 .Interestingly enoughwehaveaddedthreewavestogether,eachwavebeingasinglecycle/frequency ofamplitudes2,3,5respectively.Ingeneral2,3and5couldbemeasurementsof voltage,powerorsomeotherratioofchangebetweentwovaluessuchasDecibels 2 Toassistthereaderwetrytomaintainconsistencyanddenitionmnemonic.Noticethat F is thefundamentalfrequencysetand M isallmusicalfrequencies.Wheneverpossible,westickwith consistentvariablenamesforfrequency,iterants,sets,etc... 27


Figure 4.1: Cosine Function x = cos(t)

Figure 4.2: x = 2cos(t) + 3cos(t) = 5cos(t)

For the moment we ignore intensity and assume our power levels are at unit. Figure 4.3 is a pictographic representation of a random wave. Note that this wave is not accurate in terms of its structure but for intuition only. This image depicts how two waves of different frequencies are able to be added together to form a new wave, just as in the previous example. Using Figure 4.2 as a starting point one might see that we can reverse engineer the presence of any original wave within another. How do we do this? By dividing out each frequency we perceive to be present in the original time series, as illustrated in Figure 4.4.

Thus, take the set X = {cos(2 pi n) + cos(2 pi 2n), 0 <= n <= 1}. How would we determine if a sinusoid of frequency 2 Hz lives within this wave? Essentially, we would want to ask the wave at every point how much the function f(n) = cos(2 pi 2n) matches, in what amounts to a cross correlation between the time series and the complex function.


Remember, the complex function iterates over all possible values of a wave at a particular frequency, so comparing it to the original time series X at the same intervals results in a correlation response. However, we do not subtract; we divide by our search wave at each interval and sum the results. This generates what is in effect a correlation value whose magnitude is a measure of the presence of our desired frequency. For example, if we were looking for 2 Hz signals in the original wave we could perform the following

    Presence(2 Hz) = \sum_{all n} X(n) / cos(2 pi 2n)        (4.2)

Figure 4.3: Random Waveform of More Than One Frequency (Note: Not an accurate graph)

Figure 4.4: Dividing Frequencies, a + bi \in \mathbb{C}

Recall Euler's formula e^{ix} = cos(x) + i sin(x), where i is the imaginary unit. Take note that if we were to divide by the cosine only, we'd have our original Equation 4.2. However, Figure 4.5 depicts the complex plane and the behavior of Euler's formula. The function Z = R e^{-i 2 pi k n} traces a curve in the complex plane, which results in a perfect circle about the origin of radius R.


Within the complex plane itself, it is not apparent how k affects the wave, thus we extend along a 4th axis with k = f_0(t). We then use an orthogonal projection onto some affine plane parallel with the imaginary axis. One can see that the image under the projection is the 2D sinusoidal waveform similar to Equation 4.2, the difference being that now we have encoded phase information into the result of our division.

Figure 4.5: Geometry of Complex Frequency, C = Constant Amplitude/Radius, k = Frequency, n = Real valued coefficient.

From this model it follows naturally to substitute complex division for our real division, and the resulting mapping is known as the Discrete Fourier Transform or DFT. Then

    X(k) = \sum_{n=0}^{N-1} X(n) e^{-i 2 \pi k n / N}        (4.3)

where X(k) is a set of complex numbers whose magnitudes signify the presence of a frequency k in the original signal. Additional references for the various flavors of the DFT and FFT can be found in [4,6,11,17,38,40,41].
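As a reference point for what follows, a direct O(N^2) evaluation of Equation 4.3 takes only a few lines of C#. This is our own illustrative sketch, not the thesis implementation of Appendix A.0.15:

    using System;
    using System.Numerics;

    static class Dft
    {
        // Equation 4.3: X(k) = sum over n of x(n) e^{-i 2 pi k n / N}.
        // |Bin(x, k)| measures the presence of frequency bin k in x.
        public static Complex Bin(double[] x, int k)
        {
            int n = x.Length;
            Complex sum = Complex.Zero;
            for (int t = 0; t < n; t++)
            {
                double angle = -2.0 * Math.PI * k * t / n;
                sum += x[t] * new Complex(Math.Cos(angle), Math.Sin(angle));
            }
            return sum;
        }
    }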


4.1.2 A Parallelized Fourier Transform

The Fast Fourier Transform is predicated on a number theoretic approach to factorizing the DFT into smaller pieces. In general this allows us to improve the computational complexity by extracting factors from an inner summand and performing those multiplications a single time, thus reducing our flops from O(N^2) to 2N log_2 N [17]. Although there are several libraries that have full implementations of the FFT, we attempt to derive our own mathematical model here: in part to allow us to explore the behavior of the computation, but also because we assume we may need to fine tune the computation to address specifics of our detection algorithm. To maximize performance on modern hardware we take advantage of this idea on two fronts: (a) by removing factors from the inner summand and (b) by creating a computation that can be parallelized. When dealing with the Fourier transform it is common to define a constant W_N = e^{-i 2 \pi / N}, which allows us to deal with a compact form of the DFT as X(k) = \sum_{n=0}^{N-1} X(n) W_N^{nk}. Taking ideas from the work by James Cooley and John Tukey [17] and others [4,10,41], we derive a simpler version of the Fast Fourier Transform (FFT) in Theorem 3.

Theorem 3 (A Parallelizable Fast Fourier Transform):

    f(b) = f(a, N_1, N_2, k) = \sum_{a=0}^{N_2 - 1} X(a + b N_2) W_N^{ak}

    X(k) = (1/N) \sum_{b=0}^{N_1 - 1} f(b) W_N^{b N_2 k}        (4.4)

Proof: Let N, N_1, N_2 \in \mathbb{Z}. We obtain N_1, N_2 by factoring N such that N = N_1 N_2. Using the N-th roots of unity W_N^{nk}, let n = a + b N_2 with integers a, b.


By the division algorithm, we have mappings in a = {0, 1, 2, ..., N_2 - 1} and b = {0, 1, 2, ..., N_1 - 1}. The key is to notice that a is cyclic in N_2 and b is cyclic in N_1 [17]. This is essentially the same as a two dimensional expansion of the single dimensioned value n. Observe, when we break apart the DFT using our two dimensional index scheme we have

    X(k) = \sum_{b=0}^{N_1 - 1} \sum_{a=0}^{N_2 - 1} X(a + b N_2) W_N^{(a + b N_2) k}

If we apply some basic algebra and factor, we end up with W_N^{(a + b N_2) k} = W_N^{ak} W_N^{b N_2 k}, which ultimately yields

    X(k) = \sum_{b=0}^{N_1 - 1} \sum_{a=0}^{N_2 - 1} X(a + b N_2) W_N^{ak} W_N^{b N_2 k}

Notice, the only changes in the inner summand are a and k, which means we can extract a function

    f(b) = f(a, N_1, N_2, k) = \sum_{a=0}^{N_2 - 1} X(a + b N_2) W_N^{ak}        (4.5)

If we plug our function back in, and normalize the result by the total sample count 1/N, we get

    X(k) = (1/N) \sum_{b=0}^{N_1 - 1} f(b) W_N^{b N_2 k}        (4.6)

Our derived FFT is approximately 7 N_2 + 7 N_1 flops per search frequency with this factorization and is computationally more complex than the standard DFT. If we let each evaluation of e^{-i 2 \pi n k / N} x_1 x_2 approximate 6 flops, assuming Euler's Equation e^{ix} = cos(x) + i sin(x) [48], the original DFT ranges over N elements and N frequencies, which implies (6+1) N N = 7 N^2. Observe that Equation 4.5 is approximately 6+1 flops ranging over N_2 and 6+1 flops ranging over the outer summand N_1, which yields N_1 (N_2 7) + 7 N_1. Given N search frequencies we have N(N_1 N_2 7 + 7 N_1) = 7 N^2 + 7 N_1 N total flops.


Although the increase in flops seems to be a computational loss, we have an overall gain as we have extracted a memory contiguous inner summand with no external dependencies. This calculation allows us to provide equal sized contiguous blocks of RAM to independent threads, providing we factor evenly. Contiguous blocks of independent data per thread minimize cache contention and decrease the overall overhead associated with locking and context switching [20]. Algorithm 1 approaches the parallelization of Equation 4.6 by creating a process/thread to execute N_2 multiplications and additions each and returning the partial summand, which we then add to a synchronized accumulation variable.


Algorithm 1: Parallel Fourier Transform

    Shared:  searchRf {search frequencies}, X {PCM}, c <- 0
    Private: N <- Length(X), {N1, N2} <- Factor(N)

    For i <- 0 To Length(searchRf)
        c <- 0
        k <- searchRf[i]
        For b <- 0 To N1 - 1, In Parallel
            Private t <- 0
            Consume c into t
            Produce c <- t + f(b) e^{-i 2 pi b N2 k / N}
        End Parallel
        Barrier
        X[floor(k)] <- c (1/N)
    Next

4.1.3 A Synthetic Test

Before we empirically analyze the performance, we will verify that our parallelized Fourier model will correctly extract a frequency distribution. First, we construct a signal synthesizer utility that generates an artificial signal at the desired frequency values F = {f_1, f_2, ..., f_k}. Then, using Equation 4.1 and Equation 4.7, we can interlace those frequencies into a time series. Given a \in \mathbb{R} and k, n \in \mathbb{Z},

    X(n) = \sum_{all k} a cos(2 \pi F(k) n / N)        (4.7)

where a is the amplitude, n is the current sample and N is the sample rate. Figure 4.6 shows our target signal, fabricated with only a single fundamental f_0(A_0) and 12 harmonics. The source code for the Fourier transform and Synthesizer can be found in Appendix A.0.15 and A.0.14.

Figure 4.6: Synthesized Signal at 27.5 \cdot 2^k, {0 <= k <= 11}
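A one-second rendering of Equation 4.7 is straightforward to sketch. The following illustrative C# is ours, not the Appendix A.0.14 synthesizer; it interlaces the tones of F at sample rate N:

    using System;

    static class Synth
    {
        // Equation 4.7: X(n) = sum over k of a * cos(2 pi F(k) n / N).
        // freqs and amps hold the target tones and their amplitudes;
        // n walks one second of samples at the given sample rate N.
        public static double[] OneSecond(double[] freqs, double[] amps, int sampleRate)
        {
            double[] x = new double[sampleRate];
            for (int n = 0; n < sampleRate; n++)
                for (int k = 0; k < freqs.Length; k++)
                    x[n] += amps[k] * Math.Cos(2.0 * Math.PI * freqs[k] * n / sampleRate);
            return x;
        }
    }

For example, Synth.OneSecond(new[] { 27.5, 55.0 }, new[] { 1.0, 1.0 }, 44100) fabricates one second of f_0(A_0) plus its first harmonic.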

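One plausible realization of Algorithm 1 in C# uses Parallel.For so that each thread owns the contiguous block X[b N2 .. b N2 + N2 - 1]; the lock plays the role of the synchronized accumulation variable. This is a sketch under the assumption that N factors evenly as N1 N2, not the tuned implementation of Appendix A.0.15:

    using System;
    using System.Numerics;
    using System.Threading.Tasks;

    static class ParallelDft
    {
        // Computes one search frequency k per Equations 4.5/4.6:
        // each thread builds the inner summand f(b) over its own block,
        // applies the outer twiddle W_N^{b N2 k}, and folds the partial
        // into a synchronized accumulator.
        public static Complex Bin(double[] x, int n1, int n2, double k)
        {
            int n = n1 * n2;                    // assumes x.Length == n1 * n2
            Complex total = Complex.Zero;
            object gate = new object();         // the synchronized accumulation variable

            Parallel.For(0, n1, b =>
            {
                Complex fb = Complex.Zero;
                for (int a = 0; a < n2; a++)    // inner summand: X(a + b N2) W_N^{ak}
                {
                    double angle = -2.0 * Math.PI * a * k / n;
                    fb += x[a + b * n2] * Complex.FromPolarCoordinates(1.0, angle);
                }
                double outer = -2.0 * Math.PI * b * n2 * k / n;  // W_N^{b N2 k}
                Complex part = fb * Complex.FromPolarCoordinates(1.0, outer);
                lock (gate) { total += part; }
            });

            return total / n;                   // the 1/N normalization of Equation 4.6
        }
    }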

We compare our new Fourier model in Equation 4.6 with the standard DFT using the signal in Figure 4.6 to determine if the results are equivalent. Figure 4.7 shows the amplitude measurements for each known frequency are the same between the DFT and the new model. Confident that our algorithms are producing the same peaks, we now contrast their overall performance. We executed each algorithm 5 times and averaged their speed using a high precision software timer. The results are shown in Table 4.2.

Figure 4.7: Comparison of Standard DFT to Parallelized FFT

Table 4.2: DFT vs FFT Performance

    Algorithm           Time (ms)
    Parallelized FFT    72.2
    Standard DFT        192.4

Table 4.2 was computed with f_0(A_0) and 12 harmonics on an Intel 3.5 GHz i7 with 16 GB SDRAM on Windows 7 Professional 64 bit. We compute the overall performance with Speedup = T_1 / T_p [20], which yields 192.4 / 72.2 = 2.66.


Thus our algorithm is ~166% faster than the standard DFT. This will be helpful when trying to extract a large number of frequencies in a timely fashion.

Before further analysis, we must include another calculation in our Fourier model in order to deal with issues that arise during the sub-sampling of a time distribution. We apply a technique known as windowing, which minimizes the effect called spectral leakage [6,22,30]. The leakage is in response to the transform being applied to a time series in partial chunks, in our case 1 second intervals. When we sample a portion of a time series it has been observed that high frequency aliasing may appear at the seams of the sample space [6,30]. To compensate, the window function allows us to partially repair this leakage by smoothing the transition. This technique takes many possible forms and we have chosen the Hanning (or Hann) window. Figure 4.8 illustrates the graph of a Hanning window, which is essentially the haversine function. Although there are many different types of windowing functions, this one is known to be effective in musical transcription [6,30]. Equation 4.8 shows the calculation adjusted for our factorization, and when we incorporate omega(a, b) into Equation 4.5 we have Equation 4.9.

Figure 4.8: Hanning Window


    \omega(a, b) = (1/2) (1 - cos(2 \pi (a + b N_2) / (N - 1)))        (4.8)

    f(\omega, b) = \sum_{a=0}^{N_2 - 1} (1/2) (1 - cos(2 \pi (a + b N_2) / (N - 1))) X(a + b N_2) W_N^{ak}

    X(k) = \sum_{b=0}^{N_1 - 1} f(\omega, b) W_N^{b N_2 k}        (4.9)

It follows that we must determine if our algorithm will be effective against the full spectrum of tones f_0(C_0) \cdot 2^k through f_0(B_0) \cdot 2^k. We include the Hanning window in all the following calculations. In order to verify accuracy and performance we ran Equation 4.9 against a set of synthetic signals, which included a minimum of 12 harmonic steps for every frequency up to and including the Nyquist. The results are shown in Table 4.3.

We are approaching a key threshold in our frequency detection performance. Recall from Section 3.1.4 that our time budget allows for 500 ms transform time. Instead of jumping to a very complicated musical score, we will continue to incrementally "complexify" our signal and evaluate our analysis as we go. You can see another signal pattern in Figure 4.9 and the resultant FFT values for the specified frequencies in Table 4.4.

An interesting result is that the DFT and FFT now differ slightly in their real and imaginary components. Initially we assumed this was caused by numerical precision issues in our algorithm, and as it turns out we were correct in our assumption. In Section 4.2 we discovered certain frequency ranges were not being detected properly, and this was due to the fact that we had mixed the frequency search variable type between integer and floating point, causing the e^{-i 2 \pi (int)(n k) / N} computation to return rounded results. After correcting the problem the output is exact between the DFT and our parallelized transform.
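The window weight of Equation 4.8 is a one-liner; a minimal sketch (illustrative names, applied per sample index before the inner summand of Equation 4.5):

    using System;

    static class Window
    {
        // Equation 4.8 adjusted for the N = N1 * N2 factorization:
        // w(a, b) = 0.5 * (1 - cos(2 pi (a + b*N2) / (N - 1))).
        public static double Hann(int a, int b, int n2, int n)
        {
            return 0.5 * (1.0 - Math.Cos(2.0 * Math.PI * (a + b * n2) / (n - 1)));
        }
    }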


Table 4.3: FFT Performance Extracting the Full Spectrum of Tones

    Sample Rate    Num Freqs    Ave Time (ms)
    8000           96           156.2
    16000          108          219.2
    22050          120          422.6
    32000          120          452
    37800          132          569.2
    44056          132          624
    44100          132          640.4
    47250          132          696
    48000          132          698.2
    50000          132          719.8
    50400          132          724.4
    88200          144          1334

Figure 4.9: Synthesized Signal {2049, 8000, 16000, 22031} Hz Over 1 Second

4.2 Tone Detection and Characterization


Table 4.4: FFT Versus DFT Accuracy

    Rf (Hz)    DFT (2-norm)    FFT (2-norm)
    2049       2.500003        2.488039
    8000       2.499994        2.488039
    16000      2.499994        2.488036
    22031      2.49998         2.488338

With our raw frequency detection technique in place, we must determine an efficient and accurate way of characterizing the musical signal in terms of its musical notes (e.g. A, A#, C, etc.). We do not expect to find a perfect technique, especially when it comes to complex scores, noise and/or percussion instruments generating anti-tonal sound. We also have to contend with discrete samples of a continuous wave. It is understood that we will encounter frequency aliasing, frequency loss and detection ambiguity [6,22,29,30,34,38]. The following detection algorithm began by borrowing a technique proposed by Jehan [18], which leverages a scaled, homogeneous, mean squared error of the spectral graph. During our experimentation we were able to simplify the response detection for our purposes of only extracting dominant tones.

4.2.1 A Detector Predicate

Provided with an amplitude in relation to a particular frequency, we must be able to estimate the musical note; however there is an additional challenge. Recall the DFT requires us to impose the mathematical floor of real values, thus accommodating situations where a frequency such as 27.5 Hz exists in the original signal but our analysis produces a peak at 27 Hz, or potentially 26 through 28 Hz, or some similar combination. This could be troublesome going forward; however all our frequencies differ by at least 0.5 Hz, which means we can truncate each frequency to the nearest integer without too much trouble. Given a complex valued frequency distribution X and a \in \mathbb{C}, we can detect a frequency peak using Equation 4.10.


    S(a) = |a| / max(|X|)

    D(a) = True if S(a) > t;  False if S(a) <= t        (4.10)
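Assuming t is a tuning threshold chosen empirically (its value is not fixed here), the predicate of Equation 4.10 might be sketched as follows (illustrative code, not the thesis detector):

    using System;
    using System.Numerics;

    static class Detector
    {
        // Equation 4.10: S(a) = |a| / max|X|; D(a) = S(a) > t.
        // X is the complex frequency distribution, t the detection threshold.
        public static bool Detect(Complex a, Complex[] X, double t)
        {
            double max = 0.0;
            foreach (Complex c in X) max = Math.Max(max, c.Magnitude);
            return max > 0.0 && (a.Magnitude / max) > t;
        }
    }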

Theorem 4 (The "Tigger" Theorem): Every element in M can be mapped to a distinct integer of the form

    T(f) = floor(100 (f / 2^{floor(log_2 f)} - 1))        (4.11)

Proof: Given the equation

    t = f / 2^{floor(log_2 f)}        (4.12)

Since f / 2^{floor(log_2 f)} = f / 2^{k + c} for some integer c, it implies k + c = floor(log_2 f), which implies (4.12) maps any frequency to its fundamental divided by another c factors of 2. If we compute the values for each fundamental frequency we get T = {1.021875, 1.0825, 1.146875, 1.215625, 1.2875, 1.364375, 1.445, 1.53125, 1.6225, 1.71875, 1.82125, 1.929375}, ordering the set ascending by the corresponding fundamental. When we subtract 1 from each element we get {.021875, .0825, .146875, .215625, .2875, .364375, .445, .53125, .6225, .71875, .82125, .929375}. When we multiply by 100 and take the mathematical floor we have the set Q = {2, 8, 14, 21, 28, 36, 44, 53, 62, 71, 82, 92}. By inspection every element in Q is unique, therefore (4.12) maps every element in M to a distinct integer.

Definition 3 (Tigger Harmonic Set): Given the Tigger Theorem mapping T: M -> Z, define Theta to be the set of all Tigger Harmonic values such that Theta = {T(m), m \in M}.
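A direct transcription of Equation 4.11, as an illustrative sketch:

    using System;

    static class Tigger
    {
        // Equation 4.11: T(f) = floor(100 * (f / 2^floor(log2 f) - 1)).
        // Collapses any harmonic back into its fundamental's octave and
        // yields one of the distinct integers in Q.
        public static int Map(double f)
        {
            double octave = Math.Pow(2.0, Math.Floor(Math.Log(f, 2.0)));
            return (int)Math.Floor(100.0 * (f / octave - 1.0));
        }
    }

For instance, Map(27.5) and Map(55.0) both yield 71, which is the point of the theorem: every harmonic of a note collapses to the same distinct integer.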


Now that we have a detection and characterization technique, we devise an algorithm which allows us to process any audio input stream and guess the notes within that stream. We construct our algorithm as follows: Equation 4.6 extracts a portion of a frequency distribution of a musical time series, Equation 4.10 allows us to detect the presence of a frequency, and Theorem 4 allows us to map that detected frequency to a musical note. The entire approach is defined in Algorithm 2, and Listings 4.1 and 4.2 show the main structures and methods used by the algorithm.

Algorithm 2: Note Characterizer

    Private: X {sample inputs}, SearchRf {f1, f2, ...}, DetectedNotes
    Private: N <- Length(SearchRf), i <- 0

    X <- FFT(X, SearchRf)
    For k <- 0 To N - 1
        If D(X[k]) Then
            DetectedNotes[i] <- T(k)
            i <- i + 1
        End If
    Next
    Return DetectedNotes


Listing 4.1: Characterization Structures

    // Immutable structure that holds info about
    // a fundamental tone.
    struct MusicalNote
    {
        // Note enumeration value
        public eMusicNote myNote;
        // The fundamental Rf for this note
        public float myRf;
        // The tigger mapped value.
        public float myTiggerRf;
    }

    // Mutable structure that holds information
    // about an observable tone.
    struct MusicalDetect
    {
        // The starting second the note was seen.
        public long myTimeOn;
        // The observed frequency
        public float myRf;
        // The observed amplitude
        public float myAmp;
        // The note detected.
        public MusicalNote myNote;
        // The duration the tone was played in milliseconds
        public float myDuration;
        // The harmonic of the fundamental in myNote.
        public int myHarmonic;
    }


Listing 4.2: Characterization Methods

    // Determines the fundamental note from any frequency.
    // f := The frequency for which to guess the note.
    MusicalNote Characterize(float f);

    // Determines which notes exist within the given time
    // series starting from offset.
    // X := Set of time series samples (PCM)
    // sampleRate := The samples per second in X
    // offset := Where in X to begin guessing.
    MusicalDetect[] GuessNotes(Z2[] X, int sampleRate, int offset);

It is wise at this point to experiment with the algorithm and determine its effectiveness. Figure 4.10 depicts the time plot of a more complicated, albeit slightly unrealistic, signal generated with the synthesizer. The signal is composed of three tones (A, C, G#) at values of (.5, 1000, 2), (.32, 5000, 1), (.96, 7000, 2) (frequency, power and time (seconds)) respectively. We execute Algorithm 2 on this signal and demonstrate the results in Table 4.5.

Table 4.5: Initial Note Guessing Results

    Time    Found    Expected    Amp
    0       A        A           248.0868
    1       A        A           248.0868
    2       C        C           1242.433
    3       G        G#          1736.93
    4       G        G#          1741.317


Figure 4.10: Synthesized Tones @ 44.1 kHz, {A, C, G#} Over 5 Seconds

The results of our detection and characterization are promising, however they are obviously not exact. Recall that the FFT must work with discrete frequency values, thus providing a challenge to the accuracy of the Tigger Theorem. Figure 4.11 illustrates a comparison of the rational valued harmonics versus the discrete harmonics in the 44,100 Nyquist range. The vertical axis is the image under the Tigger mapping and the horizontal axis is the frequency denoted by the corresponding musical note.

Figure 4.11: Comparison of Real vs Discrete (Tigger Theorem), Note vs T(a)


The results are surprisingly similar, however we notice clear deviations that are most prominent at the fundamentals. We shall attempt to remedy this error by adding a new characterization step. We require a new set of integer values directly mapped to the fundamental frequencies and another set directly mapped to the harmonics.

Definition 4 (Fundamental Integer Set): Define F_I to be the set of integers bijective to F such that F_I = {floor(F)}.

Definition 5 (Harmonic Integer Set): Define M_I to be the set of integers bijective to M such that M_I = {floor(M)}.

Theorem 5 (The "Discrete Tigger" Theorem): Any fundamental frequency can be mapped to a unique integer.

Proof: We compute by exhaustion the set F_I = {16, 17, 18, 19, 20, 21, 23, 24, 25, 27, 29, 30}. By observation all elements of F_I are unique.

Theorems 4 and 5 allow us to derive a slightly more accurate approach in the numerical environment. Given any music frequency value a \in M_I, we attempt our characterization by determining if the frequency is an element of F_I, using the corresponding note if a match is found. If no match is found in the fundamentals, we compute the "Tigger Harmonic" and search for a match in the "Tigger Harmonic Set" Theta so as to minimize |T(a) - Theta_k|. We execute this new technique on the same signal and display the results in Table 4.6. The results are perfectly accurate for our very simple signal. The source code for the note detection/characterization can be found in Appendix A.0.7 in the function GuessNotes.
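The two-step characterization can be sketched as a pair of lookups. The tables below are filled from F_I and Q as computed above, and Tigger.Map refers to the sketch following Theorem 4; this is illustrative only, the thesis version being GuessNotes in Appendix A.0.7:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class Characterizer
    {
        // Truncated fundamentals F_I -> note name (Theorem 5).
        static readonly Dictionary<int, string> FI = new Dictionary<int, string> {
            {16,"C"},{17,"C#"},{18,"D"},{19,"D#"},{20,"E"},{21,"F"},
            {23,"F#"},{24,"G"},{25,"G#"},{27,"A"},{29,"A#"},{30,"B"} };

        // Tigger Harmonic values Q per note (Theorem 4).
        static readonly (string Note, int T)[] Theta = {
            ("C",2),("C#",8),("D",14),("D#",21),("E",28),("F",36),
            ("F#",44),("G",53),("G#",62),("A",71),("A#",82),("B",92) };

        // First try an exact fundamental match in F_I; otherwise pick
        // the note whose Tigger harmonic is nearest to T(f).
        public static string Guess(double f)
        {
            string note;
            if (FI.TryGetValue((int)Math.Floor(f), out note)) return note;
            int t = Tigger.Map(f);
            return Theta.OrderBy(p => Math.Abs(p.T - t)).First().Note;
        }
    }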


Table 4.6: Final Note Guessing Results

    Time    Found    Expected    Amp
    0       A        A           248.0868
    1       A        A           248.0868
    2       C        C           1242.433
    3       G#       G#          1736.93
    4       G#       G#          1741.317

4.2.3 Rigorous Characterization Analysis

We have successfully fine tuned our model to perfectly detect the sequence (A, C, G#), so we proceed to perform more rigorous analysis. We start by synthesizing a sound that appends every harmonic for every note into a time series and execute Algorithm 2 on the time series. Figure 4.12 clearly illustrates the results. The algorithm appears to break down in the upper harmonics, which has been observed by Bello [22] and others. A high frequency detection method is supplied by Bello, which could possibly be applied at a later date. For the time being, the 11th and 12th harmonics remain ~45% accurate while the rest of the spectrum is 100% accurate. As a final test we synthesize signals which consist of random tones at random harmonics for random intervals, creating a more complex sound of approximately 5-14 seconds. Figure 4.13 illustrates one example of the random sound used in this analysis. We keep track of the ground truth information and use it to measure the expected output against the guessing algorithm. At first we generate only a few signals and run them through the detector. The output can be seen in Table 4.7.


Table 4.7: Accuracy of Random Signals

    Time  Found  Expected  Amp
    0     G      G         983.947
    1     G      G         983.947
    2     F#     F#        2487.615
    3     A#     A#        963.4857
    4     G#     G#        2223.319
    5     G#     G#        2223.319
    6     F      F         1707.491
    7     D      D         1086.08
    8     G      G         1142.743
    9     G      G         2419.794
    10    D      D         2255.06
    11    B      B         2432.167
    12    B      B         2432.167

    Time  Found       Expected  Amp
    0     F           F         2219.488
    1     C           C         2048.204
    2     C#          C#        2557.707
    3     G           B         10.61324
    4     A           A         2507.143
    5     A           A         2507.143
    6     C           C         1920.38
    7     G           G         2351.047
    8     G           G         2351.047
    9     Undetected  F#        5470.793
    10    Undetected  F#        5470.793

    Time  Found  Expected  Amp
    0     F      F         1927.708
    1     F      F         1927.708
    2     E      E         1222.492
    3     E      E         1222.492

    Time  Found  Expected  Amp
    0     A#     A#        1157.51
    1     F#     F#        972.2834
    2     D#     D#        564.4082
    3     A#     A#        513.2103
    4     C#     C#        1556.932


Figure 4.12: Accuracy Plot (Note vs Harmonic vs Percent Accuracy, 0-100%)

Figure 4.13: Synthesized Random Sounds @ 44.1 kHz

As expected, the guesses are fairly accurate, but we need more information to determine how accurate, so we proceed to automate a very rigorous test over hundreds of random signals. We tally the results into three areas: Figure 4.14 shows the success as a percentage of accuracy over time, Table 4.8 demonstrates the overall performance, and Table 4.9 shows us the tones and ranges that were undetected.

The execution time is well below what was expected, since ~8 seconds of time only takes ~1.6 seconds to guess the tones. This leaves us a substantial amount of processing slack for melody analysis.


Table 4.8: Detection & Characterization Results (Initial Metrics)

    Total Runs    Ave Sample Time    Ave Guess Time    Overall Accuracy
    200           8.435 s            1675.49 ms        86.60344%

Table 4.9: Detection & Characterization Results (Undetected)

    Undetected Note    Times    Frequency                 Amplitude
    C                  17       35471.36-35471.36 Hz      2557.409-9612.205
    C                  41       28160-56320 Hz            1008.208-10325.92
    C                  17       26583.04-53166.08 Hz      1367.82-10405.38
    C                  22       22353.92-44707.84 Hz      1348.063-10926.15
    C                  21       23674.88-47349.76 Hz      1836.552-10948.71
    C                  30       31610.88-63221.76 Hz      1796.666-10658.98
    C                  10       33484.8-33484.8 Hz        1145.468-7371.489
    C                  29       25088-50176 Hz            1211.848-10403.89
    C                  24       29839.36-59678.72 Hz      1711.161-10979.81
    C                  9        37580.8-37580.8 Hz        3221.58-10981.38
    C                  10       42188.8-42188.8 Hz        1913.405-10745.18
    C                  8        39833.6-39833.6 Hz        1283.639-10054.6


Figure 4.14: Detection & Characterization Results (Run vs % Accuracy)

However, the accuracy of ~86% is fairly good but does not seem to follow from our 100% accuracy in Table 4.5. After careful observation of Table 4.9 we discover a phenomenon. Notice that all undetected instances of C are above the Nyquist frequency of 22.050 kHz. Recall we are sampling at 44.1 kHz, which means that 22.050 kHz is the maximum detectable frequency. As it turns out, our tone generator has a very simple but significant defect. At octaves above 12 it creates frequencies above the Nyquist range. We correct the issue by limiting the synthesized frequencies to the Nyquist range and re-run our analysis. The results are excellent. Table 4.10 and Figure 4.15 show that we are now at 100% accuracy running ~500% faster than real time. As previously stated, no algorithm will be perfect, thus we expect to see a margin of error when processing real music, but for the time being we are confident enough to move forward.


Table 4.10: Detection & Characterization Corrected Results (Final Metrics)

    Total Runs    Ave Sample Time    Ave Guess Time    Overall Accuracy
    200           8.855 s            1814.64 ms        100%

Figure 4.15: Detection & Characterization Results (Corrected Run vs % Accuracy)

4.3 Chord Detection and Characterization

A musical chord can be difficult to clearly define, especially when we diverge from western music and include the idea of inharmonic or dissonant tones. Informally, a chord is a set of notes played at the same time, but to limit confusion and enhance scientific rigor we adopt Tymoczko's algebraic definition of a musical object that is "an ordered sequence of pitches" [44] whose elements are vertices of an element in the D_12 group.³ To precisely detect and characterize a chord is very difficult. Our tone characterization provides us with a set of musical notes, but we must determine the starting time and duration of groups of notes which form a chord.

³ The dihedral group on 12 vertices. Discussed in detail in Section 4.4.


This is especially tricky when dealing with sub-band⁴ onset of multiple tones combined with low latency. Remember from Section 2.1 that a synesthetic response is triggered by a tone being played for at least 3 seconds. We wish to elicit this same response in normals, and we conjecture that a preferred response in normals is more likely when mirroring the stimuli of synesthetes. The reason being that all humans share common neurological components in auditory and visual cognition, and most people experience some kind of synesthesia [16,37,38]. Nevertheless, at this point one would prefer a sub-second knowledge of tone onset to aid in chord identification. For example, suppose we had a two second interval and our characterization algorithm produces the notes {A, C, G#} and {A, D, E}. We don't know at what times within the second any of the tones started, only that they exist somewhere within the second. Nothing prohibits an 'A' note from playing for 25% of the first second and 50% of the next second, and our detection would not specify which portion of the second it began or ended. We shall add a high fidelity time model to our list of future work and press onward with our 1 second fidelity.

4.3.1 A Chord Detection Algorithm

In order to identify a chord we employ a straightforward technique, but first we must discuss some terminology. The term residual, as used in this section, will refer to any unison tone not immediately paired with an existing chord. We use the term minimum duration to represent the minimum time (milliseconds) a note is played. Figures 4.16, 4.18, 4.21, 4.19, and 4.17 show the Use Cases⁵ for our algorithm. It should be noted that our algorithm is agnostic to time fidelity, so future work will employ the same algorithm even if we adjust sub-band frequency extraction. We break our cases into New Notes, Match, Refresh Chord, Residuals, Chord Expiration and Chord Kill. Each case is a sequential portion of the grander algorithm, with New Note being the entry point and Match being a utility for comparison of notes.

⁴ We use this term in this context to refer to space in the sampling less than the current Nyquist sample rate.
⁵ In Software Engineering a Use Case defines the concept of operation of an "actor" and the interaction with a particular interface, machine or domain.


We use the class structure in Listing 4.3 to manage the chord construction. We call this class the ChordDetector, and it acts as a state machine that behaves according to the algorithms described by Use Cases 4.16, 4.18, 4.21, 4.19, and 4.17. The chord detector class accepts one to many tones in the form of a MusicalDetect array (Listing 4.1). ChordDetector makes no assumptions about the incoming tones and attempts to merge items by time and duration into appropriate chords or unison tones. The source code for the chord detector can be found in Appendix A.0.9 and A.0.8.

Use Case New Notes:
    Assumption: None.
    (1) A set of new notes are characterized.
    (2) If no Match is found: Residuals.
    (3) If Match is found: Refresh Chord.
    (4) Increment step time to the maximum of all notes' onset time.
    (5) Run Chord Expiration.

Figure 4.16: Use Case New Note


Use Case Match:
    Assumption: Let n be any existing note and m be the incoming note; T is the time onset and D is the last known duration of a note.
    (1) If f_0(n) == f_0(m), and if f_k(n) == f_k(m) for some k, and if T(n) <= T(m) <= T(n) + D(n), we have a match.
    (2) Otherwise no match.

Figure 4.17: Use Case Match

Use Case Refresh Chord:
    Assumption: Matching note found.
    (1) If a matching note is found in an existing chord, increment its duration by minimum duration.
    (2) Else if a note with the same onset time is found, add the note to the chord.

Figure 4.18: Use Case Refresh


Use Case Residuals:
    Assumption: No matches to the note.
    (1) Start a new chord.
    (2) Add note to chord.

Figure 4.19: Use Case Residual

Use Case Chord Expiration:
    Assumption: Let n be any existing note; T is the time onset and D is the last known duration of a note. StepTime is a counter that holds each interval of minimum duration that has passed.
    (1) If T(n) + D(n) < StepTime for any note, remove that note.
    (2) If all notes expired: Kill Chord.

Figure 4.20: Use Case Expiration

Use Case Kill Chord:
    Assumption: All notes expired in chord.
    (1) Remove the chord from the dictionary.

Figure 4.21: Use Case Kill


Listing 4.3: Chord Detector Class

    // A chord structure that holds notes played at the same time
    class Chord
    {
        // List of notes in this chord.
        List<MusicalDetect> myNotes;
    }

    // Manages a set of chords detected in real time.
    class ChordDetector
    {
        // The current time step in units of myMinDuration,
        // incremented on each call to NewNotes.
        private long myTimeStep = 0;

        // The list of currently held chords
        private Chord myChord;

        // Add new notes to be detected and added to a chord
        // or create new chords.
        public void NewNotes(MusicalDetect[] newNotes);

        // Request all currently held chord structures.
        public Chord GetChord();
    }

Fortunately, it is easier to verify this algorithm because the data space is smaller and the algorithm is in fact deterministic. Table 4.11 demonstrates the input series of notes manually constructed to try to fool our algorithm. Notice that from time 1 to time 2 our algorithm is not fooled and properly removes tones A2 and G#3. It also compresses B4 into the same note and properly disposes of C2 from time 2 to time 3. If we experience issues later in our implementation we will revisit a more rigorous test of the chord characterizer.


Table 4.11: Chord Detector Basic Test

    Time      0              1              2          3              4
    Notes     C2 A2 G#3      C2 A2 G#3      C2 B4 B4   B4 A4 G#4      B4 A4 G#4
    Expected  {C2, A2, G#3}  {C2, A2, G#3}  {C2, B4}   {B4, A4, G#4}  {B4, A4, G#4}
    Output    {C2, A2, G#3}  {C2, A2, G#3}  {C2, B4}   {B4, A4, G#4}  {B4, A4, G#4}

4.4 Melody Analysis

In this section we explore the analysis of the music as a cohesive set of chord progressions, wherein a chord may consist of a single note. We treat each chord as a geometric musical object and begin the melody analysis using the five features of music in Section 2.1. Our melody mapping is based upon the notion that music has some kind of center; that musical objects which move in small increments and are consonant are more pleasing; and that similar chord structures are expected to appear often.

We must quickly mention the algebraic groups, specifically the dihedral group. The dihedral group is the group of symmetries of a regular polygon on n vertices [12]. To bound the scope of our thesis we do not offer a rigorous education in group theory, but we do offer a simple intuitive explanation of the dihedral group. Suppose you have a planar graph with four vertices. You embed the vertices in the 2D cartesian plane such that all vertices are symmetric and equidistant from (0,0). If we arbitrarily label each vertex as in Figure 4.22, we have the dihedral four group, often referred to as D_4. The first image of Figure 4.22 shows position 0, which we deem the identity. The second image is a clockwise rotation about the origin of 90 degrees. If we continue to superimpose the vertices in every permutation and label each configuration, those labels combined with a binary operation form the group.


Figure 4.22: D_4 Group Example

Figure 4.23: D_12 Pitch Class

It is called an algebraic "group" because it can be shown to possess closure⁶, associativity, inverse and identity of the binary operation and its elements [13]. In the case of the symmetric group we use the ∘ operator to denote the binary operation. For example, r0 ∘ r1 = r1 = r1 ∘ r0, giving us the identity in r0. Similarly, r1 ∘ r1 = r2 and r1 ∘ (r1)^3 = r1 ∘ (r1)^{-1} = r0. Because we return to the identity element, (r1)^3 is the inverse of r1. This can be thought of as taking the value of 90 deg + (90 deg)·3 ≡ 0 mod 360. One final way to envision a group inverse and identity is using the familiar real numbers. Notice that the inverse of the value 2 is 2·(1/2) = 2·2^{-1} = 1, whereas the identity of 2 is 2·1 = 1·2 = 2. The algebraic dihedral group simply uses a new fancy label for familiar concepts such as inverses and identities. It is within these group structures that we determine the behavior of the music.

We analyze the melody according to the Pitch class, a term in music theory referring to the D_12 group with musical notes shown in Figure 4.23 [45] (throughout this section we will interchange the terms pitch class and D_12). Using the five tenets of music we construct a pattern analysis designed to capture the flow of the melody.

⁶ Closure of a group implies that the binary operation on two elements results in another element within the same group.


Our ChordDetector can extract enough information to assign a value of the D_12 group to each note of a chord, which forms a subgroup of the group. There are many changes to a subgraph that can occur, but two key changes are known as distance preserving functions: transposition and inversion. Transposition and inversion are analogous to the geometric operations of translation and reflection [46]. The smaller the translation value the harder it is to detect the change [46], and vice versa. Inversion, according to Tymoczko, has a similarity property wherein inversely related chords sound similar. We will use these ideas to help guide our mappings.

4.4.1 A Generalized Parameterized Visualizer

Much like our proof of concept in Section 3.1, we want to produce similar output on screen but using more rigorous musical information. Remember from Table 3.1 that the initial implementation is directly linked to the time series. In order to facilitate the use of the five features of tonality we must break the time series association, refactor our code and generate a robust and re-useable animation widget. Constructing various animation components can be very time consuming and complicated. In future work we intend to experiment with a myriad of visual behaviors, but for our current scope we will re-use the current work. This means we want to be able to plug in a new animation without having to re-wire visual calculations or depend on visuals tied closely to the type of data. We created a class structure and interface designed to help us de-couple the renderings from the analysis logic. To that end, we define a new space.
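Transposition and inversion on the integer model of the pitch class are one-liners. This is a hedged sketch using the usual music-theory forms T_n(p) = p + n mod 12 and I_n(p) = n - p mod 12 as computable stand-ins for the rotations and reflections of D_12:

    using System;
    using System.Linq;

    static class PitchClass
    {
        // Transposition: translate every pitch class by n semitones.
        public static int[] Transpose(int[] chord, int n) =>
            chord.Select(p => ((p + n) % 12 + 12) % 12).ToArray();

        // Inversion: reflect every pitch class about the axis n.
        public static int[] Invert(int[] chord, int n) =>
            chord.Select(p => ((n - p) % 12 + 12) % 12).ToArray();

        public static void Main()
        {
            int[] cMajor = { 0, 4, 7 };                                 // C, E, G
            Console.WriteLine(string.Join(",", Transpose(cMajor, 2)));  // 2,6,9 = D, F#, A
            Console.WriteLine(string.Join(",", Invert(cMajor, 0)));     // 0,8,5 = C, G#, F
        }
    }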


Definition 6 (Value Space): A value space is a vector in Q^m whose elements are values ranging from 0 to 1, ordered by the importance of the value from lowest to highest.

Definition 7 (Color Space): A color space is a set of integer RGB values ordered from lowest to highest importance.

Definition 8 (Focal Space): A focal space is a vector in Q^m whose elements are values from 0 to 1, suggesting a reference point or centroid of activity somewhere within the value space.

Definitions 6, 7, and 8 allow us to discuss some geometry with very specific values, without having to know specifically what that geometry will be. When we combine all these spaces together we have our visual space, which can be used to parameterize the drawing or animation engine.

Definition 9 (Visual Space): A visual space is an object that contains a value space, focal space and color space along with:
    Spectacle - A value from 0 to 1, with 1 being more "fancy" and 0 being dull.
    Maximum elements - A value that specifies a hard limit on the number of generated elements.

We create a new interface called IVisualizer, meant to be implemented by an animation engine that draws things on screen. We need a way to tell the visualizer how to draw according to our music, so we provide a class structure called VisualSpace that conforms to our definitions. The visual space provides intuitive information that a "knowledgeable" algorithm can compute ahead of time and offer the animator.


The visual space decouples the animation implementation from engineering units or the type of data. There is an implicit agreement between the visual space and animation engine, which includes the tenets of Table 4.12.

Table 4.12: Visual Space Axioms

    Axiom    Description
    (a)      All values are considered to be homogeneous to one another and all data going forward.
    (b)      Whenever possible values should range from 0 to 1.
    (c)      Unless not supplied, all supplied colors are to be used and derived colors are to be gradients of supplied colors.

As we progress we may add more parameters to the visual space, but for now we move on to the melody and how we intend to extract useable values that can be fed to the visual space. We attempt to describe the heuristics without getting too far into the source code, but the full melody analysis source can be found in Appendix A.0.13. The source code for the IVisualizer and our initial animation engine derived from the time domain animation can be found in the Appendix A.0.19, A.0.17 and A.0.18.

4.4.2 Mmmmmm, The Musical Melody Mathematical Modularity Movement Manager

If you are not chuckling you may not be ready to digest this approach. The technique we propose requires us to leverage everything we've considered up to this point. We approach transformation of melody using Tymoczko's five properties as a guide and exploit transposition and inversion to help create reasonable values. From Section 4.4.1 we need to parameterize the visual space in a way that makes sense. Unfortunately we must bombard the reader with several more definitions and theorems; however each definition is a critical component in calculating the visual space.


Theorem 6 (Angry Tigger Theorem): Given a chord C with N tones each having frequency a, we can approximate a value of consonance kappa ranging from 0 to 1:

    x = (1 / (N^2 - N)) \sum_{j=0}^{N-1} \sum_{i=0}^{N-1} |a_i - a_j|

    kappa = 1                                          if x < delta_1/2 or x > delta_2
    kappa = (2x - delta_1) / (2 delta_2 - delta_1)     otherwise        (4.13)

with 1 being "most" consonant.

Proof: It has been observed that two notes being played which are very close together in frequency, but not exact, cause a dissonant tone [38,43]. Given the frequencies of a chord {a_1, a_2, ..., a_N}, we compute the sum of |a_i - a_j| for all i, j <= N. We then take the average by multiplying by 1/(N^2 - N) (subtracting N in the denominator to exclude items compared with themselves, which are necessarily zero) and assign this value to x. Let delta_1 be the difference in frequency between the A and A# notes, and delta_2 be the difference between the A and B notes. Let P_0 = delta_1/2 and P_1 = delta_2 such that x = (1 - t) P_0 + t P_1. Note: we divide delta_1 by 2 to ensure we have a value that is very close, but not exactly equal. If x is less than delta_1/2 or x is greater than delta_2, we deem it consonant and return 1; otherwise we have a linear interpolation between P_0 and P_1 as a function of t, thus x = (1 - t)(delta_1/2) + t delta_2 and t = (x - delta_1/2)/(delta_2 - delta_1/2) = (2x - delta_1)/(2 delta_2 - delta_1), which yields t as a function of x. Since t is a value from 0 to 1, where 0 is equal to delta_1/2 and 1 is equal to delta_2, we have a measure of consonance from 0 to 1.
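Theorem 6 translates to a short routine. A sketch, assuming delta1 and delta2 (the A-to-A# and A-to-B frequency differences at the relevant octave) are supplied by the caller and the chord has at least two tones:

    using System;

    static class Consonance
    {
        // Average pairwise frequency distance x, then the case function
        // of Equation 4.13. Returns 1 for "consonant", smaller otherwise.
        public static double Measure(double[] freqs, double delta1, double delta2)
        {
            int n = freqs.Length;   // assumes n >= 2
            double sum = 0.0;
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    sum += Math.Abs(freqs[i] - freqs[j]);
            double x = sum / (n * n - n);   // n*n - n excludes the zero self-pairs

            if (x < delta1 / 2.0 || x > delta2) return 1.0;     // deemed consonant
            return (2.0 * x - delta1) / (2.0 * delta2 - delta1); // interpolant t(x)
        }
    }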


Definition 10 (Magnitude of a Chord): Given chord C = {c_1, c_2, ..., c_N} where c_n is a named note, the magnitude mu of a chord is defined to be the average frequency scaled by the average harmonic value

    mu = (1/N^2) \sum_{j=1}^{N} \sum_{k=1}^{N} f_0(c_k) H(c_j)        (4.14)

Definition 11 (Centricity of a Chord): The centricity of a chord C = {c_1, c_2, ..., c_N}, where c is a named note, is defined as the origin about which a chord is centered with regards to both the harmonic value and fundamental frequency.

    The Harmonic Centricity:     H = (1/N) \sum_{k=1}^{N} H(c_k)        (4.15)

    The Fundamental Centricity:  F = (1/N) \sum_{k=1}^{N} f_0(c_k)        (4.16)

Definition 11 describes centricity with respect to a single chord. We must also maintain the overall centricity of the melody across all chords as they progress, with respect to both harmonic value and pitch. We will refer to these values as H and F. We have the ability to measure things about a chord, but we need the ability to analyze change; this means additive change in regards to octave and pitch class. Technically speaking, the computer system does not support group operations of D_12 since they are non-numeric/symbolic. To solve this we assign a unique integer value to each vertex as shown in Figure 4.23. This yields the (Z_12, +) group, which is isomorphic to the cyclic subgroup {r0, r1, r2, ..., r11}, which is all the rotations of D_12. We leverage the fact that any finite cyclic group of order n is isomorphic to (Z_n, +) [14]; thus we can deal with translations of our notes as positive integers modulo 12.


Definition 12 (Change of a Chord): The change of a chord is a value from 0 to 1, where 1 means a large change in tonal structure. Let C, D be two chords whose elements are from Z_12, sorted in ascending order. Given m = max(|C|, |D|), where |.| is the order of the set, we define X = {C, 0_1, ..., 0_k, D, 0_1, ..., 0_l}, with k and l zero-filling C and D respectively up to m elements each, as the joined notes of C and D whose order is divisible by 2. Let Y = {|X_t - X_{t+m}|} with 0 <= t < m.

We fold these measurements, and subsequently their behaviors, into the visual space. We call this "Tigger's Roar".

Definition 14 (Tigger's Roar): Given a series of chords processed in time order, the viewspace of a chord is defined to be:

    Values:    (v_1, v_2, v_3) = (kappa, mu + Delta, mu (1 - kappa))
    Focus:     (c_1, c_2, c_3) = (F, 0, H)
    Colors:    (RGB_1, RGB_2, ..., RGB_k) = The Stripes Theorem
    Spectacle: (Definition 13)
    Elements:  E = 30 + (1 - kappa) 50 + Spectacle 50

where kappa is the consonance of Theorem 6, mu the magnitude of Definition 10, Delta the change of Definition 12, and H, F the centricities of Definition 11.

The notation in Definition 14 may be confusing. It is assumed all calculations are done with respect to the appropriate centricity and the most recent chord in the sequence. Definition 14 is really an amalgamation of our work, combining the properties of music, Synesthetes' response, Fourier analysis, note and chord detection and changes to melody over time. It is our final calculation and there is not much at this point that can be proven mathematically. Figure 4.24 illustrates the visual space computed using Definition 14 against 100 seconds of symphony music. It appears we have, at least in part, achieved our goal of a viewspace that represents the changes of the melody. As desired, the primary focus and primary value demonstrate a strong correlation, which means we have captured harmonic consistency. It would also seem we have addressed centricity in the focus space, because the secondary and tertiary foci are stable and consistent, which follows the theory of conjunct melodic motion.


Figure 4.24: Visual Space Example

It is encouraging to see that the third element of the value space spikes only a few times. Recall that this is the chord magnitude scaled by a dissonance factor, so we would not expect to see this value grow often. Unfortunately, it is impossible to truly deduce how the animation engine will behave from this graph.


5. Results

5.1 Experimentation

At this point we can only speculate as to how well our data will reflect the music. We must press onward, integrate the viewspace algorithm into our original time domain engine and visually examine the results. Figures 5.1 and 5.2 illustrate output using the melody-parameterized viewspace from Section 4.4. We will leave a more rigorous study and discussion of the various animation techniques for future work; for the time being one can see our current implementation in Appendix A.0.17. The imagery is stunning and we observe consistent connections between tone change and coloring, as well as a coarse association between the geometry and melody. Unfortunately, it is impossible to capture the behavior of a fluid animation sequence in a document.

Figure 5.1: Beethoven Minuet in G

Figure 5.2: Techno Electronica "She Nebula"

Our thesis addresses the construction of a processing framework capable of detecting frequencies, characterizing tones and generating a parameterized geometry from the music. At this point we have only scratched the surface in terms of the ability to visualize melody, but our prototype demonstrates interesting behavior.


For the time being we would like feedback on our progress, so we devise a survey and supply it to a small number of individuals. We created an application and framework that plays several songs to a listener. Each song is followed by a questionnaire whose results are tallied and emailed to us upon completion. The questions are based upon a weighted system where we define either positive or negative results from the response. The areas of discovery are as follows: (a) neutralize any bias toward the music genre; (b) neutralize a bias toward the aesthetics; (c) heavily weight "positive" associations of "mood" and synchronization. To that end we devised the questions and answers in Table 5.1.

Table 5.1: Questions and Answers

    Importance    Question
    2             Do you like this type of music?
    3             Were the visuals aesthetically pleasing?
    8             How much do you feel the visuals matched the "genre" of the song?
    8             Were the visuals in sync with the music?
    10            Do you feel the visuals captured the "mood" of the music?
    10            Did you see and hear enough to answer questions?
    5             Did the visuals keep your attention?

    Weight    Answer
    1         Unsure
    5         Strongly Agree
    4         Agree
    3         Disagree
    2         Strongly Disagree

To calculate the score per module, where each module is associated to one song, we let Q = {q_1, q_2, ..., q_k} be the importance of each question based on Table 5.1, and A = {a_1, a_2, ..., a_k} be the answers to each question. We compute the best possible answer based upon the following: the subject strongly dislikes the music type; strongly dislikes the visuals; strongly believes the visuals matched the type of music; strongly found the visuals to be synchronized with the music; strongly believes the visuals captured the mood; strongly feels they saw enough to make a decision; and was strongly focused on the experiment, which yields

Definition 15 (Best Score): B = {2, 2, 5, 5, 5, 5, 5}


This implies that the worst possible answer is based upon the following: the subject strongly likes the music type; strongly likes the visuals; strongly believes the visuals did not match the type of music; strongly believes the visuals to be out of synch; strongly believes the visuals did not capture the mood; strongly believes they did not see enough to make a decision; and strongly believes they were distracted, which yields

Definition 16 (Worst Score): W = {5, 5, 2, 2, 2, 2, 2}

Equation 5.1 calculates the score by creating a weighted sum of the best answer corresponding to the points for that answer.

    Let an inverse score be  s_i = ((5 - a_i) / 5) q_i
    Let a direct score be    t_i = (a_i / 5) q_i

Then we compute

    score(S) = s_1 + s_2 + t_3 + t_4 + t_5 + t_6 + t_7
    grade = score(S) / score(B)        (5.1)

Since the best possible grade is score(B) = 45, we divide the user's score from Equation 5.1 by 45 and get a "grade" for a song. The source code for processing the survey results can be found in A.0.20. In future work this foundation allows us to continue our research as the implementation grows more sophisticated.
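Equation 5.1 as an illustrative C# sketch (ours, not the A.0.20 processor). The first two questions score inversely to neutralize genre and aesthetic bias, the rest directly; rather than hard-coding the best score, we normalize by score(B) computed the same way from Definition 15:

    using System;

    static class Survey
    {
        // q = question importances from Table 5.1, a = answer weights (1..5).
        public static double Grade(int[] q, int[] a)
        {
            double score = Weighted(q, a);
            int[] best = { 2, 2, 5, 5, 5, 5, 5 };   // Definition 15
            return score / Weighted(q, best);
        }

        static double Weighted(int[] q, int[] a)
        {
            double sum = 0.0;
            for (int i = 0; i < q.Length; i++)
                sum += (i < 2) ? (5.0 - a[i]) / 5.0 * q[i]   // inverse score s_i
                               : a[i] / 5.0 * q[i];          // direct score t_i
            return sum;
        }
    }

For example, Survey.Grade(new[] { 2, 3, 8, 8, 10, 10, 5 }, answers) yields the per-song grade as a fraction of the best case.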


5.2 Survey Results

The survey consisted of 11 people ranging from age 9 to 75, with both male and female participants. Figure 5.3 shows that Techno Electronica and Symphony music scored the highest. This is not at all surprising, being that both were commonly used during development. The averages in Figure 5.4 seem low, but are much better than we had expected.

Figure 5.3: Survey Results by User (Music Genre vs Grade %)

Figure 5.4: Survey Results, Average Grade by Genre

When we compare the scores in Figure 5.4 with Figure 5.5, the dramatic shift within the same genre underlines the challenge we face with regard to subjectivity. It would be premature to draw any solid conclusions from this experiment, however we can state that we have a quantifiable measure of success and a clearer understanding of external perception.


Figure 5.5: Survey Results, Top and Bottom 5 Scores

5.3 Conclusions and Future Work

As we have mentioned, time fidelity poses a real challenge in our ability to accurately model sound. Not only do we intend to evaluate our detection accuracy with realistic data, we intend to increase the time fidelity and extract sub-second peaks to improve characterization. Because we have observed that the Fourier transform is sensitive to the sample size, we must research adjusting all calculations. It follows that we will need a more effective detection; we may possibly use the technique proposed by Jehan [18]. We would also like to consolidate the entire model end-to-end in a nicely compacted algebraic factorization; the idea is to find a closed-form computation, which gets us closer to being able to prove an isomorphism. The issue of melody is very complex and it is still unclear how important it will be in our effort, but we shall continue to research this. We intend to continue focusing on the geometry of music and on mathematical representations of melody.


Although we have captured a good amount thus far, we want to continue researching what it means to be mathematically uplifting or depressing in terms of a pitch class operation. We also intend to expand upon our initial work with transposition and inversion and unlock a more sophisticated relationship within the musical chord progressions. Most importantly, now that we have a solid framework to expand upon, we plan on experimenting with many forms of geometry and animation, to include fractals, fluid dynamics and other visuals that may be more conducive to intuitive representation.

5.3.1 Tangential Applications

There are many directions we can take this research and many of them are practical. One of the most uplifting is the ability to allow a hearing impaired person to "see" what their environment sounds like, not only for entertainment purposes but for safety and comprehension. Suppose that a deaf person has a dog and that person has never heard the animal's voice. Our research may allow them to associate unique patterns of imagery with the mood or "timbre" of the animal, allowing them to discern anger, happiness, concern or what have you. This may also be extended to include common sounds around the household. Imagine that a microwave "ding" goes off, or the oven timer or the washer/dryer. More importantly, imagine that someone breaks open the front door or a window, or is yelling for help. A large panel on various walls of the house, or even a handheld device, could alert the user to the sounds of the environment. With the proper mathematics the images would be consistent and therefore distinctly identifiable. You could say it's like braille for the eyes!

5.3.2 The Lawnmower Filter

During our analysis we identified a few mathematical techniques which warrant further research and may prove useful going forward. The first technique involves instant filtering of unwanted frequency information using the matrix expansion of an exponential series.


Recall the well known series e^x = 1 + x + x^2/2! + ..., cos(x) = 1 - x^2/2! + x^4/4! - ..., and sin(x) = x - x^3/3! + .... Recall Euler's Equation e^{ix} = cos(x) + i sin(x). If we replace x with a matrix A and insert it into the exponential series, we have e^A = I + A + A^2/2! + .... Using Euler's Equation we have e^{iA} = cos(A) + i sin(A), and we can solve for the cosine and sine using the series expansion and some convergence technique. Now suppose that A is a diagonal matrix whose elements A_ii are of the form {-2 pi k_1, -2 pi k_2, ...}. If we substitute the matrix form of the exponential function and multiply the exponent with the imaginary unit, we have a simultaneous Fourier transform that checks for multiple frequencies at once: Y = \sum_{n=0}^{N-1} X(n) e^{iAn}. The risk here is that our response to multiple frequencies is bound together and likely to be inaccurate with respect to the current sample. Let's reverse our thinking from detection to filtering. We have the n-th roots of unity but not necessarily a valid coefficient. If we select some magnitude epsilon such that

    X(k) = \sum_{n=0}^{N-1} X(n) - epsilon ||e^{iAn}||_p        (5.2)

we may just be able to "delete" unwanted noise or other frequencies from the signal, enough to see some desired pattern.


depictionofthegeometry.Thehopeisthateachinstrumentcantheoreticallybe modeledbysomefunction f x whosederivativeidentiesauniquesignaturetothat instrument. Figure5.6: FingerprintTechnique Onepossiblesolutionmightbetointegrateovereachsegmentofpositiveslope andaddalltheanglestogether.Let x = 8 > > < > > : 0 f 0 x < 0 1 f 0 x > 0 thenthefunction = X 2 6 6 4 Z x =0 x 6 =0 tan )]TJ/F18 7.9701 Tf 6.586 0 Td [(1 j f x j j x )]TJ/F20 11.9552 Tf 11.955 0 Td [(x 0 j dx 3 7 7 5 : .3 Equation5.3providesacontinuousmodelofthisapproachthataggregatesthefront sideslopeofsomeseriesofimpulsesoveradesiredregionofthewave.Wesimply deriveadiscreteversionofthisfunction,combineitwithEquation5.2toreducenoise andvoila!auniquevaluethatdescribestheinstrument.Ifwetaketheexpectedvalue overseveralsamplesandcreatearangeoftolerancewehaveamin/maxboundsfor detectingtheinstrument. 76


5.3.4Conclusions Althoughwemadesignicantprogress,ourresearchisfarfromover.Wehave learnedanumberofthingsaboutthenatureofprocessingsoundandthechallengesof precisionandtimeliness.Wesuccessfullymeldedtheneurologicalresponseof colored hearing withthecomputersystemsabilitytogenerateimagery.Wedemonstrated amathematicalmappingfromsoundtopictographandsuccessfullyimplementeda exibleframeworkcapableofprocessinganysoundinrealtime.Wehavedemonstratedasubstantialimprovementincapturingthemelodyinthefrequencydomain versusthetimedomain.Wehaveproposedandimplementedauniquetechniquefor musicalfrequencydetectionandnotecharacterization.Wehaveexposedourimplementationtohumansubjectsandseenencouragingresults.Allinall,ourprogress opensthedoortoalargerworldandlaysthefoundationformorepossibilitiesin extendinghumansensoryperception. 77


REFERENCES [1]ThomasM.FioreAlissaS.CransandRamonSatyendra.Musicalactionsof dihedralgroups. TheMathematicalAssociationofAmerica ,June-July,2009. [2]MIDIManufacturersAssociation.Historyofmidi. http:/www.midi.org/ aboutmidi/tut_history.php ,2015. [3]JanDirkBlom. ADictionaryofHallucinations ,page73.SpringerScience+BusinessMedia,2010. [4]C.SidneyBurrus.IndexmappingsformultidimensionalformulationoftheDFT andconvolution. IEEETransactionsonAcoustics,Speech,andSignalProcessing ,ASSP-25:239{241,1977. [5]GlendaKLarcombeCarolBergfeldMills,EdithHowellBoteler.Seeingthings inmyhead:Asynesthete'simagesformusicandnotes. Perception,volume32 pages1359{1376,2003. [6]AldoPiccialliGiovanniDePoliCurtisRoads,StephenTravisPope. Musical SignalProcessing .Swets&ZeitlingerB.V,Lisse,Netherlands,1997. [7]MichiganTechDepartmentofPhysics.Musicalsignalfrequencies. http://www. phy.mtu.edu/ ~ suits/notefreqs.html ,2015. [8]ZoharEitanandInbarRothschild.Howmusictouches:Musicalparametersand listenersaudio-tactilemetaphoricalmappings. PsychologyofMusic39 ,pages 449{467,2010. [9]ShannonSteinmetzEllenGethnerandJosephVerbeke.Aviewofmusic.In DouglasMcKennaKellyDelp,CraigS.KaplanandRezaSarhangi,editors, ProceedingsofBridges2015:Mathematics,Music,Art,Architecture,Culture ,pages 289{294,Phoenix,Arizona,2015.TessellationsPublishing.Availableonlineat http://archive.bridgesmathart.org/2015/bridges2015-289.html [10]JaiSamKimNicolaVenezianiGiovanniAloisio,G.CFox.Aconcurrentimplementationoftheprimefactoralgorithmonhypercube. IEEETransactionson SignalProcessing ,39,1991. [11]I.J.Good.Theinteractionalgorithmandpracticalfourieranalysis. Journalof theRoyalStatisticalSociety.SeriesB ,20:361{372,1958. [12]ThomasW.Hungerford. AbstractAlgebraAnIntroduction ,page176.Brooks/Cole,2014. 78


[13]ThomasW.Hungerford. AbstractAlgebraAnIntroduction ,pages169{179. Brooks/Cole,2014. [14]ThomasW.Hungerford.Abstractalgebraanintroduction.page219.Brooks/Cole,2014. [15]JohannesItten.Theartofcolor.page34.Wiley&SonsINC,1973. [16]DebraA.ZellnerJ.MichaelBarbiere,AnaVidal.Thecolorofmusic:correspondencethroughemotion. EmpiricalStudiesOfTheArts,Vol.25 pages193{208,2007. [17]JohnW.TukeyJamesW.Cooley.Analgorithmforthemachinecalculationof complexfourierseries. MathematicsofComputation ,19:397{301,Apr.1965. [18]TristanJehan.Musicalsignalparameterestimation.Master'sthesis,IFSIC, UniversitedeRennes,France,andCenterforNewMusicandAudioTechnologies CNMAT,UniversityofCalifornia,Berkeley,USA,1997. [19]GeorgeH.JobloveandDonaldGreenberg.Colorspacesforcomputergraphics. SIGGRAPH5'thAnnual ,pages20{25,1978. [20]HarryF.JordanandGitaAlaghband.Fundamentalsofparallelprocessing.page 160.PearsonEducation,2003. [21]GiulianoMontiJuanPabloBelloandMarkSandler.Animplementationof automatictranscriptionofmonophonicmusicwithablackboardsystem. Iris SignalsansSystemsConferenceISSC2000 ,2000. [22]GiulianoMontiJuanPabloBelloandMarkSandler.Techniquesforautomatic musictranscription.2000. [23]GuillaumeLeparmentier.Manipulatingcolorsin.net. http://www. codeproject.com/Articles/19045/Manipulating-colors-in-NET-Part 2016. [24]MichalLevy.Giantsteps. http://www.michalevy.com/giant-steps/index. html ,2015. [25]DavidLewin.Generalizedmusicalintervalsandtransformations.pages9{11. OxfordUniversityPress,1edition,2011. [26]StephenMalinowski.Themusicanimationmachine. http://www. kunstderfuge.com/theory/malinowski.htm ,2016. [27]LawrenceE.Marks.Onassociationsoflightandsound,themediationofbrightness,pitchandloudness. TheAmericanJournalofPsychology,Vol.87,No1/2 pages173{188,Nov2016. 79


[28]PaulMasriandAndrewBateman.Improvedmodelingofattacktransientsin musicanalysis-resynthesis.pages100{103,1996. [29]JamesA.Moorer.Onthetranscriptionofmusicalsound. ComputerMusic Journal ,1:32{38,1977. [30]AlanV.Oppenheim.Speechspectographsusingthefastfouriertransform. IEEE Spectrum ,7:57{62,Aug1970. [31]KonstantinaOrlandatou.Soundcharacteristicswhichaectattributesofthe synaestheticvisualexperience. MusicaeScientiae,Vol.19 ,page389401,2015. [32]OttoOrtmann.Theoriesofsynesthesiainthelightofacaseofcolor-hearing. HumanBiology ,5:155{211,May1933. [33]OttoOrtmann.Theroiesofsynesthesiainthelightofacaseofcolor-hearing. HumanBiology ,5:176,May1933. [34]MartinPiszczalskiandBernardA.Galler.Automaticmusictranscription. ComputerMusicJournal ,1:24{31,Nov.1977. [35]CornelPokorny. ComputerGraphicsAnObject-OrientedApproachToTheArt AndScience .Franklin,BeedelandAssociatesIncorporated,1edition,1994. [36]FrankB.WoodRichardE.Cytowic.Synesthesiai.areviewofmajortheories andthierbrainbasis.pages36{49,1982. [37]FrankB.WoodRichardE.Cytowic.Synesthesiaii.psychophysicalrelationsin thesynesthesiaofgeometricallyshapedtasteandcoloredhearing. NinthAnnual MeetingoftheInternationalNeuropsychologicalSociety,BrainandCognition pages36{49,1982. [38]StephenW.Smith. TheScientistsandEngineer'sGuidetoDigitalSignalProcessing .CaliforniaTechnicalPublishing,1997. [39]PraveenSripada.Mp3decoderintheoryandpractice.Master'sthesis,Blekinge InstitudeofTechnology,March2006. [40]CliveTemperton.Implementationofaself-sortingin-placeprimefactorFFT algorithm. JournalofComputationalPhysics ,58:283{299,1985. [41]CliveTemperton.AgeneralizedprimefactorFFTalgorithmfor n =2 p 3 q 5 r SIAM ,13:676{686,May1992. [42]DimitriTymoczko.Thegeometryofmuscialchords. Science ,3130036-8075:72, July2006. [43]DimitriTymoczko. AGeometryofMusic .OxfordUniversityPress,1edition, 2011. 80


[44]DimitriTymoczko.Ageometryofmusic.pages35{36.OxfordUniversityPress, 1edition,2011. [45]DimitriTymoczko.Ageometryofmusic.pages28{32.OxfordUniversityPress, 1edition,2011. [46]DimitriTymoczko.Ageometryofmusic.pages33{34.OxfordUniversityPress, 1edition,2011. [47]C.vanCampen. TheHiddenSense:SynesthesiainArtandScience .Leonardo SeriesCambridge,Mass..MITPress,2008. [48]DennisG.ZillandPatrickD.Shanahan. AFirstCourceinComplexAnalysis withApplications ,page35.JonesandBartlett,2009. 81


APPENDIX A. Source Code

The entire application framework involves several thousand lines of source code, including everything from U/I components to logging utilities. We offer a subset of the specific classes and algorithms most pertinent to our thesis.


A.0.5 MusicalDetect Class

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Toolkit.Media.DSP
{
  /// <summary>
  /// Provides a musical detection class that defines the
  /// detection of some musical note and all its details.
  /// </summary>
  public struct MusicalDetect
  {
    public static readonly MusicalDetect Empty = new MusicalDetect(MusicalNote.Empty, 0f, 0f, 0);

    #region Private
    private long myTimeOn;
    private long myDuration;
    private float myRf;
    private float myAmp;
    private MusicalNote myNote;
    private int myHarmonic;
    #endregion

    /// <summary>Create the detection of a note.</summary>
    /// <param name="note">The note found</param>
    /// <param name="rf">The frequency it was found</param>
    /// <param name="amp">The intensity</param>
    /// <param name="timeOn">The first epoch since start of play it was detected in milliseconds</param>
    public MusicalDetect(MusicalNote note, float rf, float amp, long timeOn)
    {
      myNote = note;
      myRf = rf;
      myAmp = amp;
      myTimeOn = timeOn;
      myHarmonic = (int)Math.Ceiling(Math.Log(rf / note.Fundamental, 2));
      myDuration = 0;
    }

    /// <summary>Create the detection of a note.</summary>
    /// <param name="note">The note found</param>
    /// <param name="rf">The frequency it was found</param>
    /// <param name="amp">The intensity</param>
    /// <param name="timeOn">The first epoch since start of play it was detected in milliseconds</param>
    /// <param name="duration">The duration this note lasts in milliseconds.</param>
    public MusicalDetect(MusicalNote note, float rf, float amp, long timeOn, long duration)
      : this(note, rf, amp, timeOn)
    {
      myDuration = duration;
    }

    #region Properties
    /// <summary>
    /// Get the first observed time of this detection from epoch since
    /// stream start in milliseconds.
    /// </summary>
    public long TimeOn { get { return myTimeOn; } set { myTimeOn = value; } }

    /// <summary>Get the duration of this note in milliseconds.</summary>
    public long Duration { get { return myDuration; } set { myDuration = value; } }

    /// <summary>Get the original sampled rf value.</summary>
    public float Rf { get { return myRf; } }

    /// <summary>
    /// Get the impulse value.


    /// </summary>
    public float Amp { get { return myAmp; } }

    /// <summary>Get the note this matches.</summary>
    public MusicalNote Note { get { return myNote; } }

    /// <summary>Get/Set the optional harmonic if known.</summary>
    public int Harmonic { get { return myHarmonic; } set { myHarmonic = value; } }
    #endregion

    /// <summary>Is this a non detect.</summary>
    public bool IsEmpty
    {
      get { return myNote.IsUnknown && myTimeOn == 0 && myRf == 0 && myAmp == 0; }
    }

    #region Object Overrides
    public override bool Equals(object obj)
    {
      MusicalDetect det = (MusicalDetect)obj;
      return myTimeOn == det.myTimeOn &&
             myRf == det.myRf &&
             myAmp == det.myAmp &&
             myNote.Equals(det.myNote);
    }

    public override int GetHashCode()
    {
      return ToString().GetHashCode();
    }

    public override string ToString()
    {
      return "" + myNote.Note + "(" + Harmonic + "):" + Rf + " " + " On:" + myTimeOn + ", Amp:" + myAmp;
    }
    #endregion
  }
}
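To illustrate the intended use of this structure, the following is a minimal, hypothetical sketch (not part of the framework itself): it wraps a measured 440 Hz peak as a detection of the note A, using the fundamental table from the Music class of Section A.0.7. The amplitude and duration values are placeholders.

// Hypothetical usage sketch: wrap a measured 440 Hz peak as a detection of A.
MusicalNote a = Music.GetNote(eMusicNote.A);                  // fundamental 27.50 Hz
MusicalDetect det = new MusicalDetect(a, 440f, 0.8f, 0, 1000);
Console.WriteLine(det.Harmonic); // ceil(log2(440 / 27.5)) = 4
Console.WriteLine(det);          // note, harmonic, frequency, onset and amplitude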


A.0.6 MusicalNote Class

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Toolkit.Media.DSP
{
  /// <summary>Immutable structure that holds information about a musical note.</summary>
  public struct MusicalNote
  {
    #region Properties
    public static readonly MusicalNote Empty = new MusicalNote(eMusicNote.Unknown, 0, 0);

    private eMusicNote myNote;
    private float myRf;
    private float myTiggerHarmonic;
    #endregion

    public MusicalNote(MusicalNote inOther)
      : this(inOther.myNote, inOther.myTiggerHarmonic, inOther.myRf) { }

    /// <summary>Create the musical note.</summary>
    /// <param name="inNote">The note enumeration</param>
    /// <param name="inTiggerRf">The tigger mapping</param>
    /// <param name="inRf">The corresponding fundamental frequency</param>
    public MusicalNote(eMusicNote inNote, float inTiggerRf, float inRf)
    {
      myNote = inNote;
      myRf = inRf;
      myTiggerHarmonic = inTiggerRf;
    }

    #region Properties
    /// <summary>Get the raw note name</summary>
    public eMusicNote Note { get { return myNote; } }

    /// <summary>Get the fundamental frequency for this note.</summary>
    public float Fundamental { get { return myRf; } }

    /// <summary>Get the calculated Tigger Rf</summary>
    public float TiggerHarmonic { get { return myTiggerHarmonic; } }
    #endregion

    #region Utility and Transforms
    /// <summary>Generate (step) harmonics above this tone.</summary>
    public MusicalDetect Harmonic(int step)
    {
      float newRf = (float)(myRf * Math.Pow(2.0, (float)step));
      MusicalDetect md = new MusicalDetect(this, newRf, 1, 0);
      md.Harmonic = step;
      return md;
    }
    #endregion

    #region Object Overrides
    public bool IsUnknown { get { return myNote == eMusicNote.Unknown && myRf == 0; } }

    public override string ToString()
    {
      return "" + myNote + " :" + myRf + ":" + myTiggerHarmonic + "";
    }


    public override bool Equals(object obj)
    {
      MusicalNote mn = (MusicalNote)obj;
      return myNote == mn.myNote &&
             myRf == mn.myRf &&
             myTiggerHarmonic == mn.myTiggerHarmonic;
    }

    public override int GetHashCode()
    {
      return ToString().GetHashCode();
    }
    #endregion
  }
}
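A brief sketch of the Harmonic transform above, with hypothetical values: raising the C fundamental by three octaves doubles the frequency three times.

// Hypothetical usage sketch: raise C (16.35 Hz) by three octaves.
MusicalDetect c3 = Music.C0n.Harmonic(3);
Console.WriteLine(c3.Rf);       // 16.35 * 2^3 = 130.8 Hz
Console.WriteLine(c3.Harmonic); // 3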


A.0.7 Music Utility Class

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing;
using Core.Drawing;
using Core.Mathematics;
using Core.Mathematics.Algebra;
using Core.Mathematics.Stats;
using Core.Mathematics.Analysis;
using System.Diagnostics;

namespace Toolkit.Media.DSP
{
  /// <summary>
  /// Provides a table of fundamental frequencies for musical tones.
  /// </summary>
  public class Music
  {
    private static readonly Random ourRandom = new Random();

    #region Performance Testing Constants
    public static bool UseWeakDetect = false;

    /// <summary>Always has the runtime (ms) of the most recent call to Guess.</summary>
    public static long GuessTime = 0;
    #endregion

    #region Private Constants
    /// <summary>Map of unique integers to musical notes.</summary>
    private static readonly Dictionary<int, MusicalNote> ourTiggerMap;

    /// <summary>Map of integer fundamental frequencies to the note.</summary>
    private static readonly Dictionary<int, MusicalNote> ourIntegerFund;

    /// <summary>Base color value to a tigger value.</summary>
    private static readonly Dictionary<int, int> ourTiggerColorMap;

    /// <summary>Consonance delta. Difference between C/C#</summary>
    private static readonly float DISONANCE_1 = 100f;
    private static readonly float DISONANCE_2 = 150f;
    #endregion

    #region Private Static Util
    /// <summary>Create the full harmonic set for the given sample rate.</summary>
    private static float[] CreateHarmonic(int sampleRate)
    {
      // Create all fundamental and harmonics
      List<float> rfs = new List<float>();
      int max = (int)Math.Ceiling(Math.Log(sampleRate / 2.0f / Music.Min, 2.0f));

      // For each Rf generate each harmonic.
      for (int r = 0; r < Music.Notes.Length; r++)
      {
        for (int k = 0; k < max; k++)
        {
          float rf = (float)(Music.Notes[r].Fundamental * Math.Pow(2.0f, k));
          if (rf < sampleRate / 2.0f)


            rfs.Add((float)(Music.Notes[r].Fundamental * Math.Pow(2.0f, k)));
        }
      }
      return rfs.ToArray();
    }
    #endregion

    #region Static Constructor
    static Music()
    {
      ourTiggerMap = new Dictionary<int, MusicalNote>();
      for (int i = 0; i < Notes.Length; i++)
        ourTiggerMap.Add(TiggerHarmonic(Notes[i].Fundamental), Notes[i]);

      // Create fundamentals
      Fundamentals = new float[Notes.Length];
      for (int i = 0; i < Notes.Length; i++) Fundamentals[i] = Notes[i].Fundamental;

      // Create harmonics.
      Harmonics = new Dictionary<int, float[]>();
      for (int i = 0; i < KnownSampleRates.Rates.Length; i++)
      {
        float[] vals = CreateHarmonic((int)KnownSampleRates.Rates[i]);
        Harmonics.Add((int)KnownSampleRates.Rates[i], vals);
      }

      // Create integer fundamentals.
      ourIntegerFund = new Dictionary<int, MusicalNote>();
      for (int i = 0; i < Notes.Length; i++)
        ourIntegerFund.Add((int)Notes[i].Fundamental, Notes[i]);

      // Create tigger color map
      ourTiggerColorMap = new Dictionary<int, int>();
      ourTiggerColorMap.Add(TiggerHarmonic(C0), Color.Blue.ToArgb());
      ourTiggerColorMap.Add(TiggerHarmonic(Cs0), Color.Blue.ToArgb());
      ourTiggerColorMap.Add(TiggerHarmonic(D0), Color.Red.ToArgb());
      ourTiggerColorMap.Add(TiggerHarmonic(Ds0), Color.Red.ToArgb());
      ourTiggerColorMap.Add(TiggerHarmonic(E0), Color.Yellow.ToArgb());
      ourTiggerColorMap.Add(TiggerHarmonic(F0), Color.Brown.ToArgb());
      ourTiggerColorMap.Add(TiggerHarmonic(Fs0), Color.Brown.ToArgb());
      ourTiggerColorMap.Add(TiggerHarmonic(G0), Color.Green.ToArgb());
      ourTiggerColorMap.Add(TiggerHarmonic(Gs0), Color.Green.ToArgb());
      ourTiggerColorMap.Add(TiggerHarmonic(A0), Color.Green.ToArgb());
      ourTiggerColorMap.Add(TiggerHarmonic(As0), Color.Green.ToArgb());
      ourTiggerColorMap.Add(TiggerHarmonic(B0), Color.Black.ToArgb());
    }
    #endregion

    #region Public Constants
    /// <summary>Maximum Fundamental Rf in the table.</summary>
    public const float Max = B0;

    /// <summary>Minimum Fundamental Rf in the table.</summary>
    public const float Min = C0;

    public static readonly MusicalNote C0n = new MusicalNote(eMusicNote.C, Music.TiggerHarmonic(C0), C0);
    public static readonly MusicalNote Cs0n = new MusicalNote(eMusicNote.Cs, Music.TiggerHarmonic(Cs0), Cs0);
    public static readonly MusicalNote D0n = new MusicalNote(eMusicNote.D, Music.TiggerHarmonic(D0), D0);
    public static readonly MusicalNote Ds0n = new MusicalNote(eMusicNote.Ds, Music.TiggerHarmonic(Ds0), Ds0);
    public static readonly MusicalNote E0n = new MusicalNote(eMusicNote.E, Music.TiggerHarmonic(E0), E0);
    public static readonly MusicalNote F0n = new MusicalNote(eMusicNote.F, Music.TiggerHarmonic(F0), F0);
    public static readonly MusicalNote Fs0n = new MusicalNote(eMusicNote.Fs, Music.TiggerHarmonic(Fs0), Fs0);
    public static readonly MusicalNote G0n = new MusicalNote(eMusicNote.G, Music.TiggerHarmonic(G0), G0);
    public static readonly MusicalNote Gs0n = new MusicalNote(eMusicNote.Gs, Music.TiggerHarmonic(Gs0), Gs0);
    public static readonly MusicalNote A0n = new MusicalNote(eMusicNote.A, Music.TiggerHarmonic(A0), A0);
    public static readonly MusicalNote As0n = new MusicalNote(eMusicNote.As, Music.TiggerHarmonic(As0), As0);


    public static readonly MusicalNote B0n = new MusicalNote(eMusicNote.B, Music.TiggerHarmonic(B0), B0);

    /// <summary>List of all Rf's for musical tones starting with C -> B.</summary>
    public static readonly MusicalNote[] Notes =
    {
      C0n, Cs0n, D0n, Ds0n, E0n, F0n, Fs0n, G0n, Gs0n, A0n, As0n, B0n,
    };

    /// <summary>Has the list of fundamental frequencies in order from C to B.</summary>
    public static readonly float[] Fundamentals;

    /// <summary>
    /// A map that contains a mapping from the sample rate to float arrays each of which are the
    /// set of fundamentals and 11 harmonics in order.
    /// Key := The sample rate. Value := The frequencies.
    /// </summary>
    public static readonly Dictionary<int, float[]> Harmonics;
    #endregion

    #region Characterization Methods
    /// <summary>Execute the Tigger harmonic theorem which maps a frequency to a unique integer.</summary>
    public static int TiggerHarmonic(float f)
    {
      return (int)(100.0f * (f / Math.Pow(2, Math.Floor(Math.Log(f, 2))) - 1.0f));
    }

    /// <summary>Use the Tigger harmonic to determine what note corresponds to the given frequency.</summary>
    public static MusicalNote Characterize(float f)
    {
      int th = TiggerHarmonic(f);
      if (ourTiggerMap.ContainsKey(th)) return ourTiggerMap[th];

      if (!UseWeakDetect)
      {
        if (ourIntegerFund.ContainsKey((int)f)) return ourIntegerFund[(int)f];
      }

      float sm = float.MaxValue;
      int smI = -1;
      for (int i = 0; i < Notes.Length; i++)
      {
        float t = Math.Abs(Notes[i].TiggerHarmonic - th);
        if (t < sm)
        {
          sm = t;
          smI = i;
        }
      }
      return Notes[smI];
    }

    /// <summary>
    /// Determine the presence of which notes exist in the given input stream of
    /// samples until end of stream is reached.
    /// </summary>
    /// <param name="X">The sequence of sample values.</param>
    /// <param name="offset">The starting offset to execute in X</param>
    /// <param name="sampleRate">The sample rate</param>


/// < paramname="stats" > Thecurrentrunningstatisticsornulltocompute. < /param > publicstatic MusicalDetect[]GuessNotesZ2[]X, int offset, int sampleRate f //Addthesamplerateifwedon'talreadyknowaboutit. if !Harmonics.ContainsKeysampleRateHarmonics.AddsampleRate,CreateHarmonicsampleRate; List < MusicalDetect > notes= new List < MusicalDetect > ; float N=sampleRate; float []freqs=Harmonics[sampleRate]; int i=offset; float max=0; while i < X.Length f FourierTransform.RfFFTX,sampleRate,i,freqs,eWindowType.Hanning; //Computemax f X g andnoise max=0; for int m=0;m < sampleRate;m++ f float aa=X[i+m].B.Magnitude; if aa > max max=aa; g //Findallvalueswhosepeaktosignalislargeenough. for int k=0;k < sampleRate;k++ f float m=X[i+k].B.Magnitude; float d=m/max; if d > SoundParameters.RF DETECT FULL AMP f MusicalDetectdet= new MusicalDetectCharacterizek,k,m, long double i/ double sampleRate 1000.0d; //Staticallyaddedfornow. det.Duration=1000; notes.Adddet; g g i+=sampleRate; g return notes.ToArray; g /// < summary > ///Computetheconsonancefactor. ///Returnsavaluefrom0to1with1beingthemostconsonanttone. /// < /summary > /// < returns > Avaluefrom0to1with1beingcompletelyconsonant. < /returns > publicstaticfloat ConsonanceChordch f if ch.Count==1 return 1; float aveDelta=0; for int i=0;i < ch.Count;i++ f for int j=0;j < ch.Count;j++ aveDelta+=Math.Absch[i].Rf )]TJ/F19 5.9776 Tf 10.169 0 Td [(ch[j].Rf; g aveDelta=Math.AbsaveDelta/ float ch.Count ch.Count )]TJ/F19 5.9776 Tf 5.557 0 Td [(ch.Count; if aveDelta > DISONANCE 2 return 1; return float 2.0f aveDelta )]TJ/F19 5.9776 Tf 9.517 0 Td [(DISONANCE 1/2.0f DISONANCE 2 )]TJ/F19 5.9776 Tf 9.517 0 Td [(DISONANCE 1; g /// < summary > ///GettheZ12elementforaspecificnote. /// < /summary > publicstaticint Z12eMusicNotenote f switch note f case eMusicNote.C: return 0; case eMusicNote.Cs: return 1; case eMusicNote.D: return 2; case eMusicNote.Ds: return 3; case eMusicNote.E: return 4; case eMusicNote.F: return 5; case eMusicNote.Fs: return 6; 90


        case eMusicNote.G: return 7;
        case eMusicNote.Gs: return 8;
        case eMusicNote.A: return 9;
        case eMusicNote.As: return 10;
        case eMusicNote.B: return 11;
      }
      return -1;
    }

    /// <summary>Convert a Z12 element to a pitch scale element.</summary>
    public static eMusicNote PitchScale(int ps)
    {
      switch (ps)
      {
        case 0: return eMusicNote.C;
        case 1: return eMusicNote.Cs;
        case 2: return eMusicNote.D;
        case 3: return eMusicNote.Ds;
        case 4: return eMusicNote.E;
        case 5: return eMusicNote.F;
        case 6: return eMusicNote.Fs;
        case 7: return eMusicNote.G;
        case 8: return eMusicNote.Gs;
        case 9: return eMusicNote.A;
        case 10: return eMusicNote.As;
        case 11: return eMusicNote.B;
      }
      return eMusicNote.Unknown;
    }

    /// <summary>
    /// Determine a value from 0 to 1 that is how much a chord changes in tonal structure, meaning
    /// 1 is a big change.
    /// </summary>
    /// <remarks>A value from 0 to 1.</remarks>
    public static float ChangeOfChord(Chord c1, Chord c2)
    {
      int m = Math.Max(c1.Count, c2.Count);

      // Preparation. This could be done in one loop, but I'm tired.
      // Nest both arrays in one array.
      float[] X = new float[m * 2];
      for (int i = 0; i < c1.Count; i++) X[i] = Z12(c1[i].Note.Note);
      for (int i = 0; i < c1.Count - m; i++) X[c1.Count + i] = 0;
      for (int i = 0; i < c2.Count; i++) X[m + i] = Z12(c2[i].Note.Note);
      for (int i = 0; i < c2.Count - m; i++) X[m + i] = 0;

      // Calculation |x(t) - x(t+m)|
      float[] Y = new float[m];
      for (int t = 0; t < m; t++) Y[t] = Math.Abs(X[t] - X[t + m]);

      // Compute std.
      float exp = ExtraStats.Mean(Y);
      float std = ExtraStats.StdDev(Y);
      if (exp == 0) return 0;
      return 1f - Math.Abs(std - exp) / exp;
    }

    /// <summary>Compute the magnitude of a chord.</summary>
    public static float Magnitude(Chord c1)
    {


      float f = 0;
      float N = c1.Count;
      for (int i = 0; i < c1.Count; i++)
      {
        for (int j = 0; j < c1.Count; j++)
          f += c1[i].Rf * (c1[i].Harmonic + 1);
      }
      return (1.0f / (N * N)) * f;
    }

    /// <summary>Compute the harmonic centricity (theta_H) of a chord.</summary>
    public static float HarmonicCentricity(Chord c)
    {
      float t = 0;
      float N = c.Count;
      for (int i = 0; i < c.Count; i++) t += c[i].Harmonic + 1;
      return t / N;
    }

    /// <summary>Compute the fundamental centricity (theta_F) of a chord.</summary>
    public static float FundamentalCentricity(Chord c)
    {
      float t = 0;
      float N = c.Count;
      for (int i = 0; i < c.Count; i++) t += c[i].Note.Fundamental;
      return t / N;
    }
    #endregion

    #region Musical Tone Rf Values
    public const float C0 = 16.35f;
    public const float Cs0 = 17.32f;
    public const float D0 = 18.35f;
    public const float Ds0 = 19.45f;
    public const float E0 = 20.60f;
    public const float F0 = 21.83f;
    public const float Fs0 = 23.12f;
    public const float G0 = 24.50f;
    public const float Gs0 = 25.96f;
    public const float A0 = 27.50f;
    public const float As0 = 29.14f;
    public const float B0 = 30.87f;
    #endregion

    #region Tone Generation
    /// <summary>Retrieve the musical note for the given enumeration at the fundamental.</summary>
    public static MusicalNote GetNote(eMusicNote n)
    {
      for (int i = 0; i < Notes.Length; i++)
      {
        if (Notes[i].Note == n) return Notes[i];
      }
      return MusicalNote.Empty;
    }

    /// <summary>Get harmonics for all notes</summary>
    /// <param name="s">The harmonic value of the notes to retrieve.</param>
    /// <returns>A list of all notes at the given harmonic</returns>
    public static MusicalDetect[] GetHarmonics(int s)
    {
      List<MusicalDetect> l = new List<MusicalDetect>();
      for (int i = 0; i < Notes.Length; i++) l.Add(Notes[i].Harmonic(s));
      return l.ToArray();
    }

    /// <summary>Generate a random sound of the given duration in seconds.</summary>
    public static RandomSound RandomSound(int dur, int sampleRate, int floorAmp)
    {
      Synthesizer st = new Synthesizer(sampleRate);
      List<MusicalDetect> notes = new List<MusicalDetect>();


      int i = 0;
      float Nyquist = (float)sampleRate / 2.0f;
      while (i < dur)
      {
        int step = 1 + ourRandom.Next(2);
        MusicalNote note = Notes[ourRandom.Next(Notes.Length)];
        int harm = ourRandom.Next(12);
        float freq = (float)(note.Fundamental * Math.Pow(2.0f, harm));
        while (freq > Nyquist) freq = (float)(note.Fundamental * Math.Pow(2.0f, --harm));
        float amp = (float)(floorAmp + ourRandom.NextDouble() * 10000.0f);

        for (int k = 0; k < step; k++)
        {
          MusicalDetect md = new MusicalDetect(note, freq, amp, (int)((i + k) * 1000.0f));
          st.Add(md.Rf, md.Amp, 1);
          notes.Add(md);
        }
        i += step;
      }
      return new RandomSound(sampleRate, st.ToArray(), notes.ToArray());
    }

    /// <summary>Utility method to generate a bogus chord from Z12 values.</summary>
    /// <returns>A chord of fundamental tones</returns>
    public static Chord MakeChord(int[] values)
    {
      Chord ch = new Chord();
      for (int i = 0; i < values.Length; i++)
      {
        MusicalNote n = Music.GetNote(Music.PitchScale(values[i]));
        ch.Add(new MusicalDetect(n, n.Fundamental, 10, 0));
      }
      return ch;
    }
    #endregion

    #region Color Mapping
    /// <summary>Provides a color mapped from the given frequency value.</summary>
    /// <param name="inTone">The tigger harmonic value</param>
    /// <param name="noise">The noise sample, a value from 0 to 1 with 1 being max noise</param>
    public static Color TiggerStripes(MusicalDetect inTone, float noise)
    {
      Color Na = Color.FromArgb(ourTiggerColorMap[(int)inTone.Note.TiggerHarmonic]);
      float t = noise;
      float Ha = (float)((inTone.Harmonic + 1f) / 12.0f % 1.00001f);
      Color Fa;
      Color g = Color.FromArgb(128, 128, 128);

      if (inTone.Rf <= 50) Fa = Color.Black;
      else if (inTone.Rf >= 50 && inTone.Rf <= 700) Fa = Color.White;
      else Fa = Color.Yellow;

      // (Na diamond Fa) -> Ha
      // However, we use the intensity as the harmonic in HSV.
      Color left = Fa.Add(Na);
      left = ExtraColor.HSVtoRGB(left.GetHue(), 1f - Ha, Ha);
      return left.Mul(1.0f - t).AddRGB(g.Mul(t));
    }
    #endregion
  }

  #region Additional Structures/Classes


  /// <summary>A musical note.</summary>
  public enum eMusicNote
  {
    C = 0, Cs = 1, D = 2, Ds = 3, E = 4, F = 5,
    Fs = 6, G = 7, Gs = 8, A = 9, As = 10, B = 11,
    Unknown = 100,
  }

  /// <summary>Utility class that contains a set of PCM and the corresponding notes.</summary>
  public struct RandomSound
  {
    public Z2[] Samples;
    public MusicalDetect[] Notes;
    public int SampleRate;

    /// <summary>Create a random sound.</summary>
    public RandomSound(int sampleRate, Z2[] samples, MusicalDetect[] det)
    {
      Samples = samples;
      Notes = det;
      SampleRate = sampleRate;
    }
  }
  #endregion
}
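The characterization and chord utilities above can be exercised together. The following is a hedged, hypothetical sketch assuming standard-tuning input; the expected note for 440 Hz is A, since the Tigger harmonic of a tone is octave-invariant.

// Hypothetical usage sketch: characterize a frequency, then build a C major
// triad from the Z12 elements {0, 4, 7} and score its consonance.
MusicalNote n = Music.Characterize(440f);               // expected: eMusicNote.A
Chord cMajor = Music.MakeChord(new int[] { 0, 4, 7 });  // C, E, G fundamentals
Console.WriteLine(n.Note);
Console.WriteLine(Music.Consonance(cMajor));            // value in [0, 1]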


A.0.8 Chord Class

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Toolkit.Media.DSP
{
  /// <summary>
  /// Provides a structure that manages a set of notes defining some musical chord.
  /// </summary>
  public class Chord : List<MusicalDetect>
  {
    public static int OWNS = -2;
    public static int NONE = -1;

    #region Constructor
    public Chord() { }
    #endregion

    #region Utility Methods
    /// <summary>
    /// Ask if the given musical detect matches something in this chord.
    /// Match Criteria:
    /// (1) Same note, same harmonic
    /// (2) Same time duration.
    /// </summary>
    /// <returns>
    /// Chord.NONE : no match.
    /// Chord.OWNS : belongs to chord, but no direct match.
    /// index : the index of the matched note.
    /// </returns>
    public int BelongsTo(MusicalDetect md)
    {
      // First see if matching note.
      for (int i = 0; i < Count; i++)
      {
        MusicalDetect tst = this[i];
        if (tst.Note.Note == md.Note.Note && tst.Harmonic == md.Harmonic)
          return i;
      }

      // Now see if times overlap.
      for (int i = 0; i < Count; i++)
      {
        MusicalDetect tst = this[i];
        if (md.TimeOn >= tst.TimeOn && md.TimeOn <= tst.TimeOn + tst.Duration)
          return OWNS;
      }
      return NONE;
    }

    /// <summary>Returns this chord with all notes raised by the given harmonic.</summary>
    public Chord GetHarmonic(int h)
    {
      Chord ch = new Chord();
      for (int i = 0; i < Count; i++)
      {
        MusicalDetect cur = this[i];
        MusicalNote mn = Music.GetNote(cur.Note.Note);
        float newRf = (float)(mn.Fundamental * Math.Pow(2.0f, cur.Harmonic + h));
        ch.Add(new MusicalDetect(mn, newRf, cur.Amp, cur.TimeOn));
      }
      return ch;
    }
    #endregion


    #region Properties
    #endregion

    public override string ToString()
    {
      StringBuilder sb = new StringBuilder();
      sb.Append(" { ");
      for (int i = 0; i < Count; i++)
      {
        if (i != 0) sb.Append(",");
        sb.Append("(" + this[i].TimeOn + " - " + this[i].Duration + ")" + this[i].Note.Note + "(" + this[i].Harmonic + ")");
      }
      sb.Append(" } ");
      return sb.ToString();
    }
  }
}
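A small, hypothetical sketch of the BelongsTo match criteria: an exact note/harmonic match returns its index, while a different note whose onset falls inside an existing note's duration returns Chord.OWNS.

// Hypothetical usage sketch: membership tests against a one-note chord.
Chord ch = Music.MakeChord(new int[] { 0 });        // chord holding C
MusicalDetect c = ch[0];
Console.WriteLine(ch.BelongsTo(c));                 // 0: same note, same harmonic
MusicalDetect e = new MusicalDetect(Music.E0n, Music.E0, 10, 0);
Console.WriteLine(ch.BelongsTo(e) == Chord.OWNS);   // true: overlaps in time only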


A.0.9 Chord Detection Class

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Toolkit.Media.DSP
{
  /// <summary>
  /// Provides a chord detection system that keeps track of a chord's structure over
  /// time as new notes are added. Although there is only one chord supplied we
  /// provide an array of chords for expansion into later implementations.
  /// </summary>
  public class ChordDetector
  {
    #region Private
    // The current time step in units of the minimum duration.
    // Incremented on each call to NewNotes.
    private long myTimeStep = 0;
    private List<Chord> myChords = new List<Chord>();
    private int myRecent = -1;
    #endregion

    #region Algorithm
    /// <summary>
    /// Assumption: let n be any existing note, T its time onset and D
    /// the last known duration of the note. StepTime is a counter that holds each
    /// interval of minimum duration that has passed.
    /// (1) If Tn + Dn < StepTime for any note, remove that note.
    /// (2) If all notes expired, kill the chord.
    /// </summary>
    private void Expiration()
    {
      for (int i = 0; i < myChords.Count; i++)
      {
        Chord c = myChords[i];
        for (int j = 0; j < c.Count; j++)
        {
          MusicalDetect md = c[j];
          // Check to see if the note has expired.
          long expireTime = md.TimeOn + md.Duration;
          if (expireTime <= myTimeStep)
          {
            c.RemoveAt(j);
            j--;
          }
        }
        if (c.Count == 0)
        {
          myChords.RemoveAt(i);
          i--;
        }
      }
    }
    #endregion

    #region Constructors
    /// <summary>Create the chord detection.</summary>
    public ChordDetector() { }
    #endregion

    #region Utility
    /// <summary>
    /// Add a new note to the chord system.


    /// </summary>
    public void NewNotes(MusicalDetect[] newNotes)
    {
      // Refresh all new notes that already exist in chords.
      for (int i = 0; i < newNotes.Length; i++)
      {
        MusicalDetect match = newNotes[i];
        newNotes[i] = match;
        bool found = false;

        // Refresh:
        for (int j = 0; j < myChords.Count && !found; j++)
        {
          Chord ch = myChords[j];
          int foundIdx = ch.BelongsTo(match);

          // If the note belongs to the chord.
          if (foundIdx != Chord.NONE)
          {
            // This note belongs to the chord in time but that note isn't there.
            if (foundIdx == Chord.OWNS)
            {
              ch.Add(match);
              myRecent = j;
              found = true;
            }
            // The note is in the chord already.
            else
            {
              MusicalDetect md = ch[foundIdx];
              md.Duration += match.Duration;
              ch[foundIdx] = md;
              myRecent = j;
              found = true;
            }
          }
        }

        // Residuals:
        if (!found)
        {
          Chord ch = new Chord();
          ch.Add(match);
          myChords.Add(ch);
          myRecent = myChords.Count - 1;
        }
      }

      Expiration();
      for (int i = 0; i < newNotes.Length; i++)
      {
        if (newNotes[i].TimeOn > myTimeStep) myTimeStep = newNotes[i].TimeOn;
      }
    }

    /// <summary>Get all currently held chords.</summary>
    public Chord[] GetChords()
    {
      return myChords.ToArray();
    }

    /// <summary>Remove all chords.</summary>
    public void Clear()
    {
      myChords.Clear();
      myRecent = -1;
    }
    #endregion

    #region Properties
    /// <summary>Get the most recently seen chord.</summary>
    /// <value>null if no chord exists.</value>
    public Chord Latest
    {
      get
      {
        if (myRecent == -1) return null;
        return myChords[myRecent];
      }
    }
    #endregion

    public override string ToString()
    {
      StringBuilder sb = new StringBuilder();


      for (int i = 0; i < myChords.Count; i++) sb.Append(myChords[i].ToString() + " \r\n");
      return sb.ToString();
    }
  }
}
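A minimal sketch of the detector's accumulate/expire cycle, using hypothetical detections: two notes with the same onset land in the same chord because the second overlaps the first in time.

// Hypothetical usage sketch: feed two simultaneous notes, read back the chord.
ChordDetector cd = new ChordDetector();
MusicalDetect c = new MusicalDetect(Music.C0n, Music.C0, 10, 0, 1000);
MusicalDetect g = new MusicalDetect(Music.G0n, Music.G0, 10, 0, 1000);
cd.NewNotes(new MusicalDetect[] { c, g });
Console.WriteLine(cd.Latest); // { (0 - 1000)C(0),(0 - 1000)G(0) }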


A.0.10 Sound Processing Bus Class

using System;
using System.IO;
using System.Threading;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Core.IO;
using Core.Collections;
using Core.Forms;
using Core.Util;
using Core.Mathematics.Algebra;
using NAudio.Wave;
using Toolkit.Media.DSP;
using System.Diagnostics;

namespace Toolkit.Media.DSP
{
  /// <summary>Identifies the current state of the sound bus</summary>
  public enum eSoundBusPhase
  {
    /// <summary>
    /// The bus played the entire stream, or something was killed, or an error occurred.
    /// </summary>
    PlayEnded,

    /// <summary>The sound bus has received a PCM stream and is starting play</summary>
    PlayStarted,

    /// <summary>The sound bus has received a request to pause play.</summary>
    PlayPaused,
  }

  /// <summary>Delegate notified of the current sound bus phase</summary>
  public delegate void dSoundBusPhase(eSoundBusPhase phase);

  /// <summary>Delegate that is notified of clock ticks as the sound bus processes data.</summary>
  /// <param name="timeNs">The current processing time in nanoseconds</param>
  /// <param name="step">The definition of this tick step</param>
  public delegate void dSoundBusTick(DSPStatistics timeNs, eTickStep step);

  /// <summary>Delegate notified of each read sample.</summary>
  public delegate void dSampleNotice(Z2[] inSamp);

  /// <summary>
  /// Provides a foundation utility class that can read input files or
  /// from a microphone and plays to the speaker as well as
  /// forwarding raw data to processors for any purpose.
  /// Note: This class is threaded.
  /// </summary>


  public class SoundBus
  {
    #region Private Members
    // If true then we suck in PCM forever (a microphone source or something).
    private bool myPlayForever = false;
    private ePlayerState myState = Core.Forms.ePlayerState.None;
    private DSPStatistics myTimeStats = new DSPStatistics();
    private Object myMutex = new Object();
    private SoundBusPluginQueue myPlugins = new SoundBusPluginQueue();
    private Thread myPlayThread;
    private Stream myInputPCMStream;
    private PCMInfo myInfo;
    // Used for reading a sample 1 at a time.
    private byte[] myWorkspaceBuffer;
    // Provider of wave data to sound card.
    private BufferedWaveProvider myProvider = null;
    // Total number of bytes read from input PCM stream.
    private double myProcessedBytes = 0;
    // Thread use only (for stopping).
    private bool myThIsPlaying = false;
    #endregion

    #region Event Methods
    private void FirePhaseChange(eSoundBusPhase inPh)
    {
      if (PhaseChanged != null) PhaseChanged(inPh);
    }

    private void FireProgressChange(String inMsg, int min, int max, int val)
    {
      if (ProgressChanged != null) ProgressChanged(inMsg, min, max, val);
    }

    private void FireTick(DSPStatistics timeNs, eTickStep step)
    {
      try { if (Tick != null) Tick(timeNs, step); }
      catch (Exception ex) { Log.Error(ex); }
    }

    private void FireSampleRead(Z2[] samp)
    {
      if (SampleRead != null) SampleRead(samp);
    }
    #endregion

    #region Threads
    /// <summary>
    /// This will attempt to buffer a good number of samples for playing.
    /// If we are a wave file input (myPlayForever = false) then
    /// buffer enough data to the card to match up. Otherwise just grab
    /// one sample at a time.
    /// </summary>
    /// <param name="sec">How many seconds to buffer.</param>
    /// <returns>true if something to process</returns>
    private bool BufferSeconds(Stream inStream, ref Z2[] outSamples, int sec)
    {
      int neededSamples = (int)(myInfo.SampleRate * sec);

      // Re-shape temp buffer if needed.
      if (myWorkspaceBuffer == null || myWorkspaceBuffer.Length < neededSamples)
        myWorkspaceBuffer = new byte[(int)(myInfo.SampleSize * myInfo.SampleRate)];

      Array.Clear(outSamples, 0, outSamples.Length);
      Array.Clear(myWorkspaceBuffer, 0, myWorkspaceBuffer.Length);

      int r = 0;
      int dr = 0;
      int offset = 0;
      int bytesToRead = (int)(myInfo.SampleRate * myInfo.SampleSize);
      int total = 0;

      if (myWorkspaceBuffer.Length < bytesToRead)
        throw new ArgumentException("Not enough buffer space in 'inSpace'");
      if (outSamples.Length < myInfo.SampleRate * sec)
        throw new ArgumentException("Not enough buffer space in 'inTo'");


      // Read n seconds of data.
      for (int i = 0; i < sec; i++)
      {
        // Read the amount we need.
        while (r < bytesToRead)
        {
          dr = inStream.Read(myWorkspaceBuffer, r, bytesToRead - r);
          if (dr == 0) break;
          r += dr;
        }

        if (r > 0)
        {
          myProvider.AddSamples(myWorkspaceBuffer, 0, r);
          MediaUtils.CreateSamples(ref outSamples, myWorkspaceBuffer, 0, (int)myInfo.SampleRate, offset, myInfo);
          myTimeStats.AddA(outSamples, 0, (int)(r / myInfo.SampleSize));
          myProcessedBytes += r;
        }
        total += r;
        r = 0;
        offset += (int)myInfo.SampleRate;
      }
      return total > 0;
    }

    /// <summary>The main thread that plays the current audio stream.</summary>
    private void PlayCorrectlyThread()
    {
      myState = ePlayerState.Playing;
      WaveOut wavePlayer = null;
      try
      {
        GameTime gt = new GameTime();
        myThIsPlaying = true;
        wavePlayer = new WaveOut();
        ((WaveOut)wavePlayer).DesiredLatency = (int)SoundParameters.LATENCY;
        myState = Core.Forms.ePlayerState.Playing;
        FirePhaseChange(eSoundBusPhase.PlayStarted);

        myProvider = new BufferedWaveProvider(myInfo.ToWaveFormat());
        wavePlayer.Init(myProvider);
        wavePlayer.Play();

        // Add padding to match block alignment.
        myTimeStats.Clear();
        myTimeStats.ResetInterval = (int)myInfo.SampleRate;

        // Initialize all the plugins with the wave information.
        for (int k = 0; k < myPlugins.Length; k++) myPlugins[k].Initialize(this, myTimeStats, myInfo);

        bool dataAvail = false;
        double myTotalBytes = myPlayForever ? double.MaxValue : myInputPCMStream.Length;
        FireProgressChange("Processing Signal ...", 0, 0, 0);
        DateTime nextStatus = DateTime.UtcNow.Add(new TimeSpan(0, 0, 3));
        int bufferedTimeS = 1;
        float targetPaintInterval = 1f / 60f * 1000f;
        float paintInterval = targetPaintInterval;
        float paintCnt = 0;

        // Do 1 second buffer.
        Z2[] oneSecondBuffer = new Z2[(int)myInfo.SampleRate];

        while (true)
        {
          if (myState == Core.Forms.ePlayerState.Paused)
          {
            while (myState == Core.Forms.ePlayerState.Paused) Thread.Sleep(50);
          }
          if (myState == Core.Forms.ePlayerState.Stopped) break;

          // We have (real-time) mic mode and playing-from-file mode.
          // This is for playing from a file. We need to keep the sound card in sync.
          if (!myPlayForever)
          {
            if (myProvider.BufferedDuration < new TimeSpan(0, 0, 0, 1, 300))
              dataAvail = BufferSeconds(myInputPCMStream, ref oneSecondBuffer, bufferedTimeS);

            // No data so stop.
            if (!dataAvail && myProvider.BufferedBytes == 0) break;


          }
          else
            dataAvail = BufferSeconds(myInputPCMStream, ref oneSecondBuffer, bufferedTimeS);

          // Update game time.
          gt.UpGame(1000);

          // Did we read anything new.
          if (dataAvail)
          {
            myPlugins.Add(oneSecondBuffer);
            FireSampleRead(oneSecondBuffer);
          }

          // Wait for real time to catch up.
          while (myProvider.BufferedDuration > new TimeSpan(0, 0, 0, 0, 300))
          {
            FireTick(myTimeStats, eTickStep.FPS30_Tick);
            paintCnt++;
            Thread.Sleep((int)paintInterval);
          }

          // Status/progress update.
          if (DateTime.UtcNow > nextStatus)
          {
            int p = (int)(myProcessedBytes / myTotalBytes * 100.0d);
            FireProgressChange("Processing Signal ...", 0, 100, p);
            paintInterval = paintInterval * (paintCnt / 180f);
            if (paintInterval > targetPaintInterval) paintInterval = targetPaintInterval;
            else if (paintInterval == 0) paintInterval = 1f;
            nextStatus = DateTime.UtcNow.Add(new TimeSpan(0, 0, 3));
          }

          while (gt.Schedule > 0)
          {
            FireTick(myTimeStats, eTickStep.FPS30_Tick);
            paintCnt++;
            Thread.Sleep(1);
          }
          gt.Clear();
        }
      }
      catch (ThreadInterruptedException) { }
      catch (ThreadAbortException) { }
      catch (IOException ex)
      {
        Log.Status(ex.ToString());
        if (ProgressChanged != null) ProgressChanged("PCM Stream ended " + ex.Message, 0, 0, 0);
      }
      catch (Exception ex)
      {
        Log.Error(ex);
        if (ProgressChanged != null) ProgressChanged("Yipes! Something went wrong " + ex.Message, 0, 0, 0);
      }
      finally
      {
        myState = ePlayerState.Stopped;
        try
        {
          if (myInputPCMStream != null) myInputPCMStream.Close();
          if (wavePlayer != null) wavePlayer.Dispose();
          myInputPCMStream = null;
          myWorkspaceBuffer = null;
          myProcessedBytes = 0;
          myProvider = null;
          myPlugins.ClearPlugins();
        }
        catch (Exception) { }

        myThIsPlaying = false;
        try
        {
          if (PhaseChanged != null) PhaseChanged(eSoundBusPhase.PlayEnded);
        }
        catch (Exception ex) { Log.Error(ex); }
      }
    }
    #endregion

    #region Constructors


    /// <summary>Destructor</summary>
    ~SoundBus()
    {
      Dispose();
    }

    /// <summary>Create a sound bus.</summary>
    public SoundBus() { }
    #endregion

    #region Plugin Actions
    /// <summary>Add a bus plugin to the sound bus for messaging.</summary>
    public void Add(ISoundBusPlugin plugin)
    {
      if (plugin == null)
        throw new NullReferenceException("Can't add a null plugin");
      if (myState == ePlayerState.Playing)
        throw new SystemException("Can't add a plugin while the bus is playing");
      myPlugins.Add(plugin);
    }

    /// <summary>Query for the plugin by the specified name.</summary>
    /// <param name="name">The human readable name of this plugin</param>
    /// <returns>null if not found</returns>
    public ISoundBusPlugin FindPlugin(String name)
    {
      lock (myMutex)
      {
        for (int i = 0; i < myPlugins.Length; i++)
        {
          if (myPlugins[i].PluginName != null && myPlugins[i].PluginName == name)
            return myPlugins[i];
        }
      }
      return null;
    }
    #endregion

    #region Playback
    /// <summary>
    /// Begin playing of the given audio file.
    /// Supported: .wav, .mp3
    /// </summary>
    public void Play(FilePath inFile)
    {
      PCMStreamSource src = new PCMStreamSource(new FilePath(inFile));
      Stream str = src.CreateStream();
      Play(src.Format, str);
    }

    /// <summary>Begin the playing of the given audio stream.</summary>
    /// <param name="info">The PCM content of the given stream</param>
    /// <param name="pcmAudioStream">The stream to read PCM data from</param>
    public void Play(PCMInfo info, Stream pcmAudioStream)
    {
      try
      {
        Log.Status("Starting Play Thread ...");

        // Try to determine the stream source.
        if (pcmAudioStream is MicrophoneStream)


          myPlayForever = true;
        else
          myPlayForever = false;

        ExtraThread.Kill(ref myPlayThread);
        myInputPCMStream = pcmAudioStream;
        myInfo = info;
        myPlayThread = new Thread(PlayCorrectlyThread);
        myPlayThread.IsBackground = true;
        myPlayThread.Name = "Playback Thread";
        myPlayThread.Start();
      }
      catch (ThreadAbortException)
      {
        Log.Status("Play thread aborted ...");
        ExtraThread.Kill(ref myPlayThread);
      }
      catch (ThreadInterruptedException)
      {
        Log.Status("Play thread interrupted ...");
        ExtraThread.Kill(ref myPlayThread);
      }
      catch (Exception ex)
      {
        Log.Error(ex);
      }
    }

    /// <summary>Pause playback</summary>
    public void Pause()
    {
      myState = Core.Forms.ePlayerState.Paused;
      if (PhaseChanged != null) PhaseChanged(eSoundBusPhase.PlayPaused);
    }

    /// <summary>Stop playback</summary>
    public void Stop()
    {
      myState = ePlayerState.Stopped;
      while (myThIsPlaying) Thread.Yield();
    }

    /// <summary>Resume playback.</summary>
    public void Resume()
    {
      myState = Core.Forms.ePlayerState.Playing;
      if (PhaseChanged != null) PhaseChanged(eSoundBusPhase.PlayStarted);
    }
    #endregion

    #region Cleanup
    /// <summary>Clear all plugins and ready for another run.</summary>
    public void Clear()
    {
      lock (myMutex)
      {
        ExtraThread.Kill(ref myPlayThread);
        myWorkspaceBuffer = null;
        myProcessedBytes = 0;
        myProvider = null;
        myInputPCMStream = null;
      }
    }

    /// <summary>Dispose the Bus</summary>
    public void Dispose()
    {
      try
      {


        Stop();
      }
      catch (Exception) { }
    }
    #endregion

    #region Properties
    /// <summary>
    /// Request access to the plugins installed in the sound bus.
    /// Never modify this array.
    /// </summary>
    public ISoundBusPlugin[] Plugins
    {
      get { return myPlugins.ToArray(); }
    }

    /// <summary>Get the statistics being gathered on each read item.</summary>
    public DSPStatistics TimeStats { get { return myTimeStats; } }

    /// <summary>Get the stream encoding information.</summary>
    public PCMInfo StreamInfo { get { return myInfo; } }

    /// <summary>Request the current status of the sound bus.</summary>
    public ePlayerState Status { get { return myState; } }

    /// <summary>Get the sound bus plugin queue that processes data and sends to plugins.</summary>
    public SoundBusPluginQueue Queue { get { return myPlugins; } }
    #endregion

    #region Events
    /// <summary>Event notified of the current state.</summary>
    public event dSoundBusPhase PhaseChanged;

    /// <summary>Event that is notified of progress.</summary>
    public event dProgressInfo ProgressChanged;

    /// <summary>
    /// Event that is notified of tick steps along the way during processing of data.
    /// </summary>
    public event dSoundBusTick Tick;

    /// <summary>
    /// This event is for non-ISoundBusPlugins that wish to be notified
    /// whenever a sample is read, without all the other functions.
    /// </summary>
    public event dSampleNotice SampleRead;
    #endregion


  }
}
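The expected call pattern for the bus is sketched below. This is a hedged, hypothetical example: "song.wav" is a placeholder path, and any plugin attached via Add must implement the framework's ISoundBusPlugin interface.

// Hypothetical usage sketch: play a file and observe the bus lifecycle.
SoundBus bus = new SoundBus();
bus.PhaseChanged += delegate(eSoundBusPhase phase) { Console.WriteLine(phase); };
bus.SampleRead += delegate(Z2[] samples) { /* inspect one second of samples */ };
bus.Play(new FilePath("song.wav")); // spawns the playback thread
// ... later ...
bus.Stop();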


A.0.11 Media Utilities Class

using System;
using System.IO;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Core.IO;
using Core.Util;
using Core.Forms;
using Core.Mathematics.Algebra;
using NAudio.Wave;

namespace Toolkit.Media.DSP
{
  /// <summary>Various utilities for dealing with sound/video/images.</summary>
  public abstract class MediaUtils
  {
    #region Private Utility
    private static List<FilePath> ourCreatedTempFiles = new List<FilePath>();
    private static bool ourIsCleaned = false;
    #endregion

    #region Calculations
    /// <summary>Get the number of bytes in a sample chunk</summary>
    /// <param name="bps">Bits per sample</param>
    /// <param name="channels">Number of channels</param>
    /// <param name="blockAlign">Block alignment</param>
    /// <returns>The number of bytes in a sample</returns>
    public static int GetSampleSize(int bps, int channels, int blockAlign)
    {
      return new PCMInfo(bps, channels, 0, blockAlign).SampleSize;
    }
    #endregion

    #region I/O
    /// <summary>Given a buffer of raw PCM data read samples for a specific channel.</summary>
    /// <param name="outRec">Receives (by reference) the read sample data.</param>
    /// <param name="buf">The buffer to read from</param>
    /// <param name="channel">The channel to read from</param>
    /// <param name="numSamps">The total number of samples to read</param>
    /// <param name="offset">The starting offset to read from buf</param>
    /// <param name="info">The stream info</param>
    public static void CreateSamples(ref Z2[] outRec, byte[] buf, int channel, int numSamps, int offset, PCMInfo info)
    {
      if (outRec.Length < numSamps)
        throw new ArgumentException("Sample buffer is not large enough! ");

      for (int i = 0; i < numSamps; i++)
      {
        PCMSample s = CreateSample(buf, offset, info);
        switch (channel)
        {
          case 0: outRec[i].A = s.Amp1; break;
          case 1: outRec[i].A = s.Amp2; break;


          case 2: outRec[i].A = s.Amp3; break;
          case 3: outRec[i].A = s.Amp4; break;
          case 4: outRec[i].A = s.Amp5; break;
        }
        offset += info.SampleSize;
      }
    }

    /// <summary>Given a complex valued impulse in the time domain create a PCM sample.</summary>
    public static byte[] CreatePCM(Z s, PCMInfo info)
    {
      int b = info.BitsPerSample / 8; // Bytes per sample
      byte[] a = new byte[info.SampleSize * info.Channels];
      for (int i = 0; i < info.Channels; i++)
      {
        if (b == 1)
        {
          byte b1 = (byte)(int)s.Magnitude;
          a[i * info.SampleSize] = b1;
        }
        else if (b == 2)
        {
          byte[] bytes = BitConverter.GetBytes((int)s.Magnitude);
          a[i * info.SampleSize] = bytes[1];
          a[i * info.SampleSize + 1] = bytes[0];
        }
      }
      return a;
    }

    /// <summary>
    /// Create a PCM sample from a buffer that contains a PCM encoded sample starting at
    /// the given offset.
    /// </summary>
    /// <param name="buf">The buffer to read from</param>
    /// <param name="info">Info about the PCM encoding</param>
    /// <returns>The sample read</returns>
    public static PCMSample CreateSample(byte[] buf, int offset, PCMInfo info)
    {
      int b = info.BitsPerSample / 8; // Bytes per sample
      PCMSample pc = new PCMSample();
      for (int i = 0; i < info.Channels; i++)
      {
        int k = offset + b * i;
        int valread = 0;
        if (b == 2)
        {
          int val = buf[k + 1];
          val = val << 8;
          val = val | buf[k];
          valread = (short)val;
          if (valread > Int16.MaxValue) valread = Int16.MaxValue;
          else if (valread < -Int16.MaxValue) valread = -Int16.MaxValue;
        }
        else if (b == 1)
        {
          valread = (short)buf[k];
          if (valread > Int16.MaxValue) valread = Int16.MaxValue;
          else if (valread < -Int16.MaxValue) valread = -Int16.MaxValue;
        }

        // Depending on which channel we've read save the value off.
        switch (i)
        {
          case 0: pc.Amp1 = valread; break;
          case 1: pc.Amp2 = valread; break;
          case 2: pc.Amp3 = valread; break;


          case 3: pc.Amp4 = valread; break;
          case 4: pc.Amp5 = valread; break;
        }
      }
      return pc;
    }
    #endregion

    #region Generation Utilities
    /// <summary>
    /// Fabricate a PCM stream of data that is probably just noise but
    /// PCM formatted as data chunks.
    /// </summary>
    /// <param name="pi">The PCM encoding (bits per sample, sample rate, block alignment, channels)</param>
    /// <param name="numSamples">The number of samples to generate</param>
    /// <returns>The raw data array of data chunks</returns>
    public static byte[] CreateRandomPCM(PCMInfo pi, int numSamples)
    {
      int bps = pi.BitsPerSample;
      float sampleRate = pi.SampleRate;
      int blockAlign = pi.BlockAlign;
      int channels = pi.Channels;

      if ((bps / 8) * channels > blockAlign)
        throw new ArgumentException("Block alignment must be greater than or equal to bits per sample and channels");

      // Add padding to match block alignment.
      int sampleSize = channels * (bps / 8) + (blockAlign - channels * (bps / 8));
      byte[] buf = new byte[sampleSize];
      MemoryStream ms = new MemoryStream();
      for (int i = 0; i < numSamples; i++)
      {
        Random r = new Random();
        for (int j = 0; j < buf.Length; j++)
        {
          if (r.Next(100) > 50)
            buf[j] = (byte)(r.Next(char.MaxValue) / 2);
          else
            buf[j] = (byte)(-r.Next(char.MaxValue) / 2);
        }
        ms.Write(buf, 0, buf.Length);
      }
      return ms.ToArray();
    }

    /// <summary>
    /// Create a sample stream assuming the given data is
    /// (a) Single Channel
    /// (b) 16 bits per sample
    /// (c) 40.1 Khz Sample Rate
    /// </summary>
    public static PCMSamplerMemoryStream CreateSingleChannel16BPS(int[] channel1)
    {
      PCMInfo pi = new PCMInfo(16, 1, 40100, 2);
      PCMSamplerMemoryStream ss = new PCMSamplerMemoryStream(pi);
      for (int i = 0; i < channel1.Length; i++)
        ss.Write(new PCMSample(channel1[i]));
      return ss;
    }

    /// <summary>
    /// Create a sample stream assuming the given data is
    /// (a) Dual Channel


    /// (b) 16 bits per sample
    /// (c) 40.1 Khz Sample Rate
    /// </summary>
    public static PCMSamplerMemoryStream CreateDualChannel16BPS(int[] channel1, int[] channel2)
    {
      PCMInfo pi = new PCMInfo(16, 2, 40100, 4);
      PCMSamplerMemoryStream ss = new PCMSamplerMemoryStream(pi);
      for (int i = 0; i < channel1.Length; i++)
        ss.Write(new PCMSample(channel1[i], channel2[i]));
      return ss;
    }

    /// <summary>
    /// Create a sample stream assuming the given data is
    /// (a) Single Channel
    /// (b) 8 bits per sample
    /// (c) 40.1 Khz Sample Rate
    /// </summary>
    public static PCMSamplerMemoryStream CreateSingleChannel8BPS(int[] channel1)
    {
      PCMInfo pi = new PCMInfo(8, 1, 40100, 1);
      PCMSamplerMemoryStream ss = new PCMSamplerMemoryStream(pi);
      for (int i = 0; i < channel1.Length; i++)
        ss.Write(new PCMSample(channel1[i]));
      return ss;
    }

    /// <summary>
    /// Create a sample stream assuming the given data is
    /// (a) Dual Channel
    /// (b) 8 bits per sample
    /// (c) 40.1 Khz Sample Rate
    /// </summary>
    public static PCMSamplerMemoryStream CreateDualChannel8BPS(int[] channel1, int[] channel2)
    {
      PCMInfo pi = new PCMInfo(8, 2, 40100, 2);
      PCMSamplerMemoryStream ss = new PCMSamplerMemoryStream(pi);
      for (int i = 0; i < channel1.Length; i++)
        ss.Write(new PCMSample(channel1[i], channel2[i]));
      return ss;
    }
    #endregion

    #region Stream Creation
    /// <summary>
    /// Given a stream source data object this creates the stream and
    /// returns the stream and populates the format information in the
    /// given source object.
    /// </summary>
    /// <returns>null if unable to find the stream location of a PCM stream</returns>
    /// <param name="source">The source information for the stream desired</param>
    public static Stream CreateStream(PCMStreamSource source)
    {
      try
      {
        // Input from a selected file.
        switch (source.SourceType)
        {
          #region File Source
          case ePCMSourceType.WaveFile:
            if (!source.FilePath.Exists)
            {
              StatusControl.Instance.StatusBlue = "You must select a file to read from!";
              return null;
            }
            StatusControl.Instance.StatusBlue = "Loading file ...";
            WaveFileReader input = null;


            if (source.FilePath.Extension.ToLower() == "mp3")
            {
              Mp3FileReader mpr = new Mp3FileReader(source.FilePath.ToString());
              StatusControl.Instance.StatusBlue = "Decoding file ...";
              FilePath myCreatedFile = FilePath.CreateTempFile();
              lock (ourCreatedTempFiles)
              {
                ourCreatedTempFiles.Add(myCreatedFile);
              }
              WaveFileWriter.CreateWaveFile(myCreatedFile.ToString(), mpr);
              mpr.Close();
              input = new WaveFileReader(myCreatedFile.ToString());
            }
            else
              input = new WaveFileReader(source.FilePath.ToString());

            source.Format = new PCMInfo(input.WaveFormat);
            return input;
          #endregion

          #region Microphone
          case ePCMSourceType.Microphone:
            MicrophoneStream ms = new MicrophoneStream(source.Microphone);
            ms.Start();
            source.Format = ms.OutputInfo;
            return ms;
          #endregion
        }
      }
      catch (Exception ex)
      {
        StatusControl.Instance.StatusRed = ex.Message;
        Log.Error(ex);
      }
      return null;
    }
    #endregion

    /// <summary>
    /// Should be called on app shutdown to clean up media artifacts and system resources.
    /// </summary>
    public static void Shutdown()
    {
      try
      {
        if (!ourIsCleaned)
        {
          ourIsCleaned = true;
          lock (ourCreatedTempFiles)
          {
            foreach (FilePath fp in ourCreatedTempFiles) fp.Delete();
          }
        }
      }
      catch (Exception) { }
    }
  }
}
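A hedged sketch of the generation and decoding utilities together; the encoding values here are hypothetical and only serve to make the round trip concrete.

// Hypothetical usage sketch: fabricate one second of noise and decode a sample.
PCMInfo pi = new PCMInfo(16, 2, 44100f, 4);
byte[] pcm = MediaUtils.CreateRandomPCM(pi, (int)pi.SampleRate);
PCMSample first = MediaUtils.CreateSample(pcm, 0, pi);
Console.WriteLine(first.Amp1 + " / " + first.Amp2); // channel 1 and 2 amplitudes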


A.0.12 PCMInfo Class

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NAudio.Wave;

namespace Toolkit.Media.DSP
{
  /// <summary>Structure that holds Pulse Code Modulated data</summary>
  public struct PCMInfo
  {
    #region Constant
    public static readonly PCMInfo Empty = new PCMInfo(-1, -1, -1, -1);
    #endregion

    #region Members
    /// <summary>Bits per sample</summary>
    public int BitsPerSample;

    /// <summary>Number of channels</summary>
    public int Channels;

    /// <summary>The sample rate in Hz</summary>
    public float SampleRate;

    /// <summary>
    /// The maximum size of a sample chunk including each channel.
    /// Block alignment states what boundary the bytes should end on when
    /// encompassing each bits per sample and each channel being of the bits
    /// per sample.
    /// <example>
    /// Block Alignment (2 bytes)
    ///        |
    ///        v
    /// +--------+--------+--------+--------+
    /// | Byte 0 | Byte 1 | Byte 2 | Byte 3 |
    /// +--------+--------+--------+--------+
    /// |     Time 0      |     Time 1      |
    /// +--------+--------+--------+--------+
    /// |  Ch 1  |  Ch 2  |  Ch 1  |  Ch 2  |
    /// +--------+--------+--------+--------+
    /// Where Ch is the channel.
    /// </example>
    /// </summary>
    public int BlockAlign;
    #endregion

    #region Constructors
    /// <summary>Make a copy</summary>
    public PCMInfo(PCMInfo other)
    {
      BitsPerSample = other.BitsPerSample;
      Channels = other.Channels;


      SampleRate = other.SampleRate;
      BlockAlign = other.BlockAlign;
    }

    /// <summary>Create a PCMInfo from a wave format</summary>
    public PCMInfo(WaveFormat fmt)
    {
      BitsPerSample = fmt.BitsPerSample;
      Channels = fmt.Channels;
      SampleRate = fmt.SampleRate;
      BlockAlign = fmt.BlockAlign;
    }

    /// <summary>Create a PCMInfo from the values</summary>
    /// <param name="bps">Bits per sample</param>
    /// <param name="channels">Channels</param>
    /// <param name="sampleRate">Sample rate (Hz)</param>
    /// <param name="blockAlign">Block alignment</param>
    public PCMInfo(int bps, int channels, float sampleRate, int blockAlign)
    {
      BitsPerSample = bps;
      Channels = channels;
      SampleRate = sampleRate;
      BlockAlign = blockAlign;
    }
    #endregion

    #region Utility Methods
    /// <summary>
    /// Calculate the sample size of a chunk (how many bytes per a single data sample).
    /// </summary>
    public int SampleSize
    {
      get
      {
        return Channels * (BitsPerSample / 8) + (BlockAlign - Channels * (BitsPerSample / 8));
      }
    }

    /// <summary>Convert this to a wave format structure.</summary>
    public WaveFormat ToWaveFormat()
    {
      return new PCMWaveFormat(this);
    }
    #endregion

    #region Internal Classes
    /// <summary>
    /// Utility class that allows us to convert from a PCMInfo class to a WaveFormat class.
    /// </summary>
    public class PCMWaveFormat : WaveFormat
    {
      public PCMWaveFormat(PCMInfo info)
        : base((int)info.SampleRate, (int)info.BitsPerSample, info.Channels)
      {
        base.blockAlign = (short)info.BlockAlign;
      }
    }
    #endregion

    #region Properties
    /// <summary>Ask if this is an empty instance of the PCM info structure</summary>


    public bool IsEmpty
    {
      get
      {
        return BitsPerSample == -1 &&
               Channels == -1 &&
               SampleRate == -1 &&
               BlockAlign == -1;
      }
    }
    #endregion

    #region Object Overloads
    public override bool Equals(object obj)
    {
      PCMInfo other = (PCMInfo)obj;
      return BitsPerSample == other.BitsPerSample &&
             BlockAlign == other.BlockAlign &&
             Channels == other.Channels &&
             SampleRate == other.SampleRate;
    }

    public override int GetHashCode()
    {
      return ToString().GetHashCode();
    }

    public override string ToString()
    {
      return (SampleRate / 1000f) + "kHz Ch: " + Channels + " BPS: " + BitsPerSample;
    }
    #endregion

    #region Operator Overloads
    public static bool operator ==(PCMInfo a, PCMInfo b)
    {
      return a.Equals(b);
    }

    public static bool operator !=(PCMInfo a, PCMInfo b)
    {
      return !a.Equals(b);
    }
    #endregion
  }
}
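A short worked example of the SampleSize calculation above, using hypothetical CD-style values:

// Hypothetical usage sketch: CD-style encoding and its derived sample size.
PCMInfo cd = new PCMInfo(16, 2, 44100f, 4);
Console.WriteLine(cd);            // 44.1kHz Ch: 2 BPS: 16
Console.WriteLine(cd.SampleSize); // 2*(16/8) + (4 - 2*(16/8)) = 4 bytes
Console.WriteLine(cd == PCMInfo.Empty); // false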


A.0.13 Melody Analysis

using System;
using System.Drawing;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Core.Mathematics;
using Core.Mathematics.Stats;
using Core.Mathematics.Linear;
using Core.Mathematics.Algebra;
using Toolkit.Media.UI;

namespace Toolkit.Media.DSP
{
  /// <summary>
  /// Provides a melody analysis system which computes a visual space from musical input.
  /// </summary>
  public class MelodyAnalysis
  {
    #region Private Members
    private PCMInfo myInfo;
    private Chord myLastCh;
    private ChordDetector myChordDetector = new ChordDetector();
    private RunningStatistics myStats;
    private float myMaxNoise = 0;

    #region Running Values
    // Min/Max values.
    private float myMaxa = -float.MaxValue;
    private float myMaxb = -float.MaxValue;
    private float myMaxc = -float.MaxValue;
    private float myMina = float.MaxValue;
    private float myMinb = float.MaxValue;
    private float myMinc = float.MaxValue;
    private double mySpectacle = 0;
    private double myN = 0; // Total number of chords seen.

    // Centricity values.
    private double myThetaH = 0;
    private double myThetaF = 0;
    #endregion
    #endregion

    #region Private Methods
    /// <summary>Update the minimum and maximum range values.</summary>
    private void UpdateMinMax(float v1, float v2, float v3, float f1, float f2, float f3)
    {
      myMina = Math.Min(myMina, v1);
      myMinb = Math.Min(myMinb, v2);
      myMinc = Math.Min(myMinc, v3);
      myMaxa = Math.Max(myMaxa, v1);
      myMaxb = Math.Max(myMaxb, v2);
      myMaxc = Math.Max(myMaxc, v3);

      myMina = Math.Min(myMina, f1);
      myMinb = Math.Min(myMinb, f2);
      myMinc = Math.Min(myMinc, f3);
      myMaxa = Math.Max(myMaxa, f1);
      myMaxb = Math.Max(myMaxb, f2);
      myMaxc = Math.Max(myMaxc, f3);
    }
    #endregion

PAGE 128

#regionConstructors /// < summary > ///Createthemelodyanalysiswiththetimeseriesstatisticsandthestreaminformation. /// < /summary > public MelodyAnalysisRunningStatisticsstats,PCMInfoinfo f myInfo=info;myStats=stats; myMaxNoise= float Math.Pow2.0f, float info.BitsPerSample/2f; g privatefloat myRunCDiff=0; /// < summary > ///Avaluefrom[0,1]thatdescribes"howmuch"the ///systemisdramaticallychangingchordsovertime. /// < /summary > /// < paramname="ch" >< /param > /// < paramname="ch2" >< /param > /// < returns >< /returns > privatefloat CDiffChordch,Chordch2 f Comparison < MusicalDetect > s=delegateMusicalDetecta,MusicalDetectb f if a.Note.Fundamental < b.Note.Fundamental return )]TJ/F19 5.9776 Tf 6.084 0 Td [(1; elseif a.Note.Fundamental > b.Note.Fundamental return 1; return 0; g ; ch.Sorts; ch2.Sorts; Chordshorter; Chordlonger; if ch.Count > ch2.Count f longer=ch; shorter=ch2; g else f longer=ch2; shorter=ch; g float sum=0; for int i=0;i < shorter.Count;i++sum+= float Math.AbsMusic.Z12ch[i].Note.Note+1f )]TJ/F19 5.9776 Tf 10.435 0 Td [( Music.Z12ch2[i].Note.Note+1f; for int i=shorter.Count;i < longer.Count;i++sum+=Music.Z12longer[i].Note.Note+1f; myRunCDiff= float myRunCDiff myN+sum/144f/myN+1f; return myRunCDiff; g #endregion /// < summary > ///Providea1secondsetoftimesamples. /// < /summary > /// < paramname="outBG" > Receivesthebackgroundthatshouldbeused < /param > /// < paramname="X" > Thesamplestoprocess. < /param > public VisualSpace[]AnalyzeZ2[]X f MusicalDetect[]md=Music.GuessNotesX,0, int myInfo.SampleRate; for int i=0;i < md.Length;i++md[i].TimeOn= int myN; myN++; myChordDetector.NewNotesmd; Chordch=myChordDetector.Latest; myChordDetector.Clear; //Calculatebasevalues. float sigma= float myStats.StdDeviation/2.0f Math.Pow2f,myInfo.BitsPerSample; float thetaF=Music.FundamentalCentricitych; float thetaH=Music.HarmonicCentricitych; float chi=Music.Consonancech; float mu=Music.Magnitudech; float delta=0; float ba=0; int bg=0; int []color= newint [ch.Count]; for int i=0;i < ch.Count;i++ f 117

PAGE 129

color[i]=Music.TiggerStripesch[i],sigma.ToArgb; if Math.Absch[i].Amp > ba f ba=Math.Absch[i].Amp; bg=color[i]; g g if sigma > 1sigma=1f; if sigma < 0sigma=0f; delta=0; float cdiff=0; //Computespectacle. if myLastCh!=null f delta=Music.ChangeOfChordmyLastCh,ch; cdiff=CDiffch,myLastCh; mySpectacle=delta+CDiffch,myLastCh; g float va=mu; float vb=mu+delta; float vc=thetaH; //mu+cdiff 1 )]TJ/F58 5.9776 Tf 7.02 0 Td [(chi mu; float fa=0; float fb=0; float fc=2f )]TJ/F19 5.9776 Tf 11.001 0 Td [(cdiff )]TJ/F19 5.9776 Tf 10.435 0 Td [( float mySpectacle; int elements=30+ int 1 )]TJ/F19 5.9776 Tf 7.087 0 Td [(chi 50+ int mySpectacle 50; UpdateMinMaxva,vb,vc,fa,fb,fc; float aa=myMaxa )]TJ/F19 5.9776 Tf 9.705 0 Td [(myMina; float bb=myMaxb )]TJ/F19 5.9776 Tf 9.65 0 Td [(myMinb; float cc=myMaxc )]TJ/F19 5.9776 Tf 9.76 0 Td [(myMinc; if aa==0aa=1; if bb==0bb=1; if cc==0cc=1; va=va )]TJ/F19 5.9776 Tf 5.092 0 Td [(myMina/aa; vb=vb )]TJ/F19 5.9776 Tf 5.037 0 Td [(myMinb/bb; vc=vc )]TJ/F19 5.9776 Tf 5.148 0 Td [(myMinc/cc; myLastCh=ch; myThetaF=myThetaF myN )]TJ/F19 5.9776 Tf 5.972 0 Td [(1+thetaF/myN; myThetaH=myThetaH myN )]TJ/F19 5.9776 Tf 5.972 0 Td [(1+thetaH/myN; VisualSpacevs= new VisualSpaceelements, newfloat [] f va,vb,vc g newfloat [] f fa,fb,fc g color, newint [] f bg g float mySpectacle; returnnew VisualSpace[] f vs g ; g g g 118
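A sketch of how one analysis frame might be driven, using the Synthesizer of Appendix A.0.15 and assuming RunningStatistics exposes a parameterless constructor (an assumption; the real wiring lives in the application layer):

    PCMInfo info = new PCMInfo(16, 1, 4410f, 0);                         // mono test stream
    MelodyAnalysis analysis = new MelodyAnalysis(new RunningStatistics(), info);
    Z2[] oneSecond = Synthesizer.SynthesizeSignal(4410, 440f, 1, 0.9f);  // 1 s of A440
    VisualSpace[] frames = analysis.Analyze(oneSecond);                  // one VisualSpace per call
    Console.WriteLine(frames[0].Spectacle);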
A.0.14 Fourier Transform Class

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using Core.Mathematics.Linear;
using Core.Mathematics.Stats;
using Core.Mathematics.Algebra;
using Core.Mathematics.NumberTheory;

namespace Core.Mathematics.Analysis
{
    /// <summary>
    /// Provides a class that takes in a set of sampled time domain
    /// PCM and converts to frequency domain.
    /// </summary>
    public class FourierTransform
    {
        /// <summary>Perform the inner summand from equation (4).</summary>
        /// <param name="b">The current sub-index b, of a + b*N2</param>
        /// <param name="offset">The offset into X to work relative to.</param>
        /// <param name="k">The base frequency we are after (Hz)</param>
        /// <param name="N1">The first factor (N1*N2 = N)</param>
        /// <param name="N2">The second factor (N1*N2 = N)</param>
        /// <param name="N">The total sample size</param>
        /// <returns>The complex inner summand</returns>
        private static Z f_b(Z2[] X, int offset, float k, float b, float N1, float N2, float N, eWindowType useWindow)
        {
            Z t = 0;
            float w = 0;
            for (int a = 0; a < N2; a++)
            {
                int idx = (int)(a + b * N2);
                // Hanning window if specified:
                // 1/2 * (1 - Cos(2*Pi*x / (N - 1)))
                switch (useWindow)
                {
                    case eWindowType.Hanning:
                        w = (float)(0.5f * (1.0f - Math.Cos(2.0f * Math.PI * Math.Floor(a + b * N2) / N)));
                        break;
                    default: w = 1; break;
                }
                t += X[offset + idx].A * Z.W(N, a * k) * w;
            }
            return t;
        }

        /// <summary>
        /// Given an array of real amplitude values perform the DFT assuming a sample rate
        /// of X.Length and a single sample.
        /// </summary>
        public static void DFT(Z2[] X, float[] inRFs, eWindowType useWindow)
        {
            DFT(X, X.Length, 0, inRFs, useWindow);
        }

        /// <summary>
        /// Given a complex array of raw input samples (assumed to have sampleRate values) this
        /// performs a DFT on the requested Rf values.
        /// </summary>
        /// <param name="X">The sample space (X[n].A is time domain, X[n].B is Rf)</param>
        /// <param name="rfs">The Rfs to extract.</param>
        public static void DFT(Z2[] X, int sampleRate, int offset, float[] rfs, eWindowType useWindow)
        {
            if (X.Length % sampleRate != 0)
                throw new ArgumentException("The given data space is not congruent with the sample size " + sampleRate);
            float N = sampleRate;
            float oneOverN = 1.0f / N;
            Z c = 0;
            float w;
            for (int k = 0; k < rfs.Length; k++)
            {
                c = 0;
                for (float n = 0; n < N; n++)
                {
                    switch (useWindow)
                    {
                        case eWindowType.Hanning:
                            w = (float)(.5f * (1.0f - Math.Cos(2.0f * Math.PI * n / N)));
                            break;
                        default: w = 1; break;
                    }
                    c += X[offset + (int)n].A * w * Z.W(N, n * rfs[k]);
                }
                X[offset + (int)rfs[k]].B = c * oneOverN;
            }
        }

        /// <summary>
        /// Perform an FFT on only the given targeted RF values assuming a single sample in
        /// X whose sample rate is the size of the given array.
        /// </summary>
        /// <param name="inRFs">The frequencies to attack</param>
        /// <param name="X">The structure consisting of 1 to many samples.</param>
        /// <param name="useWindow">If true then a Hanning window is applied to the transform.</param>
        public static void RfFFT(Z2[] X, float[] inRFs, eWindowType useWindow)
        {
            RfFFT(X, X.Length, 0, inRFs, useWindow);
        }

        /// <summary>
        /// Perform an FFT on the given array X using the Z2.A (real) values as inputs.
        /// </summary>
        /// <param name="inRFs">The frequencies to extract</param>
        /// <param name="offset">The offset in the X array to work relative to</param>
        /// <param name="samplerate">The sample rate of the data</param>
        /// <param name="X">The source amplitude array.</param>
        public static void RfFFT(Z2[] X, int samplerate, int offset, float[] inRFs, eWindowType useWindow)
        {
            int f1, f2;
            float N1, N2;
            int N = samplerate;
            float oneOverN = 1.0f / N;
            Z c = 0;

            // Factor and turn the factors to floats.
            ExtraNumberTheory.Factor(samplerate, out f1, out f2);
            N1 = f1; N2 = f2;
            Object sync = new Object();
            for (int i = 0; i < inRFs.Length; i++)
            {
                float k = inRFs[i];
                c = 0;

                // Non-parallelized (for testing):
                // for (int b = 0; b < N1; b++)
                //     c += f_b(X, offset, k, b, N1, N2, N, useWindow) * Z.W(N, b * N2 * k);

                // Parallelize each inner summand.
                // (Parallel.For's upper bound is exclusive, so this covers b = 0 .. N1-1.)
                Parallel.For<Z>(0, (int)N1, () => 0,
                    (b, loopState, local2) =>
                    {
                        local2 += f_b(X, offset, k, b, N1, N2, N, useWindow) * Z.W(N, b * N2 * k);
                        return local2;
                    },
                    (outputValue) => { lock (sync) c += outputValue; });
                X[offset + (int)k].B = c * oneOverN;
            }
        }
    }

    /// <summary>Specifies what kind of windowing technique is to be used.</summary>
    public enum eWindowType
    {
        Hanning,
        None
    }
}
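As a quick check of the targeted transform, the sketch below embeds a single tone with the Synthesizer of Appendix A.0.15 and extracts only that bin; with the 1/N normalization used above, a unit-amplitude cosine should yield a magnitude near 0.5. This driver is illustrative, not part of the thesis build:

    Z2[] X = Synthesizer.SynthesizeSignal(4410, 440f, 1, 1.0f);   // 1 s at a 4410 Hz sample rate
    FourierTransform.RfFFT(X, new float[] { 440f }, eWindowType.None);
    Console.WriteLine(X[440].B.Magnitude);                        // ~0.5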
A.0.15 Synthesizer Class

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Core.Mathematics.Algebra;

namespace Core.Mathematics.Analysis
{
    /// <summary>
    /// Provides utilities for generating different discrete waveforms as a series of
    /// amplitude samples.
    /// </summary>
    public class Synthesizer : List<Z2>
    {
        #region Private
        private int mySampleRate;
        #endregion

        #region Constructors
        /// <summary>Create the synthesizer.</summary>
        public Synthesizer(int sampleRate) { mySampleRate = sampleRate; }
        #endregion

        #region Utility
        /// <summary>Add the given frequency at the specified amplitude for the given duration in seconds.</summary>
        public void Add(float rf, float amp, int dur)
        {
            Z2[] sig = SynthesizeSignal(mySampleRate, new float[] { rf }, dur, eDurationType.Seconds, new float[] { amp });
            AddRange(sig);
        }

        /// <summary>Add the given frequency to the signal starting at the given time.</summary>
        /// <param name="rf">The frequency to add</param>
        /// <param name="amp">The power level</param>
        /// <param name="dur">The duration of the frequency in seconds</param>
        /// <param name="start">The starting time offset in seconds</param>
        public void Add(float rf, float amp, int dur, int start)
        {
            Add(rf, amp, dur, eDurationType.Seconds, start, eDurationType.Seconds);
        }

        /// <summary>Add the given frequency to the signal starting at the given time.</summary>
        /// <param name="rf">The frequency to add</param>
        /// <param name="amp">The power level</param>
        /// <param name="dur">The duration of the frequency, units specified by inDurUnits</param>
        /// <param name="start">The starting time offset to begin adding</param>
        /// <param name="inDurUnits">How to interpret the duration value.</param>
        /// <param name="inStartUnits">What units the start value is to be considered in</param>
        public void Add(float rf, float amp, int dur, eDurationType inDurUnits, int start, eDurationType inStartUnits)
        {
            Z2[] X = SynthesizeSignal(mySampleRate, new float[] { rf }, dur, inDurUnits, new float[] { amp });
            if (inStartUnits == eDurationType.Seconds) start = start * mySampleRate;

            // Add silence until we reach start (if not enough space allocated yet).
            while (Count < start + 1) Add(new Z2(Z.Zero, Z.Zero));
            for (int i = 0; i < X.Length; i++)
            {
                int offset = i + start;
                if (offset == Count)
                {
                    Add(X[i]);
                }
                else
                {
                    Z2 zt = this[offset];
                    zt.A = this[offset].A + X[i].A;
                    zt.B = this[offset].B + X[i].B;
                    this[offset] = zt;
                }
            }
        }

        /// <summary>
        /// Add noise to the signal from the given starting point (sample item)
        /// for the given duration.
        /// </summary>
        /// <param name="start">The starting sample to begin adding noise</param>
        /// <param name="dur">The duration of time to add noise in seconds</param>
        /// <param name="noiseFloor">The maximum amplitude in the noise</param>
        /// <param name="inDurUnits">The units to interpret the duration value in</param>
        /// <param name="inStartUnits">The units to interpret the start offset in</param>
        public void AddNoise(int start, eDurationType inStartUnits, int dur, eDurationType inDurUnits, float noiseFloor)
        {
            Random r = new Random((int)(DateTime.UtcNow.Ticks << 32));
            if (inStartUnits == eDurationType.Seconds) start = start * mySampleRate;
            if (inDurUnits == eDurationType.Seconds) dur = mySampleRate * dur;

            // Add silence until we reach start (if not enough space allocated yet).
            while (Count < start + 1) Add(new Z2(Z.Zero, Z.Zero));
            for (int i = 0; i < dur; i++)
            {
                int offset = i + start;
                if (offset == Count)
                {
                    Z z = (float)(r.NextDouble() * noiseFloor);
                    Add(new Z2(z, Z.Zero));
                }
                else
                {
                    Z2 zt = this[offset];
                    zt.A = (float)(this[offset].A.A + r.NextDouble() * noiseFloor);
                    this[offset] = zt;
                }
            }
        }
        #endregion

        #region Properties
        /// <summary>Request the current total duration (in seconds) of data.</summary>
        public int Duration
        {
            get { return (int)Math.Ceiling((double)Count / mySampleRate); }
        }

        public int SampleRate { get { return mySampleRate; } }
        #endregion

        #region Utility Methods
        /// <summary>
        /// Generate a time series of N samples per second that contains the given array of
        /// frequencies embedded across the entire sample at the given volume.
        /// </summary>
        /// <param name="amp">The amplitudes to apply to each frequency respectively.</param>
        /// <param name="rfs">The frequency list to be embedded in the entire sample</param>
        /// <param name="sampleRate">The number of samples to provide</param>
        /// <param name="duration">The duration the signal should last</param>
        /// <param name="inTyp">How to interpret the duration value's units</param>
        public static Z2[] SynthesizeSignal(int sampleRate, float[] rfs, int duration, eDurationType inTyp, float[] amp)
        {
            int dur = inTyp == eDurationType.Seconds ? sampleRate * duration : duration;
            Z2[] td = new Z2[dur];
            double twoPI = Math.PI * 2.0d;
            float N = (float)sampleRate;
            float s = 0;
            for (int n = 0; n < dur; n++)
            {
                s = 0;
                // Add each of the frequencies in.
                for (int j = 0; j < rfs.Length; j++)
                    s += (float)(amp[j] * Math.Cos(twoPI * rfs[j] * (float)n / N));
                td[n].A = s;
            }
            return td;
        }

        /// <summary>
        /// Generate a time series of N samples per second that contains the given frequency
        /// embedded across the entire sample time at the given volume.
        /// </summary>
        /// <param name="amp">The amplitude to apply to the frequency</param>
        /// <param name="rf">The frequency to be embedded in the entire sample</param>
        /// <param name="sampleRate">The number of samples to provide</param>
        /// <param name="duration">The duration of the signal in seconds.</param>
        public static Z2[] SynthesizeSignal(int sampleRate, float rf, int duration, float amp)
        {
            return SynthesizeSignal(sampleRate, new float[] { rf }, duration, eDurationType.Seconds, new float[] { amp });
        }

        /// <summary>
        /// Generate a time series of N samples per second that contains the given frequencies
        /// embedded across the entire sample time at the given volume.
        /// </summary>
        /// <param name="amp">The amplitude to apply to each frequency</param>
        /// <param name="rfs">The frequencies to be embedded in the entire sample</param>
        /// <param name="sampleRate">The number of samples to provide</param>
        /// <param name="duration">The duration of the signal in seconds.</param>
        public static Z2[] SynthesizeSignal(int sampleRate, float[] rfs, int duration, float amp)
        {
            float[] amps = new float[rfs.Length];
            for (int i = 0; i < rfs.Length; i++) amps[i] = amp;
            return SynthesizeSignal(sampleRate, rfs, duration, eDurationType.Seconds, amps);
        }
        #endregion
    }

    /// <summary>Utility enum used by the synthesize methods.</summary>
    public enum eDurationType
    {
        /// <summary>The duration should last the given number of seconds.</summary>
        Seconds,
        /// <summary>The duration should last the given number of sample values.</summary>
        Samples
    }
}
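A short hypothetical composition showing the overlap and noise facilities; all durations below are in seconds:

    Synthesizer synth = new Synthesizer(8000);
    synth.Add(261.63f, 0.8f, 2);       // ~middle C for 2 s
    synth.Add(329.63f, 0.5f, 2, 1);    // E4 mixed in starting at t = 1 s
    synth.AddNoise(0, eDurationType.Seconds, 3, eDurationType.Seconds, 0.05f);
    Console.WriteLine(synth.Duration + " s, " + synth.Count + " samples");   // 3 s, 24000 samples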
A.0.16 Complex Number Class

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Core.Mathematics.Algebra
{
    /// <summary>
    /// Provides the container for a complex number with a real
    /// and imaginary portion.
    /// </summary>
    public struct Z : IField2
    {
        #region Constants
        private const float twoPI = (float)(Math.PI * 2d);
        #endregion

        /// <summary>The "Empty" value for this structure.</summary>
        public static readonly Z Empty = new Z(float.MinValue, float.MinValue);

        /// <summary>The zero element.</summary>
        public static readonly Z Zero = new Z(0, 0);

        #region Constructors
        /// <summary>Create a complex number.</summary>
        public Z(float a, float b) { A = a; B = b; }

        /// <summary>
        /// Create a unit length complex number from the exponential form, being
        /// Z = e^{i*argz} where |z| = 1.
        /// </summary>
        /// <param name="argz">The argument/angle, which can be from 0 to N</param>
        public Z(float argz)
        {
            A = (float)Math.Cos(argz);
            B = (float)Math.Sin(argz);
        }
        #endregion

        #region Fields
        /// <summary>The real portion of the complex number.</summary>
        public float A;

        /// <summary>The imaginary portion of the complex number.</summary>
        public float B;
        #endregion

        #region IField2 Interface
        public float X1 { get { return A; } set { A = value; } }
        public float X2 { get { return B; } set { B = value; } }
        #endregion

        #region Properties
        /// <summary>Ask if this is the empty structure.</summary>
        public bool IsEmpty { get { return A == Empty.A && B == Empty.B; } }

        /// <summary>Get the magnitude of this complex number.</summary>
        public float Magnitude { get { return (float)Math.Sqrt(A * A + B * B); } }

        /// <summary>
        /// Get the argument of this complex number from 0 to 2PI.
        /// This is NOT the principal argument.
        /// </summary>
        public float Arg
        {
            get
            {
                // 90 degrees, where tangent is undefined.
                if (A == 0 && B > 0) return (float)(Math.PI / 2.0f);
                // 270 degrees, where tangent is undefined.
                if (A == 0 && B < 0) return (float)(3.0f * Math.PI / 2.0f);
                return ((float)Math.Atan2(B, A) + twoPI) % twoPI;
            }
        }

        /// <summary>Get the complex conjugate.</summary>
        public Z Conjugate { get { return new Z(A, -B); } }
        #endregion

        #region Utilities
        /// <summary>
        /// Assuming this complex number is in exponential form,
        /// raise it to the given power.
        /// </summary>
        /// <param name="n">The power to raise the complex number to</param>
        /// <returns>The new number</returns>
        public Z e(float n)
        {
            return e((float)Math.Pow(Magnitude, n), Arg * n);
        }

        /// <summary>Create an exponential form complex number with the given magnitude and argument.</summary>
        /// <param name="arg">The argument</param>
        /// <param name="mag">The magnitude</param>
        public static Z e(float mag, float arg)
        {
            return new Z(arg) * mag;
        }

        /// <summary>Create a W_n, which is e^{(-i*2PI/N)*arg}.</summary>
        public static Z W(float N, float arg)
        {
            return new Z((float)(-Math.PI * 2.0d / N * arg));
        }
        #endregion

        #region Operator Overloads
        /// <summary>Divide a complex number by a real.</summary>
        public static Z operator /(Z a, double d)
        {
            return new Z((float)(a.A / d), (float)(a.B / d));
        }

        /// <summary>Divide two numbers.</summary>
        public static Z operator /(Z a, Z b)
        {
            // Do a/b.
            Z c = b.Conjugate;
            double m = (b * b.Conjugate).A;
            return (a * c) / m;
        }

        /// <summary>Assignment of a real value.</summary>
        public static implicit operator Z(double x)
        {
            return new Z((float)x, (float)0);
        }

        /// <summary>Multiply by a constant.</summary>
        public static Z operator *(Z c1, double c)
        {
            return new Z((float)(c1.A * c), (float)(c1.B * c));
        }

        /// <summary>Multiply by a constant.</summary>
        public static Z operator *(double c, Z c1)
        {
            return new Z((float)(c1.A * c), (float)(c1.B * c));
        }

        /// <summary>Difference between two complex numbers (c1 - c2).</summary>
        public static Z operator -(Z c1, Z c2)
        {
            return new Z(c1.A - c2.A, c1.B - c2.B);
        }

        /// <summary>The multiplication operation.</summary>
        public static Z operator *(Z c1, Z c2)
        {
            float a = c1.A;
            float b = c1.B;
            float c = c2.A;
            float d = c2.B;
            return new Z(a * c - b * d, a * d + b * c);
        }

        /// <summary>The addition operator.</summary>
        public static Z operator +(Z c1, Z c2)
        {
            return new Z(c1.A + c2.A, c1.B + c2.B);
        }

        /// <summary>The equality.</summary>
        public static bool operator ==(Z c1, Z c2)
        {
            return c1.A == c2.A && c1.B == c2.B;
        }

        /// <summary>Real number equality.</summary>
        public static bool operator ==(Z c1, double v)
        {
            return c1.A == v && c1.B == 0;
        }

        /// <summary>Real number inequality.</summary>
        public static bool operator !=(Z c1, double v)
        {
            return c1.B != 0 || c1.A != v;
        }

        /// <summary>The inequality.</summary>
        public static bool operator !=(Z c1, Z c2)
        {
            return !(c1 == c2);
        }
        #endregion

        /// <summary>Object override for equality.</summary>
        public override bool Equals(object obj)
        {
            return this == (Z)obj;
        }

        public override string ToString()
        {
            if (B == 0) return "" + A;
            if (B < 0) return "" + A + " - i" + Math.Abs(B);
            return "" + A + " + i" + B;
        }

        public override int GetHashCode()
        {
            return ToString().GetHashCode();
        }
    }
}
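A small sanity check of the arithmetic above; in particular the twiddle factors satisfy W_N^a * W_N^b = W_N^(a+b), so W(8,1) * W(8,7) should print a value near 1:

    Z a = new Z(1, 1);                        // 1 + i
    Z b = Z.e(2f, (float)(Math.PI / 4));      // 2e^{i*pi/4}
    Console.WriteLine((a * b).Magnitude);     // ~2.828, i.e. 2*sqrt(2)
    Console.WriteLine(Z.W(8, 1) * Z.W(8, 7)); // ~1 (a full turn around the unit circle)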
A.0.17 Vortex Visual

using System;
using System.Drawing;
using System.Threading;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Xna.Framework;
using Core.Drawing;
using Core.Util;
using Core.Mathematics;
using Core.Mathematics.Geometry;
using Core.Mathematics.Algebra;
using Toolkit.Media.DSP;
using Toolkit.Xna;
using Toolkit.Media.DSP.Filters;
using Toolkit.Media.UI;
using Rectangle = System.Drawing.Rectangle;
using Color = System.Drawing.Color;
using Core.Mathematics.Stats;

namespace Apps.PictographReader.Plugins.Visuals
{
    /// <summary>
    /// Provides an animation visual that creates a spinning vortex based upon the
    /// input parameters.
    /// </summary>
    public class VortexVisual : IVisualizer
    {
        #region Static Members
        private static float twoPI = (float)(Math.PI * 2f);
        #endregion

        #region Private Members
        private bool myEnabled = true;
        private Random ourRandom = new Random((int)EpochTime.Now);
        // Vector that defines rotation angles (in radians) about each axis per each render action.
        protected List<VortexPlusData> mySpinnies = new List<VortexPlusData>();
        private Z myDFT = new Z(0, 0);
        private VisualSpace myLastVS = VisualSpace.Empty;
        #endregion

        #region Private Utility
        /// <summary>
        /// Plot an arc centered about (x,y,z) with starting, ending and current angles of the given radius.
        /// </summary>
        private void PlotArc(IGraphicsCanvas g, float x, float y, float z, float start, float end, float cur, float rad, int color)
        {
            float x2, y2;
            // Calculate the current radial line and draw it.
            x2 = (float)(rad * Math.Cos(cur));
            y2 = (float)(rad * Math.Sin(cur));
            float incSz = (float)(end - start) / 60f;
            float i = start;
            float lX = (float)(rad * Math.Cos(i));
            float lY = (float)(rad * Math.Sin(i));
            for (; i <= cur; i += incSz)
            {
                x2 = (float)(rad * Math.Cos(i));
                y2 = (float)(rad * Math.Sin(i));
                g.DrawLine(lX + x, lY + y, z, x2 + x, y2 + y, z, color);
                lX = x2;
                lY = y2;
            }
        }

        /// <summary>
        /// Plot the line and increment it. Returns true when it is done.
        /// </summary>
        private void Plot(IGraphicsCanvas g)
        {
            try
            {
                XnaGraphicsCanvas graphics = (XnaGraphicsCanvas)g;
                graphics.Projection = Toolkit.Xna.eXnaCanvasPerspective.Perspective;
                if (!Inc()) return;
                lock (mySpinnies)
                {
                    foreach (VortexPlusData data in mySpinnies)
                    {
                        float br = data.mySize - (data.myAngle - data.mySAngle) / data.myEAngle * data.mySize;
                        if (br > 0 && data.Spectacle > .4f)
                            graphics.FillCircle(data.myX, data.myY, data.myZ, br, data.Color);

                        // Plot an arc.
                        int cnt = (int)(1f + data.MaxElements * data.Spectacle);
                        float rinc = data.mySize / (float)cnt;
                        float r = data.mySize;
                        float z = data.myZ;
                        int c = 0;
                        for (int i = 0; i < cnt; i++)
                        {
                            PlotArc(graphics, data.myX, data.myY, z, data.mySAngle, data.myEAngle, data.myAngle, r,
                                    data.myColors[c++ % data.myColors.Length]);
                            r -= rinc;
                            z -= rinc;
                        }
                    }
                } // End lock.
            }
            catch (Exception ex)
            {
                Log.Error(ex);
            }
        }

        /// <summary>Increment the animations to the next phase.</summary>
        private bool Inc()
        {
            lock (mySpinnies)
            {
                bool draw = false;
                for (int i = 0; i < mySpinnies.Count; i++)
                {
                    if (mySpinnies[i].IsDone)
                    {
                        mySpinnies.RemoveAt(i);
                        i--;
                    }
                    else
                    {
                        mySpinnies[i] = mySpinnies[i].Inc();
                        draw = true;
                    }
                }
                return draw;
            }
        }
        #endregion

        #region Constructors
        /// <summary>Create a vortex visual renderer.</summary>
        public VortexVisual() { }
        #endregion

        #region Utility Methods
        /// <summary>Reset the vortex.</summary>
        public void Clear() { mySpinnies.Clear(); }

        /// <summary>Add a sample to be processed.</summary>
        public void Update(VisualSpace vs)
        {
            if (!myEnabled) return;
            if (!myLastVS.IsEmpty)
            {
                lock (mySpinnies)
                {
                    // Starting angle.
                    float pStart = (float)(vs.Values[0] * twoPI);
                    // Ending angle.
                    float pEnd = (float)(pStart + vs.Values[1] * twoPI * 100f + vs.Spectacle * twoPI * 100);
                    // Radius.
                    float radius = vs.Values[0] * 1.5f;
                    // Set up center to be some growth away from focus based upon the pri/sec/third values.
                    float specRange = .25f;
                    float y = -2.0f + vs.Focus[0] * (2f - specRange + specRange * vs.Spectacle) + (float)ourRandom.NextDouble();
                    float x = -2.0f + vs.Focus[1] * (2f - specRange + specRange * vs.Spectacle) + (float)ourRandom.NextDouble();
                    float z = -2.0f + vs.Focus[2] * 2f + (float)ourRandom.NextDouble();
                    // Velocity.
                    float velocity = (float)(pEnd - pStart) / (80 - 30.0f * vs.Spectacle);
                    VortexPlusData sd = new VortexPlusData(pStart, pEnd, velocity, x, y, z, radius,
                                                           vs.ForegroundColors, vs.Spectacle, vs.MaxElements);
                    if (mySpinnies.Count < 30)
                        mySpinnies.Add(sd);
                }
            }
            myLastVS = vs;
        }

        public void PerformDrawing(IGraphicsCanvas canvas)
        {
            if (!myEnabled) return;
            Plot(canvas);
        }
        #endregion

        #region Properties
        /// <summary>Get the type of graphics system this expects.</summary>
        public eGraphicsSupport GraphicsSupport
        {
            get { return eGraphicsSupport.Xna; }
        }
        #endregion
    }

    #region Utility Classes
    public struct VortexPlusData
    {
        private int myCIdx;              // Index into color to use.
        public float mySAngle;           // Starting angle position.
        public float myEAngle;           // Ending angle position.
        public float myAngle;            // Current rotational position.
        public float myRotateVelocity;   // Angle increment.
        public float myX;                // Translation.
        public float myY;                // Translation.
        public float myZ;                // Translation.
        public float mySize;             // Length of line.
        public int[] myColors;
        public float Spectacle;
        public int MaxElements;
        private bool myFirstPassDone;

        /// <summary>Is this done rotating?</summary>
        public bool IsDone
        {
            get
            {
                if (float.IsNaN(myAngle) || float.IsNaN(myEAngle)) return true;
                if (!myFirstPassDone) return false;
                return myAngle <= mySAngle;
            }
        }

        /// <summary>Get the color to use on the current leg of the spinny guy.</summary>
        public int Color { get { return myColors[myCIdx]; } }

        /// <summary>Rotate to the next position.</summary>
        public VortexPlusData Inc()
        {
            if (!myFirstPassDone) myAngle += myRotateVelocity;
            else myAngle -= myRotateVelocity;
            myCIdx = (myCIdx + 1) % myColors.Length;
            if (myAngle >= myEAngle && !myFirstPassDone) myFirstPassDone = true;
            return this;
        }

        /// <summary>Create a spinny data object.</summary>
        /// <param name="color">RGB colors</param>
        /// <param name="cx">Center X of spinner</param>
        /// <param name="cy">Center Y of spinner</param>
        /// <param name="endAngle">Ending angle in radians</param>
        /// <param name="rad">Radius of spinner</param>
        /// <param name="startAngle">Starting angle of spinner in radians</param>
        /// <param name="velocity">Velocity of the spinner's rotation in radians</param>
        public VortexPlusData(float startAngle, float endAngle, float velocity, float cx, float cy, float cz,
                              float rad, int[] color, float spectacle, int elements)
        {
            mySAngle = startAngle;
            myEAngle = endAngle;
            myAngle = mySAngle;
            myRotateVelocity = velocity;
            myX = cx;
            myY = cy;
            myZ = cz;
            mySize = rad;
            myColors = color;
            myCIdx = 0;
            Spectacle = spectacle;
            MaxElements = elements;
            myFirstPassDone = false;
        }
    }
    #endregion
}
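The visual is driven entirely through the IVisualizer contract of Appendix A.0.18: the host pushes one VisualSpace per analysis frame and then asks for a draw. A hypothetical wiring, with canvas standing in for the host's XNA-backed IGraphicsCanvas and analysis/oneSecond from the A.0.13 example:

    IVisualizer viz = new VortexVisual();
    foreach (VisualSpace frame in analysis.Analyze(oneSecond))
        viz.Update(frame);
    viz.PerformDrawing(canvas);   // called from the host's render loop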
A.0.18 Visualizer Interface

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Core.Mathematics.Algebra;
using Core.Mathematics;
using Core.Drawing;
using Toolkit.Media.DSP;
using Toolkit.Media;
using Toolkit.Media.UI;

namespace Apps.PictographReader.Plugins.Visuals
{
    /// <summary>
    /// Provides an interface connected to some implementation that generates an
    /// animation or visual drawing based upon a parameter space.
    ///
    /// The underlying implementation is thought to have no "brains" and only draws what it is told.
    /// </summary>
    public interface IVisualizer : IGraphicsEnabled
    {
        /// <summary>Remove/clean up everything for another run.</summary>
        void Clear();

        /// <summary>Construct the visual based upon the parameter space.</summary>
        void Update(VisualSpace inSpace);
    }
}
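Any renderer that honors this contract can be plugged in. A minimal, hypothetical implementation that just logs its parameter spaces, assuming IGraphicsEnabled declares the PerformDrawing and GraphicsSupport members that VortexVisual implements above:

    public class ConsoleVisualizer : IVisualizer
    {
        public void Clear() { }
        public void Update(VisualSpace inSpace) { if (!inSpace.IsEmpty) Console.WriteLine(inSpace); }
        public void PerformDrawing(IGraphicsCanvas canvas) { }    // nothing to draw
        public eGraphicsSupport GraphicsSupport { get { return eGraphicsSupport.Xna; } }
    }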
A.0.19 Visualizer Space

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Core.Util;

namespace Toolkit.Media.UI
{
    /// <summary>
    /// Provides a structure that describes an animation sprite as a set of generic
    /// parameters to be interpreted by some graphic renderer.
    /// </summary>
    public struct VisualSpace
    {
        #region Public Constants
        public static readonly VisualSpace Empty = new VisualSpace(-1, null, null, null, null, -1);
        #endregion

        #region Properties
        /// <summary>
        /// A threshold value from 0 to 1 where 1 means an absolutely spectacular
        /// visualization and zero means nothing special.
        /// </summary>
        public float Spectacle;

        /// <summary>
        /// Maximum number of elements to be created.
        /// Designed to limit the underlying animator.
        /// </summary>
        public int MaxElements;

        /// <summary>
        /// A three-space value list to be interpreted by the visualizer.
        /// The values can be anything but are commonly interpreted as
        /// primary, secondary, tertiary respectively.
        /// </summary>
        public float[] Values;

        /// <summary>
        /// Provides the centroid element that describes some focal region of the animation.
        /// </summary>
        public float[] Focus;

        /// <summary>The primary color (RGBA) values to be used.</summary>
        public int[] ForegroundColors;

        /// <summary>A list of color values to be used in background renderings.</summary>
        public int[] BackgroundColors;

        public float Mag;
        #endregion

        #region Constructors
        /// <summary>Create the visualizer space.</summary>
        public VisualSpace(int inMaxElements, float[] inValue, float[] inFocus, int[] inColors, int[] inBGColors,
                           float inSpectacle)
        {
            MaxElements = inMaxElements;
            Values = inValue;
            Focus = inFocus;
            ForegroundColors = inColors;
            BackgroundColors = inBGColors;
            Spectacle = inSpectacle;
            Mag = 0;
        }
        #endregion

        public override string ToString()
        {
            StringBuilder sb = new StringBuilder();
            sb.Append(ExtraString.ToString(Values));
            sb.Append(",");
            sb.Append(ExtraString.ToString(Focus));
            sb.Append(",");
            sb.Append(Spectacle);
            return sb.ToString();
        }

        #region State Properties
        /// <summary>Ask if this visual space is an empty space.</summary>
        public bool IsEmpty
        {
            get
            {
                return MaxElements == -1 && Spectacle == -1 && ForegroundColors == null &&
                       Values == null && Focus == null;
            }
        }
        #endregion
    }
}
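A populated instance looks like the following; the values are illustrative only and the colors are packed ARGB:

    VisualSpace vs = new VisualSpace(
        30,                                        // MaxElements
        new float[] { .5f, .25f, .75f },           // Values: primary/secondary/tertiary
        new float[] { 0f, 0f, 1f },                // Focus centroid
        new int[] { unchecked((int)0xFFFF0000) },  // foreground (red)
        new int[] { unchecked((int)0xFF000000) },  // background (black)
        .6f);                                      // Spectacle
    Console.WriteLine(vs.IsEmpty);                 // false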
A.0.20 Survey Results

using System;
using System.IO;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Core.Mathematics.Linear;
using Core.Util;
using Core.IO;

namespace ThesisAnalysis
{
    /// <summary>
    /// A class that parses an email response and computes
    /// the score of a taken survey.
    /// </summary>
    public class SurveyResults
    {
        #region Private Members
        // Scores for each survey (1 per person).
        private List<SurveyScore> myScores = new List<SurveyScore>();
        private int myFail = 0;
        #endregion

        #region Private Util
        /// <summary>Score computation that is an inverse score.</summary>
        private float s(float a) { return (5f - a) / 5f; }

        /// <summary>Score computation that is a direct score.</summary>
        private float t(float a) { return a / 5f; }

        /// <summary>Compute the score given a set of answers.</summary>
        private float ComputeScore(float[] answers)
        {
            float[] mappings = { 0, 5, 4, 0, 2, 1 };
            float[] importance = { 2, 3, 8, 8, 10, 10, 5 };

            // Map answers to values.
            for (int i = 0; i < answers.Length; i++) answers[i] = mappings[(int)answers[i]];
            return s(answers[0]) * importance[0] +
                   s(answers[1]) * importance[1] +
                   t(answers[2]) * importance[2] +
                   t(answers[3]) * importance[3] +
                   t(answers[4]) * importance[4] +
                   t(answers[5]) * importance[5] +
                   t(answers[6]) * importance[6];
        }
        #endregion

        #region Constructors
        /// <summary>
        /// Create a survey results processor by loading every file in the given directory
        /// with extension .txt, assuming each file is an email response from the survey.
        /// </summary>
        public SurveyResults(FilePath inDir)
        {
            foreach (FilePath fp in inDir.GetFiles(".txt")) Add(fp.Text);
        }
        #endregion

        #region Utility
        /// <summary>Add the results of one email to the processor.</summary>
        /// <param name="inMsg">The email message</param>
        public void Add(String inMsg)
        {
            try
            {
                StringReader sr = new StringReader(inMsg);
                String line;
                List<String> results = new List<String>();
                int run = 0;
                String user = "";

                // Read each line and look for the results (ignore debug).
                while ((line = sr.ReadLine()) != null)
                {
                    if (line.Trim() == "") continue;
                    else if (line.Contains("Run:"))
                    {
                        run = int.Parse(line.Substring(line.IndexOf(":") + 1).Trim());
                        continue;
                    }
                    else if (line.Contains("TimeStamp:"))
                    {
                        user = line.Substring(line.IndexOf(":") + 1).Trim();
                        continue;
                    }

                    // Found results section; read til the log section.
                    if (line.Contains("RESULTS"))
                    {
                        while ((line = sr.ReadLine()) != null && !line.Contains("LOG"))
                        {
                            if (line.Trim() == "") continue;
                            results.Add(line);
                        }
                    }
                }
                SurveyScore ss = new SurveyScore();
                float bestScore = ComputeScore(new float[] { 5, 5, 1, 1, 1, 1, 1 });

                // Compute the score for each.
                for (int i = 0; i < results.Count; i++)
                {
                    String s = results[i];
                    String genre = ExtraString.SubString(s, 0, s.IndexOf(":")).Trim().ToUpper();
                    s = ExtraString.SubString(s, s.IndexOf(":") + 1, s.Length);
                    String[] opts = s.Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
                    float[] v = new float[opts.Length];
                    for (int j = 0; j < opts.Length; j++) v[j] = int.Parse(opts[j]);
                    ss.Add(new QuestionScore(run, user, genre, (float)(ComputeScore(v) / bestScore) * 100f));
                }
                myScores.Add(ss);
            }
            catch (Exception ex)
            {
                Log.Error(ex);
                myFail++;
            }
        }

        /// <summary>Create CSV from the scores.</summary>
        public String ToRawCSV()
        {
            StringBuilder sb = new StringBuilder();
            float f = 0;
            float fcnt = 0;
            sb.Append("User, Grade %\r\n");
            for (int i = 0; i < myScores.Count; i++)
            {
                foreach (QuestionScore qs in myScores[i])
                {
                    sb.Append(qs.Genre.ToUpper() + "," + qs.Score + "\r\n");
                    f += qs.Score;
                    fcnt++;
                }
            }
            sb.Append("\r\n\r\n\r\n");
            TitleSummer ts = new TitleSummer();
            sb.Append("Genre Grade Ave %\r\n");
            for (int i = 0; i < myScores.Count; i++)
            {
                foreach (QuestionScore q in myScores[i]) ts.Add(q.Genre, q.Score);
            }
            foreach (String key in ts.Titles) sb.Append(key + "," + ts.GetScore(key) + "\r\n");
            sb.Append("\r\n\r\n\r\n");
            TopperLower tl = new TopperLower(true, 5);
            sb.Append("Top 5 Scores, Grade %\r\n");
            for (int i = 0; i < myScores.Count; i++) tl.Add(myScores[i]);
            foreach (QuestionScore s in tl) sb.Append(s.Genre + "," + s.Score + "\r\n");
            sb.Append("\r\n\r\n\r\n");
            tl = new TopperLower(false, 5);
            sb.Append("Bottom 5 Scores, Grade %\r\n");
            for (int i = 0; i < myScores.Count; i++) tl.Add(myScores[i]);
            foreach (QuestionScore s in tl) sb.Append(s.Genre + "," + s.Score + "\r\n");
            Console.WriteLine("Overall Grade " + (f / fcnt));
            return sb.ToString();
        }

        /// <summary>Get the scores.</summary>
        public SurveyScore[] Scores() { return myScores.ToArray(); }
        #endregion

        #region Properties
        /// <summary>Get the number of failed surveys.</summary>
        public int Failed { get { return myFail; } }

        /// <summary>Get the total number of surveys taken.</summary>
        public int Count { get { return myScores.Count; } }
        #endregion
    }

    #region Internal Classes
    /// <summary>Class that manages the score to a full survey.</summary>
    public class SurveyScore : List<QuestionScore>
    {
        public SurveyScore() { }

        /// <summary>Get the best score.</summary>
        public QuestionScore Best
        {
            get
            {
                QuestionScore b = null;
                foreach (QuestionScore qs in this)
                {
                    if (b == null || qs.Score > b.Score) b = qs;
                }
                return b;
            }
        }

        /// <summary>Get the worst score.</summary>
        public QuestionScore Worst
        {
            get
            {
                QuestionScore b = null;
                foreach (QuestionScore qs in this)
                {
                    if (b == null || qs.Score < b.Score) b = qs;
                }
                return b;
            }
        }

        /// <summary>Compute the overall score.</summary>
        public float Score
        {
            get
            {
                double v = 0;
                foreach (QuestionScore qs in this) v += qs.Score;
                return (float)v / (float)Count;
            }
        }
    }

    /// <summary>Class that manages a score to a single question.</summary>
    public class QuestionScore
    {
        public int Run;
        public String User;
        public float Score;
        public String Genre;

        public QuestionScore(int run, String user, String genre, float score)
        {
            Run = run;
            User = user;
            Genre = genre;
            Score = score;
        }

        public override string ToString() { return Genre + ": " + Score; }
    }

    public class TitleSummer
    {
        // Values.
        private Dictionary<String, float> myD = new Dictionary<string, float>();
        // Counts.
        private Dictionary<String, float> myC = new Dictionary<string, float>();

        /// <summary>Put or get without having to worry if the item already exists.</summary>
        public void Add(String key, float v)
        {
            if (myD.ContainsKey(key))
            {
                float f = myD[key];
                f += v;
                myD[key] = f;
                float cnt = myC[key];
                cnt++;
                myC[key] = cnt;
            }
            else
            {
                myD.Add(key, v);
                myC.Add(key, 1);
            }
        }

        /// <summary>Get all the titles of this summation.</summary>
        public String[] Titles { get { return myD.Keys.ToArray(); } }

        /// <summary>Get the score for the given title.</summary>
        public float GetScore(String title)
        {
            float s = myD[title];
            return s / myC[title];
        }
    }

    /// <summary>Holds top/bottom N scores.</summary>
    public class TopperLower : List<QuestionScore>
    {
        private bool myTop = true;
        private int myCnt = 0;

        public TopperLower(bool doTop, int cnt)
        {
            myTop = doTop;
            myCnt = cnt;
        }

        public void Add(SurveyScore ss)
        {
            foreach (QuestionScore q in ss)
            {
                if (Count < myCnt)
                {
                    base.Add(q);
                    continue;
                }

                // Find and replace the low/high score.
                for (int i = 0; i < Count; i++)
                {
                    if (myTop)
                    {
                        if (this[i].Score < q.Score) { this[i] = q; break; }
                    }
                    else if (this[i].Score > q.Score) { this[i] = q; break; }
                }
            }
        }
    }
    #endregion
}
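A hypothetical driver for the processor, assuming FilePath can be built from a directory string (the exact Core.IO API is not shown in this appendix):

    SurveyResults results = new SurveyResults(new FilePath(@"C:\surveys"));
    Console.WriteLine(results.Count + " surveys parsed, " + results.Failed + " failed");
    File.WriteAllText("scores.csv", results.ToRawCSV());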