By now, almost everyone knows that everything is a vibration…that the things that make up reality vibrate at different rates, measured by frequency: the number of times they vibrate per second. This, of course, has led to the idea that these vibrations can be measured and analyzed by various means and then used to modify, change, or even transform the things they make up. Over the years, many different methods have been developed to effect change on human bodies, minds, and spirits using light, sound, color, aroma, tactile vibration, and other forms of sensory stimulation. (See the Vibrational Science Library on the InnerSense website for additional information about sensory science.)

Frequency analysis of human biometrics has been around for decades. Brainwaves, heart rate and heart-rate variability, and other measures of metabolic rate are better known than the lesser-understood analysis of the voice. Yet voice may turn out to be the biometric of choice for identifying who a person is and what they want; after all, it is the mechanism the universe gave us to say who we are and what we want. Long used in forensic science, security, and vocal coaching, analysis and diagnosis of the human voice has become all the rage in sound, music, vocal, and other forms of vibrational therapy in recent years, even though it has been brewing since the early 1990s.

Various theories have arisen over the years, resulting in methods, protocols, and devices for myriad applications: physical and mental status, nutritional evaluation, emotional catharsis/cathexis, motivation, and others. The purpose of this paper is to sort out these different protocols and create a standard of reference for therapists with regard to vibrational signatures.

The MicroStructure of Waves

To begin, it's important to understand the basic microstructure of sound and light waves. They travel through media in different modes of vibration (sound as a longitudinal, back-and-forth wave; light as a transverse wave), but otherwise behave much the same. Frequency is only one of the factors in waveform analysis and synthesis. Phase and Amplitude play an equal role in describing a wave; together these define the Waveshape…the actual architecture of the wave.

Figure One: Wave Properties: Frequency, Amplitude, Phase & Waveshape

Single sine or cosine waveshapes are simple to understand and illustrate, being just the familiar gently sloping curve occurring over time. Frequency is also easy to grok: it is just the number of cycles, or waves, per time period. In the image above, three wave peaks occur across a span of three seconds, in other words…one cycle per second, or one hertz (1 Hz). Amplitude presents as the height, or power, of the wave. Complex waves are more complicated to understand and diagram. Sound engineers are taught that complex waves are made up of a series of simple sine waves; more precisely, each component is a sine wave, a cosine wave, or some combination of the two, which sets the Phase of that component, or how it peaks over time. The human body, however, is like an instrument that vibrates across a very wide range of frequencies. The human voice, for example, traverses a broad band of the musical scale.
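Before moving on to the voice itself, the three wave properties shown in Figure One can be sketched numerically. The short Python example below is a generic illustration (not taken from any particular analysis system): it samples a sine wave from its frequency, amplitude, and phase, and shows that a cosine is simply a sine shifted by a quarter cycle of phase.

```python
import math

def sine_wave(freq_hz, amplitude, phase_rad, duration_s, sample_rate=100):
    """Sample a single wave: amplitude * sin(2*pi*freq*t + phase)."""
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * (i / sample_rate) + phase_rad)
            for i in range(n)]

# 1 Hz over 3 seconds: exactly three full cycles, as in Figure One.
wave = sine_wave(freq_hz=1.0, amplitude=1.0, phase_rad=0.0, duration_s=3.0)

# A cosine is the same wave with its phase advanced by a quarter cycle (90 degrees).
cos_wave = sine_wave(1.0, 1.0, math.pi / 2, 3.0)
```

Changing `freq_hz` squeezes more cycles into the same span, changing `amplitude` scales the height, and changing `phase_rad` slides the peaks in time without altering either.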
The highest vocal frequency on record is G10, or 25 kHz, set by Brazilian singer Georgia Brown in 2004, and the lowest is G-7, or 0.189 Hz, eight octaves below the lowest G on a piano, set by Tim Storms in 2012; so, at the extremes, a full 17-octave range! This is one of the reasons that finding the fundamental frequency of a biometric signal is so difficult: human signals float around within a range of frequencies and rarely stay on the same frequency for very long. To resolve this, timed samples are usually taken and averaged, thereby discarding all of the harmonic data present. The only way to accurately identify a natural harmonic fundamental is to identify the lowest frequency in the voice that carries a complete whole-number harmonic overtone series.

Figure Two: Frequency Ranges of Musical Instruments, Including Voice

All other components of the signal are noise, with the exception of certain forms of sibilance. This causes a major problem in voice analysis and feedback: without the fundamental frequency it is impossible to see the harmony within a signal. The mean or average frequency shows where the middle of the scale is, but it says little about the true harmonic signature of the signal; two frequencies of 100 and 200 Hz average to 150 Hz, which is harmonically related to neither. If "wavelet" theory is applied, however, it is possible to observe the crossover points and procession of peaks in the partials and find the true fundamental frequency.

Figure Three shows true fundamental frequencies being generated by a human voice in real time. Simultaneously, it shows the real-time pitch, notating it on manuscript paper while indicating the note's position on a 140-key piano keyboard. The sound frequency is instantly converted into its correlated light and color, which can be output to a computer screen, wall, ceiling, or other surface with a projector or MIDI color synthesizer.
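The "complete whole-number overtone series" test described above can be sketched in code. The function below is a simplified illustration only; it assumes peak detection has already been done and simply returns the lowest candidate frequency whose integer multiples account for every spectral peak. When an unrelated component breaks the series, no fundamental is reported, which is exactly why an averaged value like 150 Hz (the mean of 100 and 200 Hz) tells us nothing about harmony.

```python
def fundamental_from_peaks(peak_freqs_hz, tolerance=0.05):
    """Lowest candidate whose integer multiples explain every peak,
    i.e. a complete whole-number harmonic overtone series."""
    for candidate in sorted(peak_freqs_hz):
        if all(abs(f / candidate - round(f / candidate)) <= tolerance
               for f in peak_freqs_hz):
            return candidate
    return None  # no single harmonic series fits; the extra peaks are noise

# 110, 220, 330, 440 Hz form a series over a 110 Hz fundamental...
print(fundamental_from_peaks([110.0, 220.0, 330.0, 440.0]))   # 110.0
# ...while an unrelated 317 Hz component breaks the series.
print(fundamental_from_peaks([110.0, 220.0, 317.0, 440.0]))   # None
```

The `tolerance` parameter is an assumption standing in for real-world pitch drift; a production analyzer would need far more care with peak detection and vibrato.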
Once frequency accuracy has been verified, phase and amplitude can be measured and analyzed in complex waves.

Frequency in Sound Therapy

Although a proper frequency analysis is the required starting point for identifying an individual's personal vibrational signature, it is extremely limiting to rely on frequency alone, because of difficulties inherent in its accurate capture, analysis, and reproduction:

  1. Human beings are like musical instruments whose brain, heart, and voice frequencies vary across a very wide range. Without knowing which overtones are related, it is impossible to see the harmony in the signal. To accomplish this, the true fundamental frequency must be found, and that cannot be done with a timed sample that has been averaged, which only yields the mean. The true mode can only be ascertained by real-time measurement of the specific frequencies, rather than by quantizing the signal to musical notes.

  2. In frequency analysis, the unit of measurement must be a function of the sampling rate used to take the sample. Frequency is measured in hertz (Hz), or cycles per second, so any sampling rate must reduce down to one cycle per second in the zero octave to be accurate. Most available systems report in semitones, not hertz, so around middle C (256 Hz in scientific pitch) they can be off by as much as 7.2 Hz on the low side (248.8 to 256 Hz) and 7.5 Hz on the high side (256 to 263.5 Hz). At the high end of a female soprano voice the measurement can be off by as much as 25.7 Hz!
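The error bounds in point 2 follow from equal temperament: when a measured frequency is reported only as the nearest semitone, every frequency within half a semitone (a factor of 2^(1/24)) lands in that note's bin. A short Python check, using scientific pitch (C4 = 256 Hz) and, as an assumption, A5 = 880 Hz for the upper soprano range, reproduces those figures to within rounding:

```python
def semitone_quantization_error(note_hz):
    """Worst-case error (Hz) when a frequency is reported only as the
    nearest semitone: the note's bin spans half a semitone each way."""
    low = note_hz * 2 ** (-1 / 24)    # half a semitone below
    high = note_hz * 2 ** (1 / 24)    # half a semitone above
    return note_hz - low, high - note_hz

below, above = semitone_quantization_error(256.0)   # C4 in scientific pitch
print(round(below, 1), round(above, 1))             # 7.3 7.5

_, above_a5 = semitone_quantization_error(880.0)    # A5, upper soprano range
print(round(above_a5, 1))                           # 25.8
```

Because the bin width is a fixed *ratio*, the absolute error in hertz grows with pitch, which is why the soprano-range error is several times the middle-C error.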

  3. The human voice has particular vowel formants that are tied to the dialect the speaker is using. A formant is the spectral shaping that results from an acoustic resonance of the human vocal tract.[1][2] When people use more of one vowel than another, it shifts their frequencies toward those formants. For American English the formants are:

Figure Three: Frequency Formants for American English

  4. In reproduction, the relative phases of all harmonics of the fundamental must be maintained. It is impossible to start separate light, sound, color, and vibration generators at the same instant so that the peaks of each partial remain in phase. All generators must be run and maintained in a fully phase-synchronous manner for phase relationships to be preserved and transmitted. The only way to do that is a continuous phase-synchronization monitoring and adjustment algorithm, which resynchronizes every oscillator whenever any parameter of any single oscillator changes.
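One common way to keep partials locked together, sketched below, is a shared phase accumulator: every harmonic's phase is derived from a single master phase, so a parameter change can never knock the partials out of phase with one another. This is a minimal, generic illustration (the class name and structure are assumptions, not the algorithm of any specific device):

```python
import math

class HarmonicOscillatorBank:
    """All partials share one master phase accumulator, so they stay
    phase-locked to each other even when parameters change mid-run."""

    def __init__(self, fundamental_hz, amplitudes, sample_rate=48000):
        self.freq = fundamental_hz
        self.amps = list(amplitudes)   # amplitude of harmonic 1, 2, 3, ...
        self.rate = sample_rate
        self.master_phase = 0.0        # in cycles of the fundamental

    def set_fundamental(self, new_hz):
        # Only the per-sample increment changes; the accumulated phase is
        # kept, so every partial glides to the new frequency with no jump.
        self.freq = new_hz

    def next_sample(self):
        s = sum(a * math.sin(2 * math.pi * (n + 1) * self.master_phase)
                for n, a in enumerate(self.amps))
        self.master_phase = (self.master_phase + self.freq / self.rate) % 1.0
        return s
```

Because harmonic n's phase is always exactly n times the master phase, the partials cannot drift apart; a rig that spans separate light, sound, and vibration generators would need the equivalent discipline across devices.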

Figure Four: Digital versus Analog Square Waves

  5. There is a lot of discussion these days about whether samples should be taken and reproduced by analog or digital means. The truth is that each has advantages and disadvantages. Analog devices are generally considered superior because they analyze and reproduce the true waveform of a signal, whereas digital provides only an approximation of the original, due to the gaps created by sampling the signal at discrete intervals instead of capturing the continuous wave. However, research has shown that digital may be the method of choice because it is faster, more accurate and precise with regard to frequency, and far easier to control, not to mention much less expensive. Analog frequency analyzers that offer phase, amplitude, and temperature control cost many tens of thousands of dollars because of the need for lock-in amplifiers and environmental regulators that simply don't exist on simple machines.

  6. As an example, a sawtooth wave contains all harmonics, even and odd, while a classic square wave is composed of only the odd overtones. In a digitally produced square wave, the sharp edges and abrupt 90˚ turns at the transitions along the wave create a hard, transient sound that can be dangerous to speakers and transducers. These artifacts result in noise and sometimes clipping, whereas the analog version has a smoother, more rounded transition that creates a more natural flow between the curves of the wave.

  • Until recently it was impossible to create digital waveshapes identical to their analog counterparts. With tools like Fourier additive synthesis and polynomial transition region algorithms, however, the architecture of the wave can be controlled and digital signals can be made far more natural. Below is a square wave produced using Fourier additive synthesis on a Sensorium LSV III system.

  • The transition zones of this digital wave are more or less identical to those of the natural analog wave shown in Figure Five. All things considered, polynomial digital synthesis is now the preferred method due to cost, ease of use, accuracy, and reproducibility. The Sensorium LSV III utilizes both polynomial and Fourier synthesis.
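The kind of band-limited square wave discussed above can be sketched with Fourier additive synthesis: odd harmonics only, each at 1/n amplitude, summed just up to the Nyquist limit so the transitions stay rounded rather than stair-stepped. This is a generic textbook construction, not the Sensorium LSV III's actual algorithm:

```python
import math

def additive_square(freq_hz, duration_s, sample_rate=48000, max_harmonic=None):
    """Band-limited square wave: odd harmonics at 1/n amplitude,
    summed only up to the Nyquist frequency (sample_rate / 2)."""
    nyquist = sample_rate / 2
    limit = max_harmonic if max_harmonic is not None else int(nyquist // freq_hz)
    odd = range(1, limit + 1, 2)                      # harmonics 1, 3, 5, ...
    return [(4 / math.pi) * sum(math.sin(2 * math.pi * n * freq_hz * i / sample_rate) / n
                                for n in odd)
            for i in range(int(duration_s * sample_rate))]

# One cycle of a 100 Hz square wave: plateaus near +/-1 with small
# Gibbs ripple at the transitions instead of instantaneous 90-degree turns.
sq = additive_square(100.0, 0.01)
```

Lowering `max_harmonic` rounds the transitions further (a duller, softer square); raising the harmonic count sharpens the edges while the truncation still prevents the aliasing a naive digital step would introduce.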

So, the bottom line is that relying solely on frequency is not preferred, because of the inability to determine the harmonic architecture, sample-rate issues, and frequency formants.

Voice Analysis and Feedback

Whereas analysis and diagnosis of a human heart (ECG), brain (EEG), or voice (VG) can each serve as a starting point in sound therapy, voice analysis is by far the most common. There are several schools of thought about the best way to analyze the voice and interpret the results. The results are used as a type of diagnosis from which a therapy can be devised to calm, stimulate, or otherwise transform the issue at hand.

  • Fundamental Frequency – the lowest note, in real time, that has a full set of whole-number-related harmonics or overtones. All other frequencies are noise, unrelated to the fundamental. This is the true fundamental frequency of the person's or instrument's timbre, with a related set of overtones that makes the voice sound the way it does. It is the current fundamental, as opposed to the combinations that accumulate over time as a person talks.

  • Timbre – the actual sound of an instrument or voice, produced by the fundamental frequency along with its full set of related harmonics…the vocal signature. This signature is useful in creating sensory-resonance programs that synchronize light, sound, music, and color to a person's personal signature. It is what sets the relative overtone Amplitudes and WaveShape of the signal.

  • Mode – the music note with the most accumulated magnitude from an expression about a particular subject over time. This is the cumulative fundamental frequency.

  • Main Note – the music note spoken most often about a particular subject, accumulated over time: the number of times each note is hit while thinking or speaking about that subject.

  • Key signature – each note of the octave has a set of related notes that play in harmony with that note as a fundamental tonic. Keys can be any of the 12 Chromatic tones. People tend to think and speak in one or more of these keys when expressing thoughts or words about a particular subject over time.

  • Stressed or Weak Music Notes – notes accumulated from a thought or expression about a particular subject over time.

  • Evidence-Based Feedback – feedback from users/clients about how particular sounds, music, or colors make them feel. Some are calling this the Soul Note.

  • Soul Song – music created from the information above, designed specifically for that person.

  • Music Note to Color Correlation – music notes can be transposed up by 40+ octaves into the light range to correlate with an exact color, based on the Law of the Octaves.
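The note-to-color correlation above can be sketched directly: raising a pitch one octave doubles its frequency, so forty doublings carry an audio frequency into the hundreds-of-terahertz range of visible light, from which a wavelength (and hence a color) follows. The function below is a minimal illustration of that arithmetic; the function name is an assumption, and some notes need 41 or 42 doublings to land inside the visible band:

```python
def transpose_to_light(freq_hz, octaves=40, c=299_792_458.0):
    """Raise a frequency by whole octaves (doubling per octave),
    then report the corresponding light frequency and wavelength."""
    light_hz = freq_hz * 2 ** octaves
    wavelength_nm = c / light_hz * 1e9   # wavelength = speed of light / frequency
    return light_hz, wavelength_nm

hz, nm = transpose_to_light(440.0)          # concert A4
print(f"{hz/1e12:.1f} THz, {nm:.0f} nm")    # 483.8 THz, 620 nm (orange-red)
```

So under this octave-doubling scheme, concert A maps to roughly 620 nm, in the orange-red part of the spectrum.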

The entire point of analyzing a human biometric is to observe, capture, analyze, and evaluate the harmony contained within it. Due to the limitations of current frequency-analysis technology, however, this has so far had to be approached in reverse: finding and evaluating the disharmony, noise, and dis-coherence, which has led to the allopathic paradigm of healing that attempts to find and fix what's wrong. What if there were a way to identify what's right and simply enhance it?

What a person is willing to absorb doesn't necessarily mean it's good for them. Conversely, the dis-coherence in a person doesn't necessarily mean it's bad. Some have said that a person's signature contains all of the bad as well as the good, so it should not be fed directly back to them for fear of amplifying the bad. However, that is only true when the signal contains noise or dissonance. A true fundamental frequency has no noise, since all of its overtones are harmonically related to the fundamental.

Most available methods are based on the allopathic paradigm: something is wrong and must be fixed. Their algorithms find the disharmonies in body, mind, emotion, and spirit so that they can be treated and healed. Having access to the true harmony in a person's frequency signature, however, allows the identification of what's right, which can then be enhanced. This approach is much more elegant and efficient than the allopathic method, and far easier on both client and therapist. Clients are not asked to ignore the dissonance, but simply to set it aside for the moment so that attention can be given to enhancing what's right. This method is called Ktisis, and it leads to the ultimate transformation…Allasso: transformation from an evolutionary, material creature into an eventuated spiritual being. In other words, transcending the human state to become who we will be on the other side. Fortunately, that state can be attained while still in the flesh.
This has become our goal and mission.

Frequency in Brainwaves and Heartbeats

The frequency of heartbeats matches that of delta brainwaves, ranging from about one cycle per second to around two. Music with a rhythm or beat that matches a normal heart rate has been shown to be profoundly relaxing. Brainwaves ranging from zero to 128 cycles per second correspond to the vibrotactile range of the body's corpuscular system and the body's ability to feel. Both (sub-gamma) brainwaves and heartbeats lie in the subsonic range, below the range of human hearing.

Electrocardiography (EKG), Heart Rate Variability (HRV), and electroencephalography (EEG) are the biometrics of choice for measuring the frequencies of the heart and brain. Whereas most biofeedback systems are indirect, providing only notifications of the successful completion of a goal, direct systems can measure, receive, analyze, and feed back vibrations that are directly related to the biometric being used. Most systems divide the brain signal into frequency bands such as sub-delta (or S.O., for Slow Oscillation), delta, theta, alpha, beta, and gamma, depending on their protocols. Various systems have been developed to treat with frequencies in those ranges. For example, the theta range is accessed for hypnosis and lucid dreaming, whereas the delta range is utilized in sleep therapy. Alpha, of course, is the range clients are asked to relax into. Beta is the normal awake state, with gamma being the latest range to be researched; 40 Hz stimulation with both light and sound is currently being touted as a treatment for Alzheimer's.

Frequency Therapy and Feedback