This week I managed to mix and incorporate the Jetliner String Quartet recording for 737. I was also able to record eight phrases on my electric cello: two phrases for each of the following: 737, DC-8, 707, & 747. While I haven’t played cello regularly in nearly 40 years, I find that I am slightly better at it than I am at the trombone. I’m still not particularly good at playing the instrument, but if you slap a bunch of effects on it, it does sound nice and spacey.
I’ve also been re-editing the string orchestra samples. One of the first things I did for the Rotate project was to add samples of Musiversal’s Budapest String Orchestra that I had recorded for my previous album. These samples went in right after the backing tracks were recorded, and I added a lot of them. Now that the recordings are getting kind of thick, I want to thin out the string orchestra samples so they do not compete as much with the string quartet recordings. I managed to thin out TriStar and 737 in this manner. All in all, it was a decent amount of work for a week in which I was driving to tech rehearsals in Andover, MA for more than half of the week. It puts me a bit ahead of the game in terms of what I hope to accomplish next week.
Since I’ve been posting teasers related to the next album project over the last couple of updates, I’ll share a bit more. I’m pleased to announce that I have working algorithms for two of the six movements whose backing tracks I plan on recording this coming summer. At the rate I’m crafting these algorithms, I could be ready to record those backing tracks sometime in early 2024. Regardless, I will start sharing examples from these algorithms early in the new year.
As I posted a link to the new mix of DC-8 featuring the Jetliner String Quartet recording, I’ll repost the string score for the movement. This is the only movement that uses quarter note arpeggiations. I’m also fond of the D# diminished chord over an E pedal at rehearsal C. I think it’s a particularly tasty harmony.
I had hoped to post this on Friday, Saturday, or Sunday, but it has been a busy time. The good news is that I got more work done than I had expected to. I mixed and incorporated the string quartet recordings for eight of the nine movements. Some of the movements had multiple usable takes, so in some instances I chose to double-track (or, in one instance, triple-track) the string quartet recordings to thicken things up a bit. Ultimately, I was able to add string quartet recordings to TriStar, A300, DC-8, 727, 707, DC-10, DC-9, and 747.
This will be a busy week for me as I am in tech week for a production of A Wrinkle in Time up in Andover. That being said, I expect to be able to complete the final string quartet mix, and to be able to get started recording some electric cello, which will put me a bit ahead of schedule. Since I have little to share this week, I’ll share a bit more about my next album project that I unveiled last week.
My plan is to have the album consist of 18 tracks, which should strike a good middle ground between ME7ROPOL17AN 7RANSPOR7A71ON AU74OR17Y‘s many short tracks and Rotate‘s few long ones. As is the case with Rotate, the drum machine and synth parts will be generated by algorithms written in PureData. However, there will be three different broad models for these algorithms, so this forthcoming album will feature more variety. The plan is to record the backing tracks for 6 of the movements during the summer of 2024, another 6 (using different algorithms) during the following summer, and a final 6 movements (using a third set of algorithms) during the summer of 2026.
Since I’ve released the recording of TriStar featuring the string quartet recording, I’ll re-share the score for the quartet for those who want to follow along . . .
Well, it looks like I’m a Trombone Champ! I managed to record 7 trombone phrases this week, which is not a lot of work, but it is just enough for me to have finished all of the trombone recording I had wanted to do. Thus, I can put the trombone back up in the attic until my next major recording project. My embouchure probably only improved marginally over the two and a half weeks of recording. If I plan on recording much with the instrument again, I should take it out a few weeks before starting the project and get my lip in better shape.
Three of the phrases I recorded were for 727. The remaining four phrases were trombone recordings for the center sections of A300, DC-10, DC-9, and 747. While I’m ahead of where I thought I’d be last week, it doesn’t really change the schedule much. Tomorrow I will be recording a string quartet in Providence. These recordings will span the transition from the center section to the final section of each movement. I will likely be editing and mixing these recordings over the next couple of weeks. I’m expecting that time frame as the next couple of weeks will be busy for me. I’ll be taking Thursday and Friday off next week to go to an event in Boston. The following week I will be going into tech week for a production of A Wrinkle in Time in Andover. This means that much of next week will be spent finalizing my sound design work for the production, so I may get little to no work on Rotate done next week. Accordingly, the new schedule for the rest of the semester is . . .
I had mentioned last week that I may have some information to share regarding progress on a closely related project I have been working on. Since the next couple of weeks may also be light weeks, I won’t share everything all at once; that way I’ll have things to share for the first half of November. That being said, I’ve made significant progress on the plan for my next studio album as Darth Presley. I plan on taking three to four years to complete the next one.
While I’m proud of my work on both ME7ROPOL17AN 7RANSPOR7A71ON AU74OR17Y and Rotate, I feel like both albums are a bit too consistent. The movements within each of the two projects are very similar, varying mainly in tempo, pitch collection, and sometimes instrumentation. This is why I want to spend more time on the next studio album. I have some other material I can likely release in the next few years on the side: songs with lyrics, live recordings from Rotate, and other material. While I have more to share about the next project, I’ll save it for the next couple of weeks.
I’m afraid I’m not as pleased with this month’s entry as I had hoped to be. The instrument I developed worked fairly well on the Organelle, but when I used it in combination with a wind controller, it was not nearly as expressive as I had hoped. I had also hoped to use the EYESY with a webcam, but I was not able to get the EYESY to recognize either of my two USB webcams. That being said, I think the instrument I designed is a good starting point for further development.
The instrument I designed, Additive Odd Even, is an eight-voice additive synthesizer. Additive synthesis is an alternative approach to subtractive synthesis. Subtractive synthesis was the most common approach for the first decades of synthesis, as it requires fewer and less expensive components than most other approaches. Subtractive synthesis involves taking periodic waveforms that have rich harmonic content and using filters to subtract some of that content to create new sounds.
Additive synthesis has been theorized about but rarely attempted since the beginning of sound synthesis. Technically speaking, the early behemoth, the Telharmonium, used additive synthesis. Likewise, earlier electronic organs often used some variant of additive synthesis. One of the few true early additive synthesizers was the Harmonic Tone Generator. However, this instrument’s creator, James Beauchamp, only made two of them.
Regardless, additive synthesis involves adding pure sine tones together to create more complex waveforms. In typical early synthesizers this was impractical, as it would have required numerous expensive oscillators. As a point of reference, the Harmonic Tone Generator only used six partials.
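To make the idea concrete, here is a minimal Python sketch (my own illustration, not code from any of the instruments mentioned) that builds a tone by summing six sine-wave partials, echoing the Harmonic Tone Generator’s six-partial design:

```python
# Minimal additive synthesis sketch: sum six sine partials.
# Amplitudes of 1/n give a gentle, sawtooth-leaning timbre.
import numpy as np

SR = 44100                      # sample rate in Hz
t = np.arange(SR) / SR          # one second of time values
f0 = 220.0                      # fundamental frequency (A3)

signal = np.zeros_like(t)
for n in range(1, 7):           # partials 1 through 6
    signal += (1.0 / n) * np.sin(2 * np.pi * f0 * n * t)

signal *= 0.5 / np.max(np.abs(signal))   # normalize to leave headroom
```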
Additive Odd Even is based upon Polyphonic Additive Synth by user wet-noodle. In my patch, knob one controls the transposition level, allowing the user to raise or lower the pitch chromatically up to two octaves in either direction. The second knob controls the balance of odd versus even partials: when this knob is in the middle, the user will get a sawtooth wave, and when it is turned all the way to the left, a square wave will result. Knob three controls both the attack and release, which are defined in milliseconds, ranging from 0 to 5,000 (up to 5 seconds). The final knob controls the amount of additive synthesis applied, yielding a multiplication value of 0 to 1. This last knob is the one that is controlled by the amount of breath pressure from the WARBL. Thus, in theory, as more breath pressure is supplied, we should hear more overtones.
This instrument consists only of a main routine (main.pd) and one abstraction (voice.pd). Knob one is handled in the main routine, while the rest are handled in the abstraction. As we can see below, voice.pd contains 20 oscillators, which in turn provide 20 harmonic partials for the sound. We can see this in the way the frequency of each successive oscillator is multiplied by integers 1 through 20. A bit below these oscillators, we see that the amplitudes of these oscillators are multiplied by successively smaller values from 1 down to .05. These values roughly correspond to 1/n, where n is the harmonic number. Summing these values together would result in a sawtooth waveform.
We see more multiplication / scaling above these values. Half of them come directly from knob 2, which controls the odd / even mix. These are used to scale only the even-numbered partials. Thus, when the second knob is turned all the way to the left, the result is 0, which effectively turns off all the even partials. This results in only the odd partials being passed through, yielding a square waveform. The odd-numbered partials are scaled using 1 minus the value from the second knob. Accordingly, when knob 2 is placed in the center position, the balance between the odd and even partials should be the same, yielding a sawtooth wave. Once all partials but the fundamental are scaled by knobs 2 & 4, they are mixed together, and then mixed with the fundamental. Thus, we can see that neither knob 2 nor knob 4 affects the amplitude of the fundamental partial. This waveform is then scaled by .75 to avoid distortion, and then scaled by the envelope provided by knob three.
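To make the signal flow easier to follow, here is a hedged Python sketch of the same logic (the function is my own reconstruction from the description above, not the Pd patch itself; knob 4’s overall scaling is omitted for brevity):

```python
# Sketch of voice.pd's partial scaling: 20 partials with roughly 1/n
# amplitudes. Knob 2 scales the even partials by its value and the odd
# partials (above the fundamental) by 1 minus its value, so knob2 = 0.5
# gives the sawtooth-like mix and knob2 = 0.0 leaves a square-like wave.
import numpy as np

def voice(f0, knob2, seconds=1.0, sr=44100):
    t = np.arange(int(seconds * sr)) / sr
    out = np.sin(2 * np.pi * f0 * t)            # the fundamental is never scaled
    for n in range(2, 21):                      # partials 2 through 20
        balance = knob2 if n % 2 == 0 else 1.0 - knob2
        out += balance * (1.0 / n) * np.sin(2 * np.pi * f0 * n * t)
    return 0.75 * out / np.max(np.abs(out))     # headroom (the patch scales by .75)
```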
In August I had about one month of data loss. Accordingly, I lost much of the work I did on the PureData file that I used to generate the accompaniment for Experiment 5. Fortunately, I had the blog entry for that experiment to help me reconstruct the program. I also added a third meter, 7/8, in addition to the two meters used in Experiment 5 (4/4 and 3/4). Most of the work to add this involved adding a bunch of arrays and continuing the expansion of the algorithm that was already covered in the blog entry for Experiment 5.
That being said, using an asymmetrical meter such as 7/8 creates a challenge for the visual metronome I added in Experiment 5. Previously I was able to use a select statement, sel 0 4 8 12, fed by the counter that tracks the sixteenth notes in a given measure. I could then connect each of the four leftmost outlets of that sel statement to a bang. Thus, each bang would activate in turn when you reach a downbeat (once every 4 sixteenth notes).
However, asymmetrical meters will not allow this to work. As the name suggests, in such meters the beat length is inconsistent: some beats are longer, and some are shorter. The most typical way to count 7/8 is to group the eighth notes with a group of 3 at the beginning, followed by two groups of 2 eighths at the end of the measure. This results in a long first beat (of 3 eighths, or 6 sixteenth notes), followed by two short beats (of 2 eighths, or 4 sixteenth notes each).
Accordingly, I created a new subroutine called pd count, which routes the sixteenth note count within the current measure based upon the current meter. Here we see that the value of currentmeter (or a 0 sent by initialize) is sent to a select statement that is used to turn on one of three spigots and shut off the others. The stream is then sent to one of two select statements that identify when downbeats occur. Since both 4/4 and 3/4 use beats that are 4 sixteenth notes long, both of those meters can be sent to the same select statement. The other sel statement, sel 0 6 10, corresponds to 7/8: the second beat does not occur until count 6, while the final downbeat occurs 4 sixteenth notes later at count 10.
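In Python terms (a hedged sketch of my own; the names are illustrative, not objects from the patch), the routing amounts to looking the running count up against a per-meter table of beat starts:

```python
# Sixteenth-note counts (zero-indexed within the measure, as in the
# patch) at which each beat begins, per meter.
BEAT_STARTS = {
    "4/4": [0, 4, 8, 12],   # four beats of 4 sixteenth notes
    "3/4": [0, 4, 8],       # three beats of 4 sixteenth notes
    "7/8": [0, 6, 10],      # 3+2+2 eighths = beats of 6, 4, and 4 sixteenths
}

def beat_for_count(meter, count):
    """Return the 0-indexed beat number if count lands on a beat, else None."""
    starts = BEAT_STARTS[meter]
    return starts.index(count) if count in starts else None
```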
One novel aspect of this subroutine is that it has multiple outlets, each of which is sent to a different bang, so the user can see the beats happen in real time. Note that this is next to a horizontal radio button, which displays the current meter. Thus, the user can use this to read both the meter and which beat number is active.
I had to essentially recreate the code inside pd count inside of pd videoautomation in order to change the value of knob 5 of the EYESY on each downbeat. Here the outputs from the select statements are sent to messages of 0 through 3, which correspond to beats 1 through 4. These values are then used as indexes to access values stored in the array videobeats.
I did not progress with my work on the EYESY during this experiment, as I had intended to use the EYESY in conjunction with a webcam, but unfortunately I could not get the EYESY to recognize either of my two USB webcams. I did learn that you can send MIDI program changes to the EYESY to select which scene or program to make use of. However, I did not incorporate that knowledge into my work.
One interesting aspect of the EYESY related to program changes is that it runs every program loaded onto the EYESY simultaneously. This allows seamless changes from one algorithm to another in real time; there is no need for a new program to be loaded. This operational aspect of the EYESY requires that the programs be written as efficiently as possible, and Critter and Guitari recommends loading no more than 10MB of program files onto the EYESY at any given time so the operation does not slow down.
As stated earlier, I was disappointed in the lack of expression of the Additive Odd Even patch when controlled by the WARBL. Again, I need to practice my EVI fingering. I am not quite used to reading the current meter and beat information off of the PureData screen, but with some practice, I think I can handle it. While the programming changes for adding 7/8 to the program that generates the accompaniment are not much of a conceptual leap from the work I did for Experiment 5, they are a decent musical step forward.
Next month I hope to make a basic sample instrument for the Organelle. I will likely add another meter to the algorithm that generates accompaniment. While I’m not quite sure what I’ll do with the EYESY, I do hope to move forward with my work on it.
Well, my sabbatical is about half over. I got a respectable amount of work done this week, all things considered. I got nine trombone phrases recorded. This included two A phrases each for A300, DC-10, and 747. I also recorded one B phrase each for 737, DC-8, & 707. Ultimately this isn’t much work for the week, but there has been a family emergency that has been keeping me busy since Tuesday. Thus, as I said, it’s a respectable amount of work, all things considered.
It isn’t clear when this family emergency will be resolved. Furthermore, next week my workload as a sound designer for an upcoming production of A Wrinkle in Time will be ramping up. Next weekend will be the recording session for the string parts, which means the following week will likely be focused on editing and mixing those recordings. All of this is a long-winded way of saying that, realistically speaking, I may not complete the trombone recordings for two to three weeks. Thus, my revised recording schedule for the remainder of the semester will likely be . . .
I’m still satisfied with this schedule, as cutting out many of the synth-oriented tracks is fine since the backing tracks already feature a significant amount of synthesizer material. Even if I don’t complete much work next week, I’ll still be able to report next week, as I’ve been working on a related side project, and have been making enough progress that I may be ready to start releasing information on it next week.
In the interest of having some visual material, please find below the score for the string arrangement of 707. The B section of this movement is nominally in D minor, featuring the notes: D, E, F, F#, G, A, Bb, and C#. The A section in contrast only uses a single note, A.
It has been a productive week for me, resulting in 15 finished phrases. I finished my pedal steel work, recording one phrase each for Rotate‘s A300, 727, DC-10, DC-9, & 747. This allowed me to get a head start on trombone recordings. Ultimately I recorded two phrases each for TriStar, 737, DC-8, 707, & DC-9.
Recording trombone is quite a challenge for me, although it is a different challenge than playing the pedal steel. The latter instrument is very complicated, and not particularly intuitive. The last time I played trombone on a regular basis was over thirty years ago. I still have a solid mental knowledge of how to play the instrument correctly. That being said, my embouchure just isn’t up to the job. It is very challenging for me to play even moderately high notes, and I have equivalent problems playing pedal tones (extremely low notes) on the instrument as well.
It will be interesting to see whether my embouchure shows any sign of improvement after a couple of weeks of recording on the instrument. For the time being, though, I will simply write the trombone passages (mainly brass hits) in a range that fits my meager abilities. Furthermore, a lot of editing, a generous portion of pitch correction, and a helping of plate reverb can do wonders to hide three decades of neglect.
It has been a couple of weeks since I presented one of my string arrangements. My arrangement for A300, presented below, features the second smallest pitch collection of the nine movements of Rotate. The B section of A300 features only five notes (B, C#, D, F#, G#), while the A section features four pitches (B, C#, F#, A#). These limited pitch groups yield some unique harmonies for the arrangement.
It has been quite a productive week. I was able to record 17 phrases on pedal steel guitar. I recorded two each for A300, 727, DC-10, DC-9, & 747. I recorded three phrases for DC-8 and 707. I also recorded a phrase for the center section of 737. On Saturday I booked Alumni Hall to record some piano tracks on the Yamaha C7 grand in that space. All in all I managed to record 10 phrases, two each for TriStar, 737, A300, 727, and DC-10.
Last week I explained some of the basics of how a pedal steel guitar works. This week I’ll go into a little more detail. The second movement of Rotate, 737, is nominally in F major. In the center section of the piece I decided to use 3 seventh chords: an F major seventh, a D minor seventh, and an A dominant seventh. Let’s investigate how you can do this on a pedal steel guitar. This will allow us to review what we learned about tuning and the foot pedals from last week.
Above, we see the open strings of an E9 pedal steel guitar. Playing a major seventh chord in this tuning is simple: you play strings 2-6 simultaneously (with the low B on the left being string 10). To get an F major seventh, we’d play those strings with the steel over where the first fret would be.
There are several ways to get a minor seventh chord. Last week I went over how you could use the first two pedals of the instrument to get a chord built on scale degree four or six. Remember that the first pedal changes all of your B strings to C#, and the middle pedal changes all of your G# strings to A. When we press just the first pedal, the E major chord we get from strings 3-6 becomes a C# minor chord. Likewise, when we press the first two pedals at the same time, our E major chord becomes an A major chord. We are going to use these two pedals to create a D minor seventh chord.
Let’s think about that chord in the context of C major. We could also think of that chord as being an F major chord (a IV chord) with an additional D (scale degree two). We can get a IV chord by using the first two foot pedals. The additional scale degree two we can get from string 7 (F# is scale degree two in E major). Since we are thinking in C major for this chord, we would have to strum strings 3-7 with the first two pedals down and our steel positioned over where the 8th fret would be (C is 8 half steps above E).
How about our A dominant seventh chord? We found that major seventh chords on a pedal steel guitar are easy. Here’s where the knee levers come into play. Again, there is no standard for how many knee levers a pedal steel guitar has, nor is there a standard configuration. My instrument has three knee levers, labelled LKL, LKR, and RKR. Those abbreviations stand for left knee left, left knee right, and right knee right. Thus, I have two knee levers for my left leg, and one for my right.
While there are no standards, there is some logic used in setups. For instance, LKL and LKR on my instrument both affect the E strings. This makes sense because you’d never want to use both levers at the same time, which is important as it is pretty much impossible to move your knee to the left and to the right at the same time. On my instrument LKL raises the E strings to F, while LKR lowers the E strings to D#. The final knee lever, RKR, lowers the D string to C# and the D# string to D. Thus, it is this knee lever that allows me to lower the D# to D, which when combined with strings 3-6 gives a dominant seventh chord. So, in order to get an A dominant seventh chord, I would strum strings 2-6 with RKR engaged and the steel positioned over where the fifth fret would be (A is 5 half steps above E).
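To tie the fret arithmetic and the pedal/lever changes together, here is a hedged Python model of the shapes described above (the helper and names are my own illustration; the tuning and pedal/lever data follow this and last week’s posts):

```python
# E9 pedal steel model: strings 1 (top) to 10 (low), pitches as
# semitones relative to E. Pedals/levers shift individual strings; the
# steel's fret position transposes everything.
NOTE_NAMES = ["E", "F", "F#", "G", "G#", "A", "A#", "B", "C", "C#", "D", "D#"]

# Open strings 1-10: F#, D#, G#, E, B, G#, F#, E, D, B
OPEN = {1: 2, 2: 11, 3: 4, 4: 0, 5: 7, 6: 4, 7: 2, 8: 0, 9: 10, 10: 7}

PEDAL_1 = {5: +2, 10: +2}        # B strings raised a whole step to C#
PEDAL_2 = {3: +1, 6: +1}         # G# strings raised a half step to A
RKR     = {2: -1, 9: -1}         # D# lowered to D, D lowered to C#

def chord(strings, fret, *controls):
    """Names of the sounded notes for a string set, fret, and pedal/lever set."""
    notes = []
    for s in strings:
        shift = sum(c.get(s, 0) for c in controls)
        notes.append(NOTE_NAMES[(OPEN[s] + shift + fret) % 12])
    return notes

print(chord([2, 3, 4, 5, 6], 1))                     # F major seventh at fret 1
print(chord([3, 4, 5, 6, 7], 8, PEDAL_1, PEDAL_2))   # D minor seventh at fret 8
print(chord([2, 3, 4, 5, 6], 5, RKR))                # A dominant seventh at fret 5
```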
Well, I’m a third of the way into my sabbatical, and the past week has been pretty successful. I’ve finished my fretless bass recordings, and have started recording pedal steel guitar. I recorded phrases for the center sections of seven movements: 737, A300, DC-8, 727, DC-10, DC-9, & 747. The fretless phrases I recorded for 737 and DC-8 replaced recordings I made last week where I wasn’t satisfied with what I played. I’m much happier with the new versions.
In terms of the pedal steel recordings, I’ve only begun to scratch the surface, recording four phrases, two each for TriStar and 737. Pedal steel is a fascinating but very complicated instrument. I haven’t played it much in the past few months, so a significant amount of time was spent tuning the 10 strings, calibrating the tuning for a couple of the foot pedals, and reacquainting myself with the instrument.
A standard pedal steel guitar has 10 strings and uses E9 tuning. This tuning system was developed by a few prominent players, including Buddy Emmons. It is called E9 tuning as it generally resembles the notes of an E9 chord, though notice that it has both a D natural and a D#. Notice as well that the top two strings are actually lower in pitch than the third string from the top. One of the things that is fairly convenient about the tuning system is that it features four consecutive strings that form a major triad (strings 3 through 6, with 10 being the lowest string).
In typical pedal steel playing, the player’s left hand places a steel on the strings above the fretboard. Typically the placement of the steel reflects what key you are in. For instance, in G major, you would place the steel above where the third fret would be, as G is three half steps above E. In order to get other notes (and harmonies) besides those given by the strings, the player typically uses the pedals and knee levers rather than moving the steel.
While there is no standard for pedal and knee lever configurations, most instruments have three pedals and one or more knee levers. My instrument is an old GFI SM-10 with three pedals and three knee levers. Since this is sufficiently complicated, I will only explain the two pedals I used in recording this week. The first pedal changes all the B strings to C#s (raising the string a whole step). The second pedal changes all the G# strings to As (raising the string a half step). With these two pedals and using the aforementioned strings that form a major triad (strings 3 through 6), you can get an E major chord, a C# minor chord (using pedal 1), an Esus chord (using pedal 2), or an A major chord (using both pedals 1 & 2). If we were to think of this in terms of E major, this will get us the chords on scale degrees 1, 4, and 6. Pretty clever, all in all. Perhaps next week I will go into detail about some of the other pedals and how they can be used.
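As a quick sanity check, here is a minimal Python sketch (my own illustration, not anything from the instrument’s documentation) of how pedals 1 and 2 transform the built-in major triad on strings 3-6:

```python
# Open pitches of strings 3-6 as semitones above E: G#, E, B, G#.
NOTE_NAMES = ["E", "F", "F#", "G", "G#", "A", "A#", "B", "C", "C#", "D", "D#"]
STRINGS_3_TO_6 = {3: 4, 4: 0, 5: 7, 6: 4}

def notes(pedal1=False, pedal2=False):
    out = []
    for string, pitch in STRINGS_3_TO_6.items():
        if pedal1 and pitch == 7:        # pedal 1: B raised a whole step to C#
            pitch = 9
        elif pedal2 and pitch == 4:      # pedal 2: G# raised a half step to A
            pitch = 5
        out.append(NOTE_NAMES[pitch])
    return out

print(notes())                           # E major:  G#, E, B, G#
print(notes(pedal1=True))                # C# minor: G#, E, C#, G#
print(notes(pedal2=True))                # Esus:     A, E, B, A
print(notes(pedal1=True, pedal2=True))   # A major:  A, E, C#, A
```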
Last week’s work really set me up for success this week. I managed to record a dozen fretless bass phrases this week. I recorded one phrase for TriStar, 737, DC-8, and 707. I recorded two phrases for A300, 727, DC-10, and DC-9. Accordingly I only have five fretless bass phrases to record for next week. That being said, rather than get a head start on pedal steel guitar recordings, I may add more fretless bass phrases, or re-record some of the phrases I’ve already recorded in order to have more exciting bass parts.
Again, in the interest of having some visual material to share, here’s the string arrangement for DC-8. In the first six measures you can see arpeggiations of a progression in G major: B7, Em7, CM7, Em7, D7, to GM7. The final seven measures show a static section where the upper voices slowly arpeggiate a D# diminished chord while the cello stays on an E pedal.
It has been a busy month for me, so I’m afraid this experiment is not as challenging as it could be. I used the Organelle patch Constant Gardener to process sound from my lap steel guitar. This patch uses a phase vocoder, which allows the user to control the speed and pitch of audio independently of each other. While I won’t go into great detail about what a phase vocoder is and how it works, it uses a fast Fourier transform algorithm to analyze an audio signal and reinterpret it as a time-frequency representation.
This process is dependent upon Fourier analysis. The idea behind Fourier analysis is that complex functions can be represented as a sum of simpler functions. In audio, this idea is used to separate a complex audio signal into its constituent sine wave components. This idea is central to the concept of additive synthesis, which is based upon the idea that any sound, no matter how complex, can be represented by a number of sine wave elements that can be summed together. When we convert an audio signal to a time-frequency representation we get a three-dimensional analysis of the sound where one dimension is time, one dimension is frequency, and the third dimension is amplitude.
Not only can we use this data to resynthesize a sound, but in doing so, we can treat time and frequency separately. That is, we can slow a sound down without making the pitch go lower. Likewise, we can raise the pitch of a sound without shortening its duration.
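For the curious, here is a hedged sketch of such a time-frequency analysis using scipy’s short-time Fourier transform (the phase vocoder in Constant Gardener is considerably more involved):

```python
# STFT of a test tone: the result Zxx is a 2-D grid where one axis is
# time, one is frequency, and the magnitude at each point is the
# amplitude of that component -- the three dimensions described above.
import numpy as np
from scipy.signal import stft

sr = 44100
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)      # a stand-in for the lap steel input

freqs, times, Zxx = stft(audio, fs=sr, nperseg=1024)
print(Zxx.shape)                         # (frequency bins, time frames)
print(np.abs(Zxx).max())                 # magnitudes: the amplitude dimension
```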
Back to Constant Gardener. This patch uses knob 1 to control the speed (or time element) of the resynthesis. Knob 2 controls the pitch of the resynthesis. The third knob controls the balance between the dry audio input to the Organelle and the processed (resynthesized) sound. The final knob controls how much reverb is added to the sound. The aux button (or foot pedal) is used to turn the phase vocoder resynthesis on or off.
The phase vocoder part of the algorithm is sufficiently difficult that I won’t attempt to go through it here; rather, I will go through the reverb portion of the patch. As stated previously, knob four controls the balance between the dry (non-reverberated) and the reverberated sound. This value is then sent to the screen as a percentage, and is also sent to the variable reverb-amt as a number from 0 to 1 inclusive.
When the value of reverb-amt is received, it is sent to a subroutine called cg-dw. I’m not sure why the author of the patch used that name (perhaps cg stands for Constant Gardener and dw for dry/wet), but this subroutine basically splits the signal in two, and modifies the value that will be returned out of the left outlet to be the inverse of the value of the right outlet (that is, 1 minus the reverb amount). Both values are passed through a low pass filter with a cutoff frequency of 5 Hz, presumably to smooth out the signal.
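Here is a rough Python sketch of what cg-dw appears to do (my own reconstruction, not the patch’s code; the class name and smoothing coefficient are illustrative):

```python
class DryWet:
    """Split one control value into (dry, wet) gains, smoothed over time."""
    def __init__(self, smoothing=0.001):
        self.wet = 0.0
        self.smoothing = smoothing       # small coefficient, akin to a low cutoff

    def step(self, reverb_amt):
        # One-pole smoothing toward the target amount, standing in for
        # the patch's 5 Hz low pass on the control signal.
        self.wet += self.smoothing * (reverb_amt - self.wet)
        return 1.0 - self.wet, self.wet  # (dry gain, wet gain)
```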
The object lop~ 10000 receives its input from a chain that can be traced back to the dry audio coming from the Organelle’s audio input. This object is a low pass filter, which means it allows the frequencies below the cutoff frequency, in this case 10,000 Hz, to pass through, while attenuating the frequencies above the cutoff frequency. More specifically, lop~ is a one-pole filter, which means that the amount of attenuation is 6 dB per octave. A reduction of 6 dB effectively halves the amplitude of the original. Thus, if the cutoff frequency of a low pass filter is set to 100 Hz, the amplitude at 200 Hz (doubling a frequency raises the pitch an octave) is half of what it would normally be, and at 400 Hz, the amplitude would be a quarter of what it would normally be.
In analog synthesis a two pole (12 dB / octave reduction) or a four pole (24 dB / octave) filter would be considered more desirable. Thus, a one pole filter can be thought of as a fairly gentle filter. This low pass filter is put in the signal chain to reduce the high frequency content to avoid aliasing. Aliasing is the creation of artifacts when a signal is not sampled frequently enough to faithfully represent it. Human beings can hear up to 20,000 Hz, and digital audio demands at least one positive value and one negative value per cycle to represent a sound wave. Thus, CD quality sound uses 44,100 samples per second. The Nyquist frequency, the frequency at which aliasing starts, is half the sample rate; in the case of CD quality audio, that would be 22,050 Hz. Thus, our low pass filter reduces these frequencies by more than half.
The signal is then passed to the object hip~ 50. This object is a one-pole high pass filter. This type of filter attenuates the frequencies below the cutoff frequency (in this case 50 Hz). Human hearing goes down to about 20 Hz, so the energy at that frequency would be attenuated by more than half. This filter is inserted into the chain to reduce thumps and low frequency noise.
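For reference, here is a minimal one-pole filter sketch in Python (an illustration of the filter type, not Pd’s actual implementation) showing the roles that lop~ 10000 and hip~ 50 play:

```python
import math

def one_pole_coeff(cutoff_hz, sr=44100):
    return 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sr)

def lop(samples, cutoff_hz, sr=44100):
    # Low pass: the output lazily follows the input, rolling off
    # high-frequency content at roughly 6 dB per octave.
    a, y, out = one_pole_coeff(cutoff_hz, sr), 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

def hip(samples, cutoff_hz, sr=44100):
    # High pass: whatever the low pass removes is what the high pass keeps.
    return [x - low for x, low in zip(samples, lop(samples, cutoff_hz, sr))]
```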
Finally we get to the reverb subroutine itself. The object that does most of the heavy lifting in this subroutine is rev~ 100 89 3000 20. This is a stereo input, four output reverb unit. Accordingly, the first two inlets are the left and right inputs. The other four inlets are covered by creation arguments (100 89 3000 20). These four values correspond to: output level, liveness, crossover frequency, and high frequency dampening. The output level is expressed in decibels. When expressed in this manner, we can think of a change of 10 dB as doubling or halving the perceived volume of a sound. We often consider the threshold of pain (audio so loud that it is physically painful to us) as starting around 120 dB. Thus, 100 dB, while considered to be loud, is 1/4 as loud as the threshold of pain. The liveness setting is really a feedback level (how much of the reverberated sound is fed back through the algorithm). A setting of 100 would yield reverb that goes on forever, while a setting of 80 would give us short reverb. Accordingly, 89 gives us a moderate amount of reverb.
The last two values, crossover frequency and high frequency dampening, work somewhat like a low pass filter. In the acoustic world low frequencies reverberate very effectively, while high frequencies tend to be absorbed by the environment. That is why a highly reverberant space like a cave or a cathedral has a dark sound to its reverb. In order to model this phenomenon, most reverb algorithms have an ability to attenuate high frequencies built into them. In this case 3,000 Hz is the frequency at which dampening begins. Here dampening is expressed as a percentage. Thus, a dampening of 0 would mean no dampening occurs, while 100 would mean that all of the frequencies above the crossover frequency are dampened. Accordingly, 20 seems like a moderate value. The outlets from pd reverb are then multiplied by the right outlet of cg-dw, applying the amount of reverb desired, and sent to the left and right outputs using throw~ outL and throw~ outR respectively.
For the EYESY I used the patch Mirror Grid Inverse – Trails. The EYESY’s five knobs are used to control: line width, line spacing, trails fade, foreground color, and background color. EYESY programming is accomplished in Python, and utilizes subroutines from pygame, a library designed for creating video games.
An EYESY program typically has four parts. The first part is where Python libraries are imported (pygame must be imported in any EYESY program). This particular program imports: os, pygame, math, and time. The second part is a setup function; this program uses the code def setup(screen, etc): pass to accomplish this. The third part is a draw function, which will be executed once for every frame of video. Accordingly, while this is where most of the magic happens, it should be written to be as lean as possible in order to run smoothly. Finally, the output should be routed to the screen.
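Putting those parts together, here is a minimal EYESY-style program skeleton (the drawing logic is my own illustration, not the Mirror Grid Inverse – Trails code; as I understand the EYESY API, etc.knob1 through etc.knob5 hold the knob values from 0.0 to 1.0, and etc.xres / etc.yres give the screen size):

```python
# Part one: imports, mirroring the ones listed above.
import os
import pygame
import math
import time

def setup(screen, etc):
    # Part two: one-time setup; nothing needed for this sketch.
    pass

def draw(screen, etc):
    # Part three: runs once per video frame, so keep it lean.
    width = max(1, int(etc.knob1 * 10))            # knob 1: line width
    spacing = int(20 + etc.knob2 * 100)            # knob 2: line spacing
    color = (int(etc.knob4 * 255), 100, 200)       # knob 4: foreground color
    for x in range(0, etc.xres, spacing):
        # Part four: drawing to screen routes the output to the display.
        pygame.draw.line(screen, color, (x, 0), (x, etc.yres), width)
```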
In terms of the performance, I occasionally used knob 2 to change the pitch. I left the reverb at 100% and the mix around 50% for the duration of the improvisation. I could have used the keyboard of the Organelle to play specific pitched versions of the audio input. Next month I hope to tackle additive synthesis, and perhaps use a webcam with the EYESY. Since I’ve given a basic explanation of the first two parts of an EYESY program here, in future months I hope to go through EYESY programs in greater detail.