Experiment 2: Analog Style


May has been a busy month for me, so my second experiment in my project funded by the Digital Innovation Lab at Stonehill College investigates the use of the preset patch Analog Style (designed by Critter & Guitari). To be specific, I am using a WARBL wind controller with EVI fingering to control the patch, and I am using the breath control on the WARBL to control the cutoff frequency of Analog Style (via MIDI controller 24).

Due to the busyness of the end of the semester, this experiment features no original programming on the Organelle. However, I did create a program in Pure Data to create accompaniment and drive the EYESY. To accompany the experiment, I used the H.E.A.P., the Housatonic Electronic Algorithmic Philharmonic. This is a fun, frivolous name I’ve given to a small, battery-powered synthesizer / sampler setup I’ve assembled for live performance. It consists of three synthesizers / samplers: a Volca Sample 2 (which provides 10 channels of late-1980s-style lo-fi digital sampling), a Volca Keys (which can be used as an analog monophonic or three-note polyphonic synthesizer), and a Volca FM 2 (a clone of the 1980s classic Yamaha DX7, the best-selling synthesizer of all time). I’ve also begun to think of the EYESY, as we’ll see later, as part of the H.E.A.P.

I won’t go into great detail about the program that generates the accompaniment for Experiment 2, as I have other blog posts that cover the various algorithms it includes. Ultimately, the program is intended to generate relatively generic, but fairly usable, R&B-esque slow jams. The music is in common time using sixteenth notes. The portion of the program including and beneath % 16 (mod 16) ensures that the resulting music will have 16 pulses per measure. Likewise, the instrumentation and musical patterns change every four measures. This is enabled by the part of the program including and beneath % 64 (four measures of sixteenth notes add up to 64 pulses), as sketched below.
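
To make the mod logic concrete, here is a minimal Python sketch (not the actual Pure Data code) of how a running sixteenth note counter can be reduced to a position within the measure and within the four-measure phrase; the function name and print statements are just for illustration.

```python
# Illustrative Python sketch of the % 16 / % 64 counters described above.
def clock(tick):
    """tick is a running sixteenth note counter: 0, 1, 2, ..."""
    pulse_in_measure = tick % 16   # 16 sixteenth notes per measure
    pulse_in_phrase = tick % 64    # 4 measures x 16 sixteenths = 64

    if pulse_in_phrase == 0:
        print("new four-measure phrase: pick new instrumentation / patterns")
    if pulse_in_measure == 0:
        print("downbeat of a new measure")
    return pulse_in_measure, pulse_in_phrase

for t in range(128):               # two phrases' worth of pulses
    clock(t)
```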

The Volca Sample is being used to provide the drum beat and some string pizzicato (see pd makepizz). The Volca Keys provides synthesized bass patterns that run in two-measure loops. The Volca FM provides four-chord progressions of four-note chords that repeat every two measures. To create these chord progressions I used some music programming techniques that I’ve covered in a previous blog entry, though in this experiment I use the brass-friendly key of G minor.

One of the newer programming tricks I used in this program is an algorithm designed to drive the EYESY. While the EYESY generates hypnotic, interactive video animations, left to its own devices it can get repetitive fairly quickly. To generate anything that seems even remotely dynamic, someone needs to perform the EYESY by rotating its five knobs, which is an impossibility for any performing musician, save perhaps a vocalist. Thankfully, we can do the equivalent of turning the knobs on the EYESY through MIDI using controllers 21 through 25.
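
For readers who want to try this from a laptop rather than from Pure Data, here is a quick sketch using the mido library; the output port is left as the system default, and you would substitute whatever MIDI interface your EYESY is attached to.

```python
import mido

# Hypothetical example: nudge EYESY knob 1 (CC 21) on MIDI channel 16.
# mido numbers channels 0-15, so channel 16 is channel=15 here.
port = mido.open_output()  # or mido.open_output("name of your interface")
port.send(mido.Message("control_change", channel=15, control=21, value=64))
```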

The algorithm I’ve created to drive the EYESY is designed to make slow, evolving changes. To control these changes I’ve created a table called videostatus. It consists of five positions, each containing a one or a zero to denote whether changes should be made to a given knob during the current four-measure phrase.

The subroutine pd videochoice is triggered at the beginning of every four-measure phrase. It generates five random numbers, each either a zero or a one, and stores the results in the videostatus table.
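
In Python terms, the idea behind pd videochoice is roughly this (a sketch, not the actual patch):

```python
import random

# At the start of each four-measure phrase, decide independently for each
# of the five EYESY knobs whether it will move this phrase (1) or hold (0).
def videochoice():
    return [random.randint(0, 1) for _ in range(5)]

videostatus = videochoice()   # e.g. [1, 0, 1, 1, 0]
```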

The subroutine pd videoautomation updates the knob positions on the EYESY once every sixteenth note. It is passed the current sixteenth note number modded by 224, which corresponds to 14 measures of sixteenth notes. The subroutine contains five nearly identical columns, each of which corresponds to one of the five knobs on the EYESY. First the algorithm checks the current state of each of the five positions of the videostatus table. When that value is one, it allows the current sixteenth note number to pass through the spigot. This value is passed through an expr statement that displaces the sixteenth note number. The column corresponding to knob one is not displaced, but each subsequent column is displaced by an additional measure (16 sixteenths), and the result is then modded to stay between 0 and 224. The statement moses 112 is used to determine whether we should be counting up to 112 or counting back down to 0. The latter is accomplished by having numbers greater than 112 pass through expr (224-$f1), which causes the result to get smaller as the input value increases. The result is then passed to one of the five controller values (21-25) on MIDI channel 16 (the channel I’ve set my EYESY to).
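
Here is a rough Python rendering of that logic, assuming the videostatus sketch above; it is only meant to show the counting, per-knob displacement, and fold-back at 112, not the actual Pd objects.

```python
# Illustrative sketch of pd videoautomation: tick is the running sixteenth
# note count, videostatus the five 0/1 flags chosen at each phrase.
def videoautomation(tick, videostatus):
    position = tick % 224                        # 14 measures of sixteenths
    cc_values = {}
    for knob in range(5):                        # knobs map to CC 21..25
        if videostatus[knob] == 0:               # the "spigot" is closed
            continue
        displaced = (position + knob * 16) % 224 # each knob offset by a measure
        if displaced <= 112:                     # like moses 112: ramp up...
            value = displaced
        else:                                    # ...then ramp back down,
            value = 224 - displaced              # i.e. expr (224 - $f1)
        cc_values[21 + knob] = value             # sent on MIDI channel 16
    return cc_values
```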

Since I went over the mother patch in the previous experiment, I’ll start by going over main.pd for the Analog Style patch. We can see that knob one controls the tuning of the patch, while knob two creates an offset frequency for a second oscillator. The third knob sets the resonance of the low pass filter, while the fourth knob (the one I am controlling with the WARBL) sets the cutoff frequency of the filter. To learn more about what low pass filters are, check out my blog entry on Low Pass Filters in Logic Pro. To summarize briefly in relation to the WARBL: when the amount of breath coming through is low, the cutoff frequency is set low as well, resulting in less sound (and only low-frequency sound) coming out of the Organelle.

We can also see that this patch allows for sequencing when the aux button is down; however, we will not go through how sequencing works today. We will, however, go into simple, the subroutine that creates the sound. We can see two oscillators, blsaw, in this subroutine that generate sawtooth waves. For more information on subtractive synthesis waveforms (including sawtooth waves), check out my blog entry on Subtractive Synthesis Waveforms in Logic Pro. One of those two blsaw oscillators is modified by the offset from knob two. The mixture of these two oscillators is passed to a low pass filter, moog~. This object also receives a cutoff frequency at its middle inlet and a resonance value at its rightmost inlet. The outlet of this object is then attenuated slightly, *~ .75, before being sent to the subroutine’s outlet.
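
As a rough offline analogy (not the patch itself), here is what that signal chain looks like in numpy/scipy: two slightly offset sawtooth oscillators, mixed, low-pass filtered, and attenuated. The frequency, detune, and cutoff values are made up, and a plain Butterworth filter stands in for moog~ (so the resonance control is not modeled).

```python
import numpy as np
from scipy.signal import butter, lfilter, sawtooth

SR = 44100
t = np.linspace(0, 1.0, SR, endpoint=False)       # one second of samples

freq = 220.0    # base pitch (tuning / incoming note) -- hypothetical value
offset = 1.5    # hypothetical detune in Hz (knob two)
osc1 = sawtooth(2 * np.pi * freq * t)             # naive sawtooth, vs. blsaw
osc2 = sawtooth(2 * np.pi * (freq + offset) * t)
mix = 0.5 * (osc1 + osc2)

cutoff = 800.0  # hypothetical cutoff in Hz (knob four / WARBL breath)
b, a = butter(2, cutoff / (SR / 2), btype="low")  # stand-in for moog~
out = 0.75 * lfilter(b, a, mix)                   # slight attenuation (*~ .75)
```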

Again, I’ve found the accompaniment generated by experiment.pd to be generic but fairly usable. It should be relatively easy to change the tempo, phrase length, or any number of musical patterns to create music that is stylistically different. I also enjoy the slowly evolving nature of the EYESY-generated video. I feel that turning changes to various combinations of the five knobs on and off adds a degree of subtlety that aids the dynamic nature of the video.

I am disappointed in my performance on the WARBL. I am still getting used to the EVI fingering on the instrument, so there are some very sour notes from time to time. However, I am very pleased with the range of the WARBL, as well as the subtle breath control the instrument provides. The fingering makes jumping octaves and fifths very easy. In future experiments I hope to get into hacking existing Organelle patches. I also plan to come up with variants of the videoautomation algorithm to create more sudden, less subtle changes to the EYESY’s settings.

Experiment 1: Granular Freezer

The first experiment in my project funded by the Digital Innovation Lab at Stonehill College investigates the use of the preset patch Granular Freezer (designed by Critter & Guitari) to process audio. In this case the audio is coming from a lap steel guitar. Since I’m just getting used to both the Organelle and the EYESY, I will simply be improvising, and using a preset with the EYESY.

While describing his concept of stochastic music, composer Iannis Xenakis essentially also described the idea of granular synthesis in his book Formalized Music: Thought and Mathematics in Composition . . .

“Other paths also led to the same stochastic crossroads . . . natural events such as the collision of hail or rain with hard surfaces, or the song of cicadas in a summer field. These sonic events are made out of thousands of isolated sounds; this multitude of sounds, seen as a totality, is a new sonic event. . . . Everyone has observed the sonic phenomena of a political crowd of dozens or hundreds of thousands of people. The human river shouts a slogan in a uniform rhythm. Then another slogan springs from the head of the demonstration; it spreads towards the tail, replacing the first. A wave of transition thus passes from the head to the tail. The clamor fills the city, and the inhibiting force of voice and rhythm reaches a climax. It is an event of great power and beauty in its ferocity. Then the impact between the demonstrators and the enemy occurs. The perfect rhythm of the last slogan breaks up in a huge cluster of chaotic shouts, which also spreads to the tail. Imagine, in addition, the reports of dozens of machine guns and the whistle of bullets adding their punctuations to this total disorder. The crowd is then rapidly dispersed, and after sonic and visual hell follows a detonating calm, full of despair, dust, and death.”

This passage by Xenakis goes into dark territory very quickly, but it is a scenario that harks back to the composer’s own past, when he lost an eye during a clash with occupying British tanks during the Greek Civil War.

Granular synthesis was theorized before it was practical: its complexity requires computing power that did not yet exist when the idea was first proposed. Granular synthesis takes an audio waveform, sample, or audio stream, chops it into tiny grains, and then resynthesizes the sound by combining those grains into clouds. The synthesist can often control aspects such as grain length, grain playback speed, and cloud density, among others. Since the playback speed of individual grains can be controlled independently of the rate at which playback moves through the source sound, granular synthesis allows one to stretch or compress the timing of a sound without affecting its pitch, and to change the pitch of a sound without affecting its duration.
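
To make the time-stretching idea concrete, here is a toy numpy sketch of granular time-stretching: grains are read from the source at one rate and overlap-added into the output at another, so the duration changes while the pitch content of each grain stays the same. It assumes the source is longer than one grain, and it is not how Granular Freezer is actually implemented.

```python
import numpy as np

def granular_stretch(src, stretch=2.0, grain_len=2048, hop=512):
    """Toy granular time-stretch of a mono numpy array `src`."""
    win = np.hanning(grain_len)
    out = np.zeros(int(len(src) * stretch) + grain_len)
    out_pos = 0
    while out_pos + grain_len < len(out):
        src_pos = int(out_pos / stretch)                 # where to read a grain
        src_pos = min(src_pos, len(src) - grain_len)     # stay inside the source
        grain = src[src_pos:src_pos + grain_len] * win   # windowed grain
        out[out_pos:out_pos + grain_len] += grain        # overlap-add
        out_pos += hop
    return out
```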

Granular Freezer is sort of a budget version of Mutable Instruments’ Clouds. I call it a budget version because Clouds allows for the control of more than a half dozen parameters of the sound, while Granular Freezer gives the user control of four: grain size, spread, cloud density, and wet / dry mix. These four parameters correspond to the four prominent knobs on the Organelle. Spread can be thought of as a window that functions somewhat like a delay time; in other words, it is a window of past audio that the algorithm draws from to create grains. As you turn the spread down, the sound starts to resemble an eccentric slap (short) delay. Wet / dry mix is a very standard parameter in audio processing: it is the ratio of processed sound to original sound that is sent to the output. The Aux button can be used to freeze the sound, that is, to hold the current spread in memory until the sound is unfrozen. You may also use a pedal to trigger this function. One feature of the algorithm that I didn’t use was transposing the audio with the MIDI keyboard.
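
One way to picture spread (this is my own toy model, not Critter & Guitari’s code) is as incoming audio written into a circular buffer, with each new grain starting at a random point within the last few moments of that history: a tiny window behaves like a short slap delay, while a large window smears grains across more of the recent past.

```python
import random
import numpy as np

class SpreadBuffer:
    """Toy model of 'spread' as a window of recent audio to pull grains from."""
    def __init__(self, size=44100):
        self.buf = np.zeros(size)   # one second of circular history
        self.write = 0

    def push(self, block):
        for sample in block:        # record incoming audio
            self.buf[self.write] = sample
            self.write = (self.write + 1) % len(self.buf)

    def grain_start(self, spread_samples, grain_len):
        # choose a grain start somewhere in the last `spread_samples` samples
        back = random.randint(grain_len, max(grain_len, int(spread_samples)))
        return (self.write - back) % len(self.buf)
```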

Since this is my first experiment with the Organelle, I’m not going to go through the algorithm in great detail, but I will give an overview. All patches for the Organelle rely on a patch called the mother patch, which should never be altered. It handles input to and output from the Organelle, and values are passed to and from it in order to avoid input and output errors. You can download the mother patch so you can develop instruments on a computer without having access to an Organelle. This is also a good way to check for coding errors before you upload a patch to your Organelle.

If we double click on the subroutine pd MIDI, we can see, amongst other things, how the Organelle translates incoming MIDI controller data (controller numbers 21, 22, 23, & 24) to knob data . . .

When we double click on pd audioIO, we can see how the mother patch handles incoming audio (left) as well as outgoing audio (right) . . .

Any given Organelle patch typically consists of a main program (by convention, named main.pd), as well as a number of self-contained subpatches called abstractions. In the case of Granular Freezer, in addition to the main program there are three abstractions: latch-notes.pd, grain.pd, and grainvoice.pd. Given the complexity of the program, I will only go over main.pd.

In the upper right-hand quadrant we see the portion of the algorithm that handles input and output. It also controls the ability to freeze the audio (pd freezer). The objects catch~ grainoutL and catch~ grainoutR receive the granularly processed audio from grainvoice.pd. The object r wetdry receives data from knob 4, allowing the user to control the ratio of unprocessed sound to processed sound.
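
The wet / dry control itself is just a crossfade, something like the following (assuming a simple linear mix, which may not be exactly what the patch does):

```python
def wetdry_mix(dry, wet, mix):
    """mix = 0.0 returns only the dry signal, 1.0 only the processed signal."""
    return (1.0 - mix) * dry + mix * wet
```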

The lower half of main.pd sets the numerical range for each of the four knobs, sends the data wherever it is needed (as was done with wetdry), and sets & sends the text to be displayed on the Organelle’s screen. This last item causes the function of each knob to be displayed along with its current value. In the message box at the bottom of the screen, we see a similar item related to the aux key, screenLine5. The part of the program that updates the display for screenLine5 is in pd freezer.

In the improvisation I run a six-string lap steel in C6 tuning through a volume pedal, and then into the Organelle. I also use a foot switch to trigger the freeze function on the patch. Musically, I spend the bulk of the time improvising over an E pedal provided by an eBow on the open low E string. A lot of the improvisation is devoted to arpeggiating Am, C#m, and Em chords on the upper three strings of the instrument. At the beginning of the improvisation the mix is at 80% (only 20% of the original sound), but I quickly move to a 100% mix. Near the end I move to sliding pitches with the eBow. During this portion of the improvisation I experiment with the spread, and there’s a passage where I move the spread close to its minimum, which yields that eccentric slap delay sound (starting around 8:28). As I move the spread towards its upper end, I can use it as a harmonizer, since the spread holds the previous pitch in its memory as I move on to a new harmonizing pitch. At the end (around 9:15) I move the grain size down to the bottom (1 millisecond), and then turn the cloud density down to the bottom to function as a fade out.

With the EYESY I simply improvised the video along with the audio of the recording I made. With the mode I was using, the first two knobs seemed to control the horizontal and vertical configuration of the polygons. These controls allow you to position them in rows or in a diamond lattice configuration; you can even use them to superimpose the polygons to an extent. The middle knob seems to increase the size of the polygons, making them overlap a considerable amount. The two knobs on the right control the colors of the polygons and the background respectively. I used these to start from a green-on-black configuration in honor of the Apple IIe computer.

The video below will allow you to watch the EYESY improvisation and the musical improvisation simultaneously . . .

As I mentioned earlier, I consider Granular Freezer to be a budget version of Clouds. I have the CalSynth clone of Clouds, Monsoon, in a synthesizer rack I’m assembling. It is a wonderful, expressive instrument that creates gorgeous sounds. While Granular Freezer is a more limited instrument, it is still capable of creating some wonderful sounds of its own. There are already a couple of variants of Granular Freezer posted on PatchStorage. As I get more familiar with programming for the Organelle, I should be able to create a version of the patch that allows for far greater control of its parameters.

All in all, I feel this was a successful experiment. I should be able to reuse the audio for other musical projects. However, there was some room for improvement. There was a little bit of 60-cycle hum coming through the system, likely due to the cheap pickup in my lap steel. Likewise, it would be nice if I cleaned up my studio to make for a less cluttered YouTube video. As I mentioned earlier, it would be good to add more parameters to the Granular Freezer patch. I used Zoom to capture the video created by the EYESY, and it added my name to the lower left-hand corner of the screen. Next time I may try using QuickTime for video capture to avoid that issue. By the end of the summer I hope to develop some Pure Data algorithms to control the EYESY and perform the video in an automated fashion. See you all back here next month for the next installment.

Digital Innovation Grant

I am proud to announce that I have received a Digital Innovation Grant from the Digital Innovation Lab at the MacPháidín Library. The project will be centered on learning sound synthesis in Pure Data using Critter & Guitari’s Organelle. An additional component of the project will be video synthesis using the EYESY (also made by Critter & Guitari). The EYESY generates video in real time that reacts to MIDI data and / or audio. Programs for the EYESY can be written in Python or openFrameworks/Lua.

The format of this project will be a series of monthly experiments that will include both the Organelle and the EYESY. Some of these will use the Organelle as an audio processor (similar to a guitar effects pedal), while others will use it as a synthesizer / sampler using a variety of approaches to sound synthesis / sampling. Many of these experiments will make use of computer-assisted composition algorithms created in Pure Data, building off of research I’ve already been doing over the past year. Each experiment will also be a brief musical composition inspired by experimental music as defined by Lejaren Hiller in his seminal book Experimental Music: Composition with an Electronic Computer.

The outcome of these experiments will be monthly informal reports made through this blog, as well as a series of YouTube videos of the works themselves, which will be embedded into the blog entries. A year from now, once I’ve created 12 such musical experiments, I will give a lecture recital that presents them. One final outcome will be to consider adding Pure Data-oriented assignments to VPM 248: Sound Synthesis. I look forward to working on this project, and will commence working on it once the materials come in.