Experiment 5: 2opFM

As was the case with Experiment 4, this month’s experiment represents a big step forward, even if the music may not seem to indicate an advance. This is the first significant Organelle patch I’ve created, and I’ve also figured out how to connect the EYESY to WiFi, allowing me to transfer files to and from the unit. The patch I created, 2opFM, is based upon Node FM by m.wisniowski.

As the name of the patch suggests, it is a two-operator FM synthesizer. For those who are unfamiliar with FM synthesis, one of the two operators is the carrier, an audible oscillator; the other is an oscillator that modulates the carrier. In its simplest form, we can think of FM synthesis as being like vibrato: one oscillator makes sound, while the other changes the pitch of the carrier. Two basic controls affect the nature of the vibrato: the speed (frequency) of the modulating oscillator, and the depth of modulation, that is, how far the pitch moves away from the fundamental frequency.

What makes FM synthesis different from basic vibrato is that the modulating oscillator typically operates at a frequency within the audio range, often higher in pitch than the carrier. At such high frequencies, the modulator is no longer heard as a change in the carrier’s pitch; rather, it shapes the carrier’s waveform. FM synthesis is most often accomplished using sine waves, so as modulation is applied we start to add harmonics, yielding more complex timbres.
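
To make the idea concrete, here is a minimal sketch of two-operator FM in Python with NumPy (which the patch itself does not use); the frequencies and index value are arbitrary illustrations:

```python
import numpy as np

SR = 44100                 # sample rate, matching Pure Data's default
t = np.arange(SR) / SR     # one second of time values

f_carrier = 220.0          # carrier frequency in Hz (arbitrary example)
f_mod = 440.0              # modulator at a 2:1 ratio with the carrier
index = 3.0                # modulation index; higher means more harmonics

# Strictly speaking this is phase modulation, the form most "FM" synths use:
# with index = 0 the result is a pure sine wave; as the index grows,
# sidebands (new harmonics) appear around the carrier.
modulator = index * np.sin(2 * np.pi * f_mod * t)
carrier = np.sin(2 * np.pi * f_carrier * t + modulator)
```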

Going into detail about how FM synthesis shapes waveforms is well beyond the scope of this entry. However, it is worth noting that my application of FM synthesis here is very rudimentary. The four parameters controlled by the Organelle’s knobs are: transposition level, FM Ratio, release, and FM Mod index. Transposition is in semitones, allowing the user to transpose the instrument up to two octaves in either direction. The FM Ratio only allows integer values between 1 and 32; because only integers are used, the resulting harmonic spectra will all conform to the harmonic series. Release refers to how long it takes a note to go from its operating volume to zero after the note ends. The FM Mod index determines how much modulation is applied to the carrier, with a higher index resulting in more harmonics added to a given waveform. I put this parameter on the leftmost knob, so that when I control the Organelle via my wind controller, a WARBL, increases in air pressure result in richer waveforms.

As can be seen in the screenshot of main.pd above, 2opFM is an eight-voice synthesizer, meaning that eight different notes can be played simultaneously when using a keyboard instrument. Each individual note is passed to the subroutine voice. Determining the frequency of the modulator involves three different parameters: the MIDI note number of the incoming pitch, the transposition level, and the FM Ratio. We see the MIDI note number coming out of the left outlet of unpack 0 0 on the right-hand side, just below r notesIN-$0. For the carrier, we simply add this number to the transposition level, and then convert it to a frequency using the object mtof, which stands for MIDI to frequency. This value is then, eventually, passed to the carrier oscillator. For the modulator, we add the MIDI note number to the transposition level, convert it to a frequency, again using mtof, and multiply this value by the FM Ratio.
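
For those who want the arithmetic spelled out, here is a sketch of the two frequency calculations in Python; mtof implements the standard MIDI-to-frequency formula, and the example note, transposition, and ratio are arbitrary:

```python
def mtof(midi_note: float) -> float:
    """Standard MIDI-to-frequency conversion, as in Pd's mtof object."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

midi_note = 60   # incoming MIDI note (middle C)
transpose = 7    # transposition level in semitones (example value)
fm_ratio = 3     # integer FM Ratio (example value)

carrier_freq = mtof(midi_note + transpose)
modulator_freq = mtof(midi_note + transpose) * fm_ratio
```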

In order to understand the flow of this algorithm, we first have to understand the difference between control signals and audio signals. Audio signals are exactly what they sound like. In order to have high-quality sound, audio signals are calculated at the sample rate, which is 44,100 samples per second by default. In Pure Data, calculations that happen at this rate are indicated by a tilde following the object name. To save on processor time, calculations that don’t have to happen so quickly are handled as control signals, denoted by the lack of a tilde after the object name. These calculations typically happen once every 64 samples, so approximately 689 times per second.

Knowing the difference between audio signals and control signals, we can now introduce two other objects. The object sig~ converts a control signal into an audio signal. The object i converts a float (decimal value) to an integer by truncating the value toward zero; that is, positive values are rounded down, while negative values are rounded up.
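
As it happens, Python’s built-in int() truncates toward zero in exactly the same way, which makes for a quick illustration:

```python
print(int(2.9))    # 2: positive floats are rounded down
print(int(-2.9))   # -2: negative floats are rounded up, i.e., toward zero
```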

Keep in mind the loadbang portion of voice.pd is used to initialize the values. Here we see the transposition level set to zero, the FM Ratio set to one, and the FM Mod index set to 1000. The values from the knobs come in as floats between 0 and 1 inclusive, so to get usable values we typically have to rescale them. Under knob 1, the transposition level is multiplied by 48, and then 24 is subtracted from that value, yielding a value between -24 (two octaves down) and +24 (two octaves up). Under knob 2, the FM Ratio is multiplied by 31, and then one is added, resulting in a value between 1 and 32 inclusive. Both of these values are converted to integers using i.
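
The same rescaling is easy to express in Python; this is a sketch of the arithmetic only, and the function names are mine, not the patch’s:

```python
def scale_knob1(knob: float) -> int:
    """Map a 0..1 knob value to a transposition of -24..+24 semitones."""
    return int(knob * 48 - 24)

def scale_knob2(knob: float) -> int:
    """Map a 0..1 knob value to an integer FM Ratio of 1..32."""
    return int(knob * 31 + 1)

print(scale_knob1(0.5))  # 0:  no transposition at the knob's midpoint
print(scale_knob2(0.0))  # 1:  ratio of 1 with the knob fully down
print(scale_knob2(1.0))  # 32: ratio of 32 with the knob fully up
```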

The scaled value of knob 1 (transposition level) is then added to the MIDI note number, turned into a frequency using mtof, and converted to an audio signal using sig~. This is now the fundamental frequency of the carrier oscillator. To get the frequency of the modulator, we multiply this value by the scaled value of knob 2. This frequency is then fed to the leftmost inlet of the modulating oscillator (osc~). The output of the modulating oscillator is then multiplied by the scaled output of knob 4 (0 to 12,000 inclusive), which is in turn multiplied by the fundamental frequency of the carrier oscillator before being fed to the leftmost inlet of the carrier oscillator (osc~). While there is more going on in this subroutine, this explains how the core elements of FM synthesis are accomplished in this algorithm.
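
Putting the pieces together, here is a simplified per-sample sketch of that signal flow in Python; the real work happens inside Pd’s osc~ objects, and the index here is scaled far down from the patch’s 0 to 12,000 range to keep the example tame:

```python
import math

SR = 44100
fundamental = 220.0   # carrier fundamental, from mtof as above
ratio = 2             # scaled knob 2 (FM Ratio)
index = 2.0           # small stand-in for the scaled knob 4 value

mod_phase = 0.0
car_phase = 0.0
samples = []
for _ in range(SR):   # one second of audio
    mod_out = math.sin(2 * math.pi * mod_phase)
    # Deviation proportional to the fundamental, as described above:
    # modulator output * index * fundamental drives the carrier's frequency.
    car_freq = fundamental + mod_out * index * fundamental
    samples.append(math.sin(2 * math.pi * car_phase))
    mod_phase = (mod_phase + fundamental * ratio / SR) % 1.0
    car_phase = (car_phase + car_freq / SR) % 1.0
```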

The algorithm used to generate the accompaniment is largely the same as the one used for Experiment 4, with a couple of changes. First, after completing Experiment 4 I realized I had to find a way to set the sixteenth-note counter to zero every time the meter changes; otherwise, changing from one meter to another could occasionally produce one or two extra beats. However, resetting the counter to zero ruins the subroutine that sends automation values. Thus, I had to use two counters: one that keeps counting without being reset (on the right) and one that gets reset every time the meter changes (on the left).
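
In Python terms, the fix looks something like this (a sketch of the idea, not the actual Pd objects):

```python
class ClockCounters:
    """Two sixteenth-note counters: one free-running, one meter-relative."""

    def __init__(self):
        self.absolute = 0   # never reset; drives the automation subroutine
        self.position = 0   # reset on every meter change; drives the rhythm

    def tick(self):
        self.absolute += 1
        self.position += 1

    def meter_change(self):
        self.position = 0   # restart the phrase without disturbing automation
```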

Initially I made a mistake by choosing the new meter at the beginning of a phrase, which caused a problem called a stack overflow. At the beginning of a phrase you’d choose a new meter, which would cause the phrase to reset, which would cause a new meter to be chosen, and so on in an endless loop. Thus, I had to choose the new meter at the end of a phrase.

Inside pd choosemeter, we see the phrase counter being sent to two spigots. These spigots are turned on or off depending on the value of currentmeter. The value 0 under r channel sets the meter for the first phrase to 4/4. The nomenclature of r channel is a bit confusing, as s channel is included in loadbang and was simply there to initialize the MIDI channel numbers for the algorithm. In a future iteration of this program, I may retitle these as s initializevalues and r initializevalues to make more sense.

Underneath the two spigots, we see sel 64 and sel 48, which find the end of the phrase when we are in 4/4 (64) or 3/4 (48). Originally I used the values 63 and 47, as those are the numbers of the last sixteenth note of each phrase. However, when I did that, I found that the algorithm skipped the last sixteenth note of the phrase. By using 64 and 48, I am actually choosing the meter at the start of a phrase, but now resetting the counter to zero no longer triggers a recursive loop. Regardless, whenever either sel statement is triggered, the next meter is chosen randomly, with zero corresponding to 4/4 and one corresponding to 3/4. This value is then sent to other parts of the algorithm using send currentmeter, and the phrase is reset by sending a bang to send phrasereset.
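
A rough Python equivalent of pd choosemeter’s logic, with phrase lengths and meter codes as described above (the function names are mine, not the patch’s):

```python
import random

PHRASE_LENGTHS = {0: 64, 1: 48}   # 0 = 4/4 (4 x 16), 1 = 3/4 (4 x 12)
current_meter = 0                 # first phrase defaults to 4/4

def on_sixteenth(count: int):
    """Called on every tick of the meter-relative sixteenth-note counter."""
    global current_meter
    if count == PHRASE_LENGTHS[current_meter]:    # sel 64 / sel 48
        current_meter = random.randint(0, 1)      # choose the next meter
        phrase_reset()                            # bang to send phrasereset

def phrase_reset():
    ...  # reset the meter-relative counter to zero
```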

As previously noted, I figured out how to connect the EYESY to WiFi, allowing me to transfer files to and from the unit. Among other things, this allows me to download new EYESY programs from patchstorage.com and add them to my EYESY. I downloaded a patch called Image Shadow Grid, developed by dudethedev. This algorithm randomly grabs images stored inside a directory, adds them to the screen, and manipulates them in the process.

I was able to customize this program without changing any of the code, simply by changing out the images in the directory. I added images related to a live performance of a piece from my forthcoming album (as Darth Presley). Up to this point, however, I’d been using the EYESY to respond mainly to audio volume. Some algorithms, such as Image Shadow Grid, use triggers to incite changes in the video output. The EYESY has several trigger modes; I chose the MIDI note trigger mode. This means that in order to trigger the unit, I now have to send MIDI notes to the EYESY.

This constitutes the other significant change to the algorithm that generates the accompaniment: I added a subroutine that sends notes to the EYESY. Since the pitch of the notes doesn’t matter, I simply send the number for middle C (60) to the unit. Otherwise, the subroutine that determines whether a note should be sent to the EYESY functions just like those used to generate any rhythm; that is, it selects between one of three rhythmic patterns for each of the two meters, each of which is stored as an array.
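
Sketched in Python, the trigger logic is just a pattern lookup followed by a fixed-pitch note; the pattern contents below are invented for illustration:

```python
# One trigger pattern per meter; 1 = send a note, 0 = rest (made-up data).
eyesy_patterns = {
    0: [1, 0, 0, 0, 1, 0, 1, 0] * 2,           # a 4/4 pattern, 16 sixteenths
    1: [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0],   # a 3/4 pattern, 12 sixteenths
}

def eyesy_trigger(position: int, meter: int, send_note):
    if eyesy_patterns[meter][position]:
        send_note(60)   # pitch is irrelevant; middle C triggers the EYESY
```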

As with last month, the significant weakness in this month’s experiment is my lack of practice on the WARBL. Likewise, it would have been useful to have had the opportunity to add a third meter to the algorithm that generates the accompaniment. The Organelle patch 2opFM is not quite as expressive with the WARBL as the patches in Experiments 2 and 4 were, and changes in the FM Mod Index aren’t as smooth as I had hoped they’d be. Perhaps if I were to expand the patch further, I’d add a third operator and separate the FM Mod Index into two parts: one where you set the maximum level of the index, and another where you set the current level, so that maximum breath pressure on the WARBL can be made to yield only a subtle amount of modulation.

In terms of the EYESY, I doubt I will begin writing programs for it next month, though I may experiment with existing algorithms that allow the EYESY to use a webcam. Hopefully by October I can be hacking some Python code for the EYESY. My current plan is to experiment with some additive synthesis next month, so stay tuned.

Experiment 4: NES EWI

While this month’s experiment may not seem musically much more advanced than Experiment 2, it has actually been a significant step forward for me. I finished reading Organelle: How to Program Patches in Pure Data by Maurizio Di Berardino. More importantly, I finally got the WiFi working on my Organelle, which allows me to transfer patches back and forth between my laptop and the Organelle. I used that feature to transfer a patch called NESWave Synth by a user called blavatsky. This patch uses waveforms from the Nintendo Entertainment System, the Commodore 64, and others as the basis for a synthesizer. The synthesizer also allows one to mix in some granular synthesis, add delay, add a phasor, and other fancy features.

I made one minor tweak to NESWave Synth. In Experiment 2 I used my WARBL wind controller to control the filter of Analog Style, and I wanted to do the same with NESWave Synth. On Analog Style, the resonance of the filter is on knob 3 and the cutoff frequency is on knob 4; on NESWave Synth, these two settings are reversed. So, I edited NESWave Synth so that resonance is on knob 3 and cutoff frequency is on knob 4, and titled this new version of the patch NES EWI. This allows me to go from controlling Analog Style to NES EWI without changing the settings on my WARBL.

NESWave Synth / NES EWI has a lot of other features and settings. During this experiment, I set up all the parameters of the synth the way I wanted and didn’t make any changes to the patch as I performed, although again the breath pressure of the WARBL was controlling the filter cutoff frequency. Another user noted that NESWave Synth is complicated enough to warrant patch storage, although to the best of my knowledge no one has implemented such a feature yet.

The tweak I made to NESWave Synth is insignificant enough not to warrant coverage here. Accordingly, I’ll go over the changes I made to the Pure Data algorithm that generated the accompaniment for Experiment 2. Experiment 2 uses the meter 4/4 exclusively, and I’ve been wanting to build an algorithm that randomly selects a musical meter at the beginning of each phrase. While the basic mechanics of this are easy, in order to add a second meter I have to double the number of arrays that define the musical patterns.

In Experiment 4 I added the possibility of 3/4. Choosing between the two meters is simple: inside the subroutine pd instrumentchoice I added a simple bit of code that randomly chooses between 0 and 1, and then sends that value out as currentmeter.

However, making this change causes three problems. The first is that the length of each measure now has to change from 16 sixteenth notes to 12 sixteenth notes. That problem is solved in the main routine by adding an object that receives the value of currentmeter and uses it to select between passing the number 16 or the number 12 to the right inlet of a mod operation on the global sixteenth-note counter. This value overwrites the initial value of 16 in the object % 16. As I write this, I realize I also need to reset the counter to 0 whenever I send a new meter so that every phrase starts on beat 1. I can make that change in the next iteration of the algorithm.

The next problem is that I had to change the length of each phrase from 64 (4 x 16) for 4/4 to 48 (4 x 12) for 3/4. This is solved in exactly the same way: the value of currentmeter goes to an object that selects either 64 or 48 and passes that value to the right inlet of a mod operation, overwriting the initial value of 64. Note that I also pass the value of currentmeter to a horizontal radio button so I can see what the current meter is. I can’t say that I actually used this in performance, but as I practice with this algorithm I should get better at changing on the fly between a 4/4 feel and a 3/4 feel. This should also be much easier for me when I play an instrument I am more comfortable with than the EWI. I have also added a visual metronome using sel 0 4 8 12, each value of which passes to a separate bang object, causing each bang to flash on each successive beat. In future versions of this algorithm I may choose to just have it show where beat one is, as counting beats will become more complicated as I add asymmetrical meters.
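
Both overrides boil down to the same modulo arithmetic, sketched here in Python with the constants given above (the names are mine):

```python
MEASURE_LENGTHS = {0: 16, 1: 12}   # sixteenths per measure: 4/4 vs. 3/4
PHRASE_LENGTHS = {0: 64, 1: 48}    # sixteenths per phrase: 4 x 16 vs. 4 x 12

def positions(counter: int, meter: int):
    """Return (position in measure, position in phrase) for a global tick."""
    return counter % MEASURE_LENGTHS[meter], counter % PHRASE_LENGTHS[meter]

# The visual metronome flashes on the beats, i.e., every fourth sixteenth
# note, which is what sel 0 4 8 12 picks out of the position in the measure.
in_measure, in_phrase = positions(37, 0)
print(in_measure, in_measure % 4 == 0)   # 5 False: not on a beat
```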

The final problem is that every subroutine that generates notes (pd makekick, pd makesnare, pd makeclap, pd makehh, pd makecymbal, pd maketoms, pd makecowbell, pd makepizz, pd makekeys, pd makefm) needs to be able to switch between using patterns for 4/4 and patterns for 3/4. While I made these changes to all 10 subroutines, the process is the same for each, so I’ll only show one version. Let’s investigate the pd makekick subroutine. The object inlet receives the counter for the current sixteenth note, modded to 16 or 12. This value is then passed to the left inlet of two different spigot objects. In Pure Data, a spigot passes the value at its left inlet to its outlet if the value at its right inlet is greater than zero. Thus, we can use the value of currentmeter to select which spigot gets turned on, and conversely which one gets turned off, using the 0 and 1 message boxes.
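
A spigot is simple enough to model in a few lines of Python, which may help readers who don’t know Pd (this is my gloss, not Pd source):

```python
def spigot(value, gate):
    """Pass value through when gate is nonzero, like Pd's spigot object."""
    if gate:
        return value     # forwarded to the outlet
    return None          # blocked: nothing comes out

current_meter = 1                            # 3/4 is active
four_four = spigot(10, current_meter == 0)   # None: this path is shut off
three_four = spigot(10, current_meter == 1)  # 10: this path is open
```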

Now that we know which meter is active, we pass the value to one of two subroutines that pick the current pattern. One of these subroutines, pd makekick_four, is for 4/4; the other, pd makekick_three, is for 3/4. Both have essentially the same structure, so let’s look inside pd makekick_four. This subroutine uses the same structure as pd makekick. Again, the inlet receives the current value of the sixteenth-note counter, and again we use spigots to route this value; however, this time we use three spigots, as there are three possible patterns. This routing is accomplished using the current value of the pattern array, whose 0 position stores the value for kick drum patterns. Technically speaking, there are four different values, 0, 1, 2, and 3, with the value 0 meaning that there is no active pattern, resulting in no kick drum. Again, a sel statement that passes to a series of 0 and 1 message boxes turns the three spigots on and off. The tabread objects below read from three different patterns: kick1_four, kick2_four, and kick3_four. The value at this point is passed back out to the pd makekick subroutine. Since the possible values, 0, 1, 2, or 3, are the same whether the pattern is in 4/4 or 3/4, they are then passed to the rest of the subroutine, which either makes an accented note, makes an unaccented note, or calculates a 50% chance of an unaccented note occurring. A fourth possibility, no note, happens when the value is 0; by not including 0 in the object sel 1 2 3, we ensure no note will happen in that case.
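
Here is a condensed Python sketch of the whole makekick flow. The pattern data is invented to stand in for the kick1_four, kick2_four, and kick3_four arrays, and the exact mapping of the step values 1, 2, and 3 to accented, unaccented, and 50% chance is my assumption; the post doesn’t spell it out:

```python
import random

# Invented stand-ins for kick1_four / kick2_four / kick3_four.
kick_patterns_four = {
    1: [1, 0, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0, 2, 0, 0, 0],
    2: [1, 0, 2, 0, 1, 0, 2, 0, 1, 0, 2, 0, 1, 0, 2, 0],
    3: [1, 0, 0, 2, 0, 0, 3, 0, 1, 0, 0, 2, 0, 0, 3, 0],
}

def makekick_four(position: int, pattern_choice: int):
    """pattern_choice 0 = no active pattern; 1-3 select one of the arrays."""
    if pattern_choice == 0:
        return None                     # no kick drum at all
    value = kick_patterns_four[pattern_choice][position]
    if value == 1:                      # assumed mapping, see lead-in
        return "accented note"
    if value == 2:
        return "unaccented note"
    if value == 3:                      # 50% chance of an unaccented note
        return "unaccented note" if random.random() < 0.5 else None
    return None                         # value 0: sel 1 2 3 matches nothing
```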

While I still haven’t looked into programming the EYESY, I did revise the Pure Data algorithm that controls it. In Experiment 2, all of the changes to the EYESY occur very gradually, resulting in slowly evolving imagery. In Experiment 4, I wanted to add some abrupt changes that occur on the beat. Most EYESY algorithms use knob 5 to control the background color, and I figured that would be the most apparent change possible. So, in the subroutine pd videochoice I added code that randomly chooses four different colors (one for each potential beat) and stores them in an array called videobeats. Notice that each color is defined as a value between 0 and 126 (I realized while writing this that I could have used random 128), as we use MIDI to control the EYESY, and MIDI parameters have 128 different values.
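
In Python, the color selection amounts to a one-liner; Pd’s random 127 outputs 0 through 126, hence the range below:

```python
import random

# Four background colors, one per potential beat, as MIDI values 0-126.
videobeats = [random.randrange(127) for _ in range(4)]
```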

Now we have to revise pd videoautomation to allow for this parameter control. The first four knobs use the same process that we used in Experiment 2. For the fifth knob, the 4 position in the array videostatus, we first check whether changes should happen for that parameter by passing the output of tabread videostatus to a spigot. When tabread videostatus returns a one, the spigot turns on; otherwise, it shuts off. When the spigot is open, the current value of the sixteenth-note counter is passed to an object that mods it by 4. When this value is 0, we are on a beat. We then have a counter that keeps track of the current beat number. However, we have to mod that by either 4 for 4/4 or 3 for 3/4. This is accomplished using expr (($f1) % ($f2)). Here we pass the current meter to the right inlet, corresponding to $f2; we do this by using the value of currentmeter to select between a 4 and a 3 message box. We can then get the color corresponding to the beat by reading the value of videobeats, and send it out to the EYESY using ctrlout 25 (25 being the controller number for the fifth knob of the EYESY).
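
Stitched together in Python, the fifth-knob logic looks roughly like this; the MIDI-send function is a placeholder for Pd’s ctrlout, and videobeats is chosen as in the previous sketch:

```python
import random

videobeats = [random.randrange(127) for _ in range(4)]  # from pd videochoice
BEATS_PER_MEASURE = {0: 4, 1: 3}   # 0 = 4/4, 1 = 3/4
beat_counter = 0

def send_cc(controller: int, value: int):
    ...  # placeholder for actual MIDI output (Pd's ctrlout)

def videoautomation(sixteenth: int, meter: int, video_status_4: int):
    """On each beat, send the next videobeats color to the EYESY's knob 5."""
    global beat_counter
    if not video_status_4:             # spigot closed: parameter is frozen
        return
    if sixteenth % 4 == 0:             # every fourth sixteenth is a beat
        beat = beat_counter % BEATS_PER_MEASURE[meter]   # expr ($f1) % ($f2)
        send_cc(25, videobeats[beat])  # controller 25 = EYESY knob 5
        beat_counter += 1
```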

In terms of the video, it is clear that I really need to practice my EVI fingerings. I am considering forcing the issue by booking a gig on the instrument so that I have to practice more. I found that the filter control on Analog Style felt more expressive than the control on NES EWI, though perhaps I need to recalibrate the breath control on the WARBL. I hope to do a more substantial hack next month, perhaps creating a two-operator FM instrument. I also hope to connect the EYESY to WiFi so I can add more programs to it.