Experiment 12: MBTA

As I mentioned in my previous experiment, it has been a busy and difficult semester for me for family reasons. Accordingly, I am two and a half months behind schedule in delivering my 12th and final experiment for this grant cycle. Additionally, I feel my work on this process over the past few months has been a bit underwhelming, but unfortunately this work is what my current bandwidth allows for. I hope to make up for it in the next year or so.

Anyway, the experiment for this month is similar to the one done for experiment 11. However, in this experiment I generate vector images that reference maps of Boston’s subway system (the MBTA). Due to the complexity of the MBTA system, I’ve created four different algorithms, each reducing the visual data to one quadrant of the map. Thus, the individual programs are called MBTA NW, MBTA NE, MBTA SE, & MBTA SW.

Since all four algorithms are basically the same, I’ll use MBTA – NE as an example. In each program, knob 5 is used for the background color. There were far more attributes I wanted to control than knobs I had at my disposal, so I decided to link them together. Thus, for MBTA – NE, knob 1 controls the red line attributes, knob 2 controls the blue and orange line attributes, knob 3 controls the green line attributes, and knob 4 controls the silver line attributes. Each of the four programs assigns the knobs to different combinations of colored lines based upon the complexity of the MBTA map in that quadrant.

The attributes that knobs 1-4 control include: line width, scale (amount of wiggle), color, and number of superimposed lines. The line width ranges from one to ten pixels, and is inversely proportional to the number of superimposed lines, which ranges from one to eight. Thus, the more lines there are, the thinner they are. The scale, or amount of wiggle, is likewise inversely proportional to the line width; that is, the thinner the lines, the more they can wiggle. Finally, color is defined using RGB numbers. In each case, only one value (the red, the green, or the blue) changes with the knob value. The amount of change is a twenty-point range centered around the optimal value (the red value uses a narrower ten-point range). We can see this implemented below in the initialization portion of the program.

	# EYESY knob values (etc.knob1 through etc.knob5) range from 0.0 to 1.0.
	RElinewidth = int(1 + etc.knob1 * 10)    # red line width in pixels
	BOlinewidth = int(1 + etc.knob2 * 10)    # blue / orange line width
	GRlinewidth = int(1 + etc.knob3 * 10)    # green line width
	SIlinewidth = int(1 + etc.knob4 * 10)    # silver line width
	etc.color_picker_bg(etc.knob5)           # knob 5 picks the background color
	REscale = 55 - 50 * etc.knob1            # wiggle scale shrinks as line width grows
	BOscale = 55 - 50 * etc.knob2
	GRscale = 55 - 50 * etc.knob3
	SIscale = 55 - 50 * etc.knob4
	thered = int(89 + 10 * etc.knob1)        # red component, 89-99
	redcolor = pygame.Color(thered, 0, 0)
	theorange = int(40 + 20 * etc.knob2)     # green component, 40-60
	orangecolor = pygame.Color(99, theorange, 0)
	theblue = int(80 + 20 * etc.knob2)       # blue component, 80-100
	bluecolor = pygame.Color(0, 0, theblue)
	thegreen = int(79 + 20 * etc.knob3)      # green component, 79-99
	greencolor = pygame.Color(0, thegreen, 0)
	thesilver = int(46 + 20 * etc.knob4)     # blue component, 46-66
	silvercolor = pygame.Color(50, 53, thesilver)
	j = int(9 - (1 + 7 * etc.knob1))         # number of superimposed lines, 1-8

The value j stands for the number of superimposed lines. This then transitions into the first of four loops, one for each of the groups of lines. Below we see the code for the red line portion of the program. The other three loops are much the same, but are much longer due to the complexity of the MBTA map. An X and a Y coordinate are set inside this loop for every point that will be used. REscale is multiplied by a value from etc.audio_in, which is divided by 33000 in order to turn that audio level into a decimal ranging (more or less) from -1 to 1. This scales the value of REscale down to a smaller value, which is added to the fixed coordinate value. It is worth noting that because audio values can be negative, the fixed coordinate sits at the center of the potential outcomes. Offsetting the index into etc.audio_in by (i*11), (i*11)+1, (i*11)+2, & (i*11)+3 lends a suitable variety of wiggles to each instance of a line.

	j = int(9 - (1 + 7 * etc.knob1))    # number of superimposed red lines
	for i in range(j):
		# Each endpoint is a fixed coordinate plus a scaled audio sample;
		# each copy of the line reads a different slice of etc.audio_in.
		AX = int(320 + REscale * (etc.audio_in[i * 11] / 33000))
		AY = int(160 + REscale * (etc.audio_in[(i * 11) + 1] / 33000))
		BX = int(860 + REscale * (etc.audio_in[(i * 11) + 2] / 33000))
		BY = int(720 + REscale * (etc.audio_in[(i * 11) + 3] / 33000))
		pygame.draw.line(screen, redcolor, (AX, AY), (BX, BY), RElinewidth)
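
The longer loops repeat this pattern over many more anchor points. As a sketch of how that might generalize (the coordinates below are placeholders rather than the actual map data, and the surrounding names, such as etc, screen, GRscale, GRlinewidth, greencolor, and j, are assumed to be set up as above), the per-point arithmetic can be factored into a helper and drawn with pygame.draw.lines:

	# Sketch only: these anchor points are illustrative placeholders.
	GREEN_POINTS = [(100, 80), (250, 200), (400, 260), (600, 300)]

	def wiggled(point, scale, index):
		# Offset a fixed (x, y) anchor by scaled audio input values.
		n = len(etc.audio_in)
		x, y = point
		wx = int(x + scale * (etc.audio_in[index % n] / 33000))
		wy = int(y + scale * (etc.audio_in[(index + 1) % n] / 33000))
		return (wx, wy)

	for i in range(j):
		# Each superimposed copy reads a different slice of the audio
		# buffer, so each copy wiggles differently.
		points = [wiggled(p, GRscale, (i * 11) + 2 * k)
			for k, p in enumerate(GREEN_POINTS)]
		pygame.draw.lines(screen, greencolor, False, points, GRlinewidth)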

I arbitrarily limited each program to 26 points (one for each letter of the alphabet). This makes the vector graphic a true abstraction of the MBTA map. The silver line in particular gets quite complicated, so I’m never really able to represent it fully. That being said, I think anyone familiar with Boston’s subway system would recognize it if the similarity were pointed out to them. I also imagine any daily commuter on the MBTA would recognize the patterns in fairly short order. However, in watching my own video, which uses music generated by a PureData algorithm that will be used to write a track for my next album, I noticed that the green line in MBTA – NE and MBTA – SW needs some correction.

The EYESY has been fully incorporated into my live performance routine as Darth Presley. You can see below a performance at the FriYay series at the New Bedford Art Museum; you’ll note that the projection is the Random Lines algorithm that I wrote. Likewise, graduating senior Edison Roberts used the EYESY for his capstone performance as the band Geepers! You’ll see a photo of him below with a projection using the Random Concentric Circles algorithm that I wrote. I definitely have more ideas for how to use the EYESY in live performance. In fact, others have started to use ChatGPT to create EYESY algorithms.

Ultimately my work on this grant project has been fruitful. To date the algorithms I’ve written for the Organelle and EYESY have circulated pretty well on Patchstorage.com (clearly the Organelle is the more popular format of the two) . . .

2opFM (Organelle) 2 likes, 586 views, 107 downloads
Additive Odd / Even (Organelle) 6 likes, 969 views, 184 downloads
Bass Harmonica (Organelle) 7 likes, 825 views, 174 downloads
Basic Circle (EYESY) 307 views, 7 downloads
Wavetable Sampler (Organelle) 2 likes, 796 views, 123 downloads
Basic Circles (EYESY) 1 like, 279 views, 16 downloads
Random Lines (EYESY) 198 views, 18 downloads
Random Concentric Circles (EYESY) 132 views, 18 downloads
Colored Rectangles (EYESY) 1 like, 149 views, 31 downloads
Random Rectangles (EYESY) 168 views, 26 downloads
Random Radii (EYESY) 1 like, 169 views, 16 downloads
Constellations (EYESY) 1 like, 264 views, 14 downloads
MBTA (EYESY) 21 views
Total (Organelle) 4 patches, 17 likes, 3,176 views, 588 downloads
Total (EYESY) 23 patches, 4 likes, 1,687 views, 146 downloads
Total: 27 patches, 21 likes, 4,863 views, 734 downloads

Experiment 1: Granular Freezer

The first experiment in my project funded by the Digital Innovation Lab at Stonehill College investigates the use of the preset patch Granular Freezer (designed by Critter & Guitari) to process audio. In this case the audio comes from a lap steel guitar. Since I’m just getting used to both the Organelle and the EYESY, I will simply be improvising, using a preset with the EYESY.

While describing his concept of stochastic music, composer Iannis Xenakis essentially also described the idea of granular synthesis in his book Formalized Music: Thought and Mathematics in Composition . . .

“Other paths also led to the same stochastic crossroads . . . natural events such as the collision of hail or rain with hard surfaces, or the song of cicadas in a summer field. These sonic events are made out of thousands of isolated sounds; this multitude of sounds, seen as a totality, is a new sonic event. . . . Everyone has observed the sonic phenomena of a political crowd of dozens or hundreds of thousands of people. The human river shouts a slogan in a uniform rhythm. Then another slogan springs from the head of the demonstration; it spreads towards the tail, replacing the first. A wave of transition thus passes from the head to the tail. The clamor fills the city, and the inhibiting force of voice and rhythm reaches a climax. It is an event of great power and beauty in its ferocity. Then the impact between the demonstrators and the enemy occurs. The perfect rhythm of the last slogan breaks up in a huge cluster of chaotic shouts, which also spreads to the tail. Imagine, in addition, the reports of dozens of machine guns and the whistle of bullets adding their punctuations to this total disorder. The crowd is then rapidly dispersed, and after sonic and visual hell follows a detonating calm, full of despair, dust, and death.”

This passage by Xenakis goes into dark territory very quickly, but it is a scenario that harkens back to the composer’s own past: he lost an eye in a confrontation with occupying British tanks during the Greek Civil War.

Granular synthesis was theorized before it was possible; its complexity requires computing power that did not yet exist when the technique was first described. Granular synthesis takes an audio waveform, sample, or audio stream, chops it into grains, and then resynthesizes the sound by combining those grains into clouds. The synthesist can often control aspects such as grain length, grain playback speed, and cloud density, among others. Since the user can control the playback speed of the grains independently of the speed at which the sound travels through a group of grains, granular synthesis allows one to stretch or compress the timing of a sound without affecting its pitch, as well as change the pitch of a sound without affecting its duration.
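
Granular Freezer itself is written in Pure Data, but the core time-stretching idea can be sketched in a few lines of Python. This is a naive overlap-add sketch of my own, assuming a mono NumPy array of samples; it is not the patch’s actual code:

	import numpy as np

	def granular_stretch(audio, sr, stretch=2.0, grain_ms=50, overlap=2):
		# Naive granular time-stretch: longer duration, same pitch.
		grain = int(sr * grain_ms / 1000)   # grain length in samples
		hop = grain // overlap              # step size through the input
		window = np.hanning(grain)          # fade each grain in and out
		out = np.zeros(int(len(audio) * stretch) + grain)
		for start in range(0, len(audio) - grain, hop):
			# Grains play back at the original speed (pitch preserved)
			# but are placed farther apart in the output (time stretched).
			dest = int(start * stretch)
			out[dest:dest + grain] += audio[start:start + grain] * window
		return out

Pitch shifting without changing duration works the other way around: each grain is resampled to a new pitch while the grain placement stays the same.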

Granular Freezer is sort of a budget version of Mutable Instruments’ Clouds. I call it a budget version because Clouds allows for the control of more than a half dozen parameters of the sound, while Granular Freezer gives the user control of four: grain size, spread, cloud density, and wet / dry mix. These four parameters correspond to the four prominent knobs on the Organelle. Spread can be thought of as a window of past audio that the algorithm draws from to create grains, functioning somewhat like a delay time. As you turn the spread down, the sound starts to resemble an eccentric slap (short) delay. Wet / dry mix is a very standard parameter in audio processing: it is the ratio of processed sound to original sound that is sent to the output. The Aux button can be used to freeze the sound, that is, to hold the current spread in memory until the sound is unfrozen. You may also use a pedal to trigger this function. One feature of the algorithm that I didn’t use was the MIDI keyboard’s ability to transpose the audio.
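
As a quick illustration of the wet / dry idea (my own snippet, not the patch’s Pd code), the mix is just a crossfade between the original and processed signals:

	def wet_dry_mix(dry, wet, mix):
		# mix = 0.0 yields only the original signal; 1.0 only the processed one.
		return (1.0 - mix) * dry + mix * wet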

Since this is my first experiment with the Organelle, I’m not going to go through the algorithm in great detail, but I will give an overview. All patches for the Organelle rely on a patch called the mother patch, which should never be altered. It controls input to and output from the Organelle, and values are passed to and from it in order to avoid input and output errors. You can download the mother patch so you can develop instruments on a computer without having access to an Organelle. This is also a good way to check for coding errors before you upload a patch to your Organelle.

If we double click on the subroutine pd MIDI, we can see, amongst other things, how the Organelle translates incoming MIDI controller data (controller numbers 21, 22, 23, & 24) into knob data . . .
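
In Python terms, the mapping works out to something like the hypothetical handler below (the mother patch itself is written in Pure Data, so the names here are purely illustrative). MIDI CC values run 0-127, while the Organelle’s knob values run 0.0-1.0:

	# Hypothetical illustration of the CC-to-knob mapping described above.
	KNOB_CCS = {21: 'knob1', 22: 'knob2', 23: 'knob3', 24: 'knob4'}

	def handle_cc(cc_number, cc_value, knobs):
		if cc_number in KNOB_CCS:
			knobs[KNOB_CCS[cc_number]] = cc_value / 127.0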

When we double click on pd audioIO, we can see how the mother patch handles incoming audio (left) as well as outgoing audio (right) . . .

Any given Organelle patch typically consists of a main program (by convention named main.pd), as well as a number of self-contained subpatches called abstractions. In the case of Granular Freezer, in addition to the main program there are three abstractions: latch-notes.pd, grain.pd, and grainvoice.pd. Given the complexity of the program, I will only go over main.pd.

In the upper right-hand quadrant we see the portion of the algorithm that handles input and output. It also controls the ability to freeze the audio (pd freezer). The objects catch~ grainoutL and catch~ grainoutR receive the granularly processed audio from grainvoice.pd. The object r wetdry receives data from knob 4, allowing the user to control the ratio of unprocessed sound to processed sound.

The lower half of main.pd sets the numerical range for each of the four knobs, sends the data wherever it is needed (as was done with wetdry), and sets & sends the text to be displayed on the Organelle’s screen, so that the function of each knob is shown along with its current status. In the message box at the bottom of the screen, we see a similar item related to the aux key, screenLine5. The part of the program that updates the display for screenLine5 is in pd freezer.

In the improvisation I run a six string lap steel in C6 tuning through a volume pedal, and then into the Organelle. I also use a foot switch to trigger the freeze function of the patch. Musically, I spend the bulk of the time improvising over an E pedal provided by an eBow on the open low E string. A lot of the improvisation is devoted to arpeggiating Am, C#m, and Em chords on the upper three strings of the instrument. At the beginning of the improvisation the mix is at 80% (only 20% of the original sound); I quickly move to a 100% mix. Near the end I move to sliding pitches with the eBow. During this section I experiment with the spread: there’s a passage where I move the spread close to its minimum, which yields that eccentric slap delay sound (starting around 8:28). As I move the spread towards its upper end, I can use it as a harmonizer, since the spread holds the previous pitch in memory as I move on to a new harmonizing pitch. At the end (around 9:15) I move the grain size down to its minimum (1 millisecond), and then turn the cloud density down to the bottom to function as a fade out.

With the EYESY I simply improvised the video using the audio of the video recording I made. In the mode I was using, the first two knobs seemed to control the horizontal and vertical configuration of the polygons. These controls allow you to position the polygons in rows or in a diamond lattice, and you can even use them to make the polygons superimpose to an extent. The middle knob seems to increase the size of the polygons, making them overlap a considerable amount. The two knobs on the right control the colors of the polygons and the background respectively. I used these to start from a green-on-black configuration in honor of the Apple IIe computer.

The video below will allow you to watch the EYESY improvisation and the musical improvisation simultaneously . . .

As I mentioned earlier, I consider Granular Freezer to be a budget version of Clouds. I have the CalSynth clone of Clouds, Monsoon, in a synthesizer rack I’m assembling. It is a wonderful, expressive instrument. While Granular Freezer is more limited, it is still capable of creating some wonderful sounds. There are already a couple of variants of Granular Freezer posted on PatchStorage. As I get more familiar with programming for the Organelle, I should be able to create a version of the patch that allows for far greater control of its parameters.

All in all, I feel this was a successful experiment, and I should be able to reuse the audio for other musical projects. However, there was some room for improvement. There was a little bit of 60-cycle hum coming through the system, likely due to a cheap pickup in my lap steel. Likewise, it would be nice to clean up my studio to make for a less cluttered YouTube video. As I mentioned earlier, it would be good to add more parameters to the Granular Freezer patch. I used Zoom to capture the video created by the EYESY, and it added my name to the lower left-hand corner of the screen; next time I may try using QuickTime for video capture to avoid that issue. By the end of the summer I hope to develop some PureData algorithms to control the EYESY so it can perform the video in an automated fashion. See you all back here next month for the next installment.