Experiment 1: Granular Freezer

The first experiment in my project funded by the Digital Innovation Lab at Stonehill College investigates the use of the preset patch Granular Freezer (designed by Critter & Guitari) to process audio. In this case the audio comes from a lap steel guitar. Since I'm just getting used to both the Organelle and the EYESY, I will simply improvise using a preset with the EYESY.

While describing his concept of stochastic music, composer Iannis Xenakis essentially also described the idea of granular synthesis in his book Formalized Music: Thought and Mathematics in Composition . . .

“Other paths also led to the same stochastic crossroads . . . natural events such as the collision of hail or rain with hard surfaces, or the song of cicadas in a summer field. These sonic events are made out of thousands of isolated sounds; this multitude of sounds, seen as a totality, is a new sonic event. . . . Everyone has observed the sonic phenomena of a political crowd of dozens or hundreds of thousands of people. The human river shouts a slogan in a uniform rhythm. Then another slogan springs from the head of the demonstration; it spreads towards the tail, replacing the first. A wave of transition thus passes from the head to the tail. The clamor fills the city, and the inhibiting force of voice and rhythm reaches a climax. It is an event of great power and beauty in its ferocity. Then the impact between the demonstrators and the enemy occurs. The perfect rhythm of the last slogan breaks up in a huge cluster of chaotic shouts, which also spreads to the tail. Imagine, in addition, the reports of dozens of machine guns and the whistle of bullets adding their punctuations to this total disorder. The crowd is then rapidly dispersed, and after sonic and visual hell follows a detonating calm, full of despair, dust, and death.”

This passage by Xenakis goes into dark territory very quickly, but the scenario harkens back to the composer's own past: he lost an eye in a clash with occupying British tanks during the Greek Civil War.

Granular synthesis was theorized before it was possible to realize: its complexity requires computing power that did not yet exist when the idea was first proposed. Granular synthesis takes an audio waveform, sample, or stream, chops it into tiny grains, and then resynthesizes the sound by combining those grains into clouds. The synthesist can often control aspects such as grain length, grain playback speed, and cloud density, among others. Since the playback speed of individual grains can be controlled independently of the rate at which playback moves through the source sound, granular synthesis allows one to stretch or compress the timing of a sound without affecting its pitch, as well as to change the pitch of a sound without affecting its duration.
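To make the time-stretching idea concrete, here is a minimal Python sketch of granular time-stretching (my own illustrative code, not the Granular Freezer algorithm): it chops a mono signal into windowed grains and overlap-adds them at a different rate than it reads them, so duration changes while pitch does not. It assumes NumPy, and the function name and default values are mine.

```python
import numpy as np

def granular_stretch(x, sr, stretch=2.0, grain_ms=50, density=2.0):
    """Chop x into windowed grains and overlap-add them at a slower
    (or faster) rate, stretching time without changing pitch."""
    grain_len = int(sr * grain_ms / 1000)
    hop_in = max(1, int(grain_len / density))  # step through the source
    hop_out = int(hop_in * stretch)            # step through the output
    window = np.hanning(grain_len)             # fade each grain in and out
    out = np.zeros(int(len(x) * stretch) + grain_len)
    pos_out = 0
    for pos_in in range(0, len(x) - grain_len, hop_in):
        out[pos_out:pos_out + grain_len] += x[pos_in:pos_in + grain_len] * window
        pos_out += hop_out
    return out / (np.max(np.abs(out)) + 1e-9)  # normalize to avoid clipping
```

Transposition without a change of duration works the other way around: resample each grain to shift its pitch while keeping the two hop sizes equal.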

Granular Freezer is sort of a budget version of Mutable Instruments Clouds. I call it a budget version because Clouds allows for the control of more than a half dozen parameters, while Granular Freezer gives the user control of four: grain size, spread, cloud density, and wet / dry mix. These four parameters correspond to the four prominent knobs on the Organelle. Spread can be thought of as a window of past audio that the algorithm draws from to create grains, and it functions somewhat like a delay time; as you turn the spread down, the output starts to sound like an eccentric slap (short) delay. Wet / dry mix is a very standard parameter in audio processing: the ratio of processed sound to original sound sent to the output. The Aux button can be used to freeze the sound, that is, to hold the current spread window in memory until the sound is unfrozen. You may also use a pedal to trigger this function. One feature of the algorithm that I didn't use is the ability to transpose the audio from the MIDI keyboard.
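As a rough mental model of spread and freeze (again my own Python sketch, not the patch's actual Pd logic), imagine a circular buffer of recent input: grains are drawn at random from the most recent spread-sized slice of that buffer, and freezing simply stops writing to it.

```python
import numpy as np

class SpreadBuffer:
    """Circular buffer modeling the 'spread' window: grains are drawn
    from the last `spread_s` seconds of input; freezing stops writes."""
    def __init__(self, sr, max_spread=2.0):
        self.sr = sr
        self.buf = np.zeros(int(sr * max_spread))
        self.write = 0
        self.frozen = False

    def push(self, block):
        if self.frozen:
            return                                  # hold the current window
        idx = (self.write + np.arange(len(block))) % len(self.buf)
        self.buf[idx] = block
        self.write = (self.write + len(block)) % len(self.buf)

    def grain(self, spread_s, grain_len):
        """Pull a windowed grain starting somewhere in the spread window."""
        span = max(int(self.sr * spread_s), grain_len)
        offset = np.random.randint(grain_len, span + 1)
        start = (self.write - offset) % len(self.buf)
        idx = (start + np.arange(grain_len)) % len(self.buf)
        return self.buf[idx] * np.hanning(grain_len)
```

Shrinking the spread forces every grain to come from nearly the same instant of recent audio, which is why the patch converges on that slap-delay character at the knob's low end.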

Since this is my first experiment with the Organelle, I'm not going to go through the algorithm in great detail, but I will give an overview. All patches for the Organelle communicate with a patch called the mother patch, which controls input to and output from the Organelle and should never be altered. Knob, key, and audio values are passed to and from the mother patch in order to avoid input and output errors. You can download the mother patch so you can develop instruments on a computer without having access to an Organelle, which is also a good way to check for coding errors before you upload a patch to your Organelle.

If we double click on the subroutine pd MIDI, we can see, amongst other things, how the Organelle translates incoming MIDI controller data (controller numbers 21, 22, 23, & 24) into knob data . . .
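The mother patch's logic is graphical, but in rough Python terms the translation behaves something like the sketch below. Only the controller numbers 21–24 come from the patch; the function and dictionary names are mine.

```python
# Organelle knobs arrive as MIDI CCs 21-24 with values 0-127; patches
# read them as normalized 0-1 values via `r knob1` ... `r knob4`.
KNOB_CCS = {21: "knob1", 22: "knob2", 23: "knob3", 24: "knob4"}

def cc_to_knob(control, value):
    """Return (knob_name, normalized_value), or None for unrelated CCs."""
    if control in KNOB_CCS:
        return KNOB_CCS[control], value / 127.0
    return None

print(cc_to_knob(22, 64))  # -> ('knob2', 0.504)
```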

When we double click on pd audioIO, we can see how the mother patch handles incoming audio (left) as well as outgoing audio (right) . . .

Any given Organelle patch typically consists of a main program (by convention named main.pd), as well as a number of self-contained subpatches called abstractions. In the case of Granular Freezer, in addition to the main program there are three abstractions: latch-notes.pd, grain.pd, and grainvoice.pd. Given the complexity of the program, I will only go over main.pd.

In the upper right-hand quadrant we see the portion of the algorithm that handles input and output, as well as the ability to freeze the audio (pd freezer). The objects catch~ grainoutL and catch~ grainoutR receive the granularly processed audio from grainvoice.pd. The object r wetdry receives data from knob 4, allowing the user to control the ratio of unprocessed to processed sound.
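The wet / dry stage itself is just a crossfade between the two signals. A one-line Python sketch (not the patch's exact arithmetic) of what r wetdry controls:

```python
import numpy as np

def wet_dry(dry, wet, mix):
    """mix = 0.0 is fully dry, 1.0 is fully wet (the value knob 4 supplies)."""
    return (1.0 - mix) * np.asarray(dry) + mix * np.asarray(wet)
```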

The lower half of main.pd sets the numerical range for each of the four knobs, sends the data wherever it is needed (as was done with wetdry), and sets & sends the text to be displayed on the Organelle's screen, so that the function of each knob is displayed along with its current value. In the message box at the bottom of the screen, we see a similar item for the aux key, screenLine5. The part of the program that updates the display for screenLine5 is in pd freezer.
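Conceptually, each knob's handling reduces to a range mapping plus a display string. A Python sketch of that idea (the range and label here are hypothetical; the real ones are set in main.pd):

```python
def scale_knob(norm, lo, hi):
    """Map a normalized 0-1 knob value into a parameter range."""
    return lo + norm * (hi - lo)

# Hypothetical range for illustration only.
grain_ms = scale_knob(0.25, 1, 500)
print(f"Grain Size: {grain_ms:.0f} ms")  # roughly what a screen line shows
```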

In the improvisation I run a six-string lap steel in C6 tuning through a volume pedal, and then into the Organelle. I also use a foot switch to trigger the freeze function of the patch. Musically, I spend the bulk of the time improvising over an E pedal provided by an eBow on the open low E string. A lot of the improvisation is devoted to arpeggiating Am, C#m, and Em chords on the upper three strings of the instrument. At the beginning of the improvisation the mix is at 80% (only 20% of the original sound), and I quickly move to a 100% mix. Near the end I move to sliding pitches with the eBow. During this portion of the improvisation I experiment with the spread, and there's a portion where I move the spread close to its minimum, which yields that eccentric slap delay sound (starting around 8:28). As I move the spread towards its upper end, I can use it as a harmonizer, since the spread holds the previous pitch in its memory as I move on to a new harmonizing pitch. At the end (around 9:15) I turn the grain size down to its minimum (1 millisecond), and then turn the cloud density all the way down to function as a fade out.

With the EYESY I simply improvised the video using the audio of the video recording I made. In the mode I was using, the first two knobs seemed to control the horizontal and vertical configuration of the polygons. These controls allow you to position them in rows or in a diamond lattice, and you can even get the polygons to superimpose to an extent. The middle knob seems to increase the size of the polygons, making them overlap a considerable amount. The two knobs on the right control the colors of the polygons and the background respectively. I used these to start from a green-on-black configuration in honor of the Apple IIe computer.
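For readers curious what an EYESY mode looks like, here is a minimal Python sketch in the spirit of the mode I used (not the factory mode itself). It assumes the EYESY's documented Python API, where a mode defines setup() and draw(), reads knobs as etc.knob1 through etc.knob5 in the 0–1 range, and draws with pygame; treat the details as assumptions rather than a finished mode.

```python
import math
import pygame

def setup(screen, etc):
    pass

def draw(screen, etc):
    etc.color_picker_bg(etc.knob5)              # background color (knob 5)
    color = etc.color_picker(etc.knob4)         # polygon color (knob 4)
    x_gap = int(etc.knob1 * etc.xres / 4) + 20  # horizontal spacing (knob 1)
    y_gap = int(etc.knob2 * etc.yres / 4) + 20  # vertical spacing (knob 2)
    size = int(etc.knob3 * 100) + 10            # polygon size (knob 3)
    for y in range(0, etc.yres, y_gap):
        for x in range(0, etc.xres, x_gap):
            hexagon = [(x + size * math.cos(i * math.pi / 3),
                        y + size * math.sin(i * math.pi / 3))
                       for i in range(6)]
            pygame.draw.polygon(screen, color, hexagon, 1)
```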

The video below will allow you to watch the EYESY improvisation and the musical improvisation simultaneously . . .

As I mentioned earlier, I consider Granular Freezer to be a budget version of Clouds. I have Monsoon, the CalSynth clone of Clouds, in a synthesizer rack I'm assembling. It is an expressive module that creates wonderful sounds. While Granular Freezer is a more limited instrument, it is still capable of some wonderful sounds of its own. There are already a couple of variants of Granular Freezer posted on PatchStorage. As I get more familiar with programming for the Organelle, I should be able to create a version of the patch that allows for far greater control of parameters.

All in all, I feel this was a successful experiment, and I should be able to reuse the audio for other musical projects. However, there was some room for improvement. A little 60-cycle hum came through the system, likely due to the cheap pickup in my lap steel. Likewise, cleaning up my studio would make for a less cluttered YouTube video. As I mentioned earlier, it would be good to add more parameters to the Granular Freezer patch. I used Zoom to capture the video created by the EYESY, and it added my name to the lower left-hand corner of the screen; next time I may try using QuickTime for video capture to avoid that issue. By the end of the summer I hope to develop some Pure Data algorithms to control the EYESY so it performs the video in an automated fashion. See you all back here next month for the next installment.

Digital Innovation Grant

I am proud to announce that I have received a Digital Innovation Grant from the Digital Innovation Lab at the MacPháidín Library. The project will be centered on learning sound synthesis in Pure Data using Critter & Guitari's Organelle. An additional component of the project will be video synthesis using the EYESY (also made by Critter & Guitari). The EYESY generates video in real time that reacts to MIDI data and / or audio. Programs for the EYESY can be written in Python or openFrameworks/Lua.

The format of this project will be a series of monthly experiments that will include both the Organelle and EYESY. Some of these will use the Organelle as an audio processor (similar to a guitar effects pedal), while others will use it as a synthesizer / sampler using a variety of approaches to sound synthesis / sampling. Many of these experiments will make use of computer-assisted composition algorithms created in Pure Data, building on research I've been doing over the past year. Each experiment will also be a brief musical composition inspired by experimental music as defined by Lejaren Hiller in his seminal book Experimental Music: Composition with an Electronic Computer.

The outcome of these experiments will be monthly informal reports made through this blog, as well as a series of YouTube videos of the works themselves, which will be embedded into the blog entries. A year from now, once I've created 12 such musical experiments, I will give a lecture recital presenting them. One final outcome will be to consider adding Pure Data-oriented assignments to VPM 248: Sound Synthesis. I look forward to working on this project, and will commence once the materials come in.

The History / Impact of Drum Machines: Part 2

The Roland TR-808 is easily the most influential drum machine of all time. Produced from 1980 through 1983, this analog drum machine has been used in thousands of songs, has been name-checked in the lyrics of dozens more, has arguably influenced drum machine design more than any other product, and has been copied & emulated in both hardware & software. However, it would take the second most influential drum machine to take over the charts.

Initially the 808 was considered a commercial failure. Roland intended the instrument to be used by musicians to make demo recordings; the logic was that by owning an 808, a songwriter could avoid hiring a drummer for a demo session, which would also reduce recording costs. However, professional songwriters felt that using the artificial-sounding 808 in a demo would make their song less likely to be taken on by a recording artist, thus limiting a songwriter's potential income. Furthermore, while it retailed for far less than some other more advanced drum machines, at $1,195 it was quite expensive ($4,558 in 2022 when adjusted for inflation).

It took a younger generation of musicians who bought 808s second hand (reportedly for as little as $100 in 1983) to embrace the instrument because of its artificial nature, not in spite of it. Namely, I'm talking about hip-hop, as well as musicians who created dance music in styles that would evolve into techno and electronica. In particular, Afrika Bambaataa's "Planet Rock" (1982) was the first hip-hop tune to widely show the potential of the 808.

However, there were some chart toppers early on that featured the 808. “Sexual Healing” (1982) by Marvin Gaye was the first hit to use the 808. Other (non-hip hop) hits to use the 808 would appear on the charts well after the 808 was no longer in production. For instance, Whitney Houston’s “I Wanna Dance With Somebody” didn’t appear until 1987.

As suggested earlier, the true game changer in terms of influence on the charts was the LinnDrum. Produced from 1982 through 1985, the LinnDrum was the first widely available drum machine that used digitally sampled recordings of real drum sounds. The LinnDrum, also called the LM-2, was an updated version of the LM-1 (produced from 1980 through 1983). Both machines used 8-bit samples, which is comically low by contemporary standards but was state of the art in the early eighties. The LM-1 featured a sample rate of 28 kHz, while the LM-2 increased that to 35 kHz, producing frequencies up to 14 kHz and 17.5 kHz respectively (the Nyquist limit: half the sample rate).

While the LM-2 was significantly less expensive than the LM-1, both were quite costly, with the LM-1 retailing for about $4,995 and the LM-2 selling for $2,995 ($19,055 and $9,426 respectively in 2022 dollars). Despite the expense, several high-profile bands and artists used these drum machines. The Human League, Gary Numan, Michael Jackson, and Prince all used the LM-1. The lower price of the LM-2 spread its usage to bands and artists including Peter Gabriel, Fleetwood Mac, Stevie Wonder, Giorgio Moroder, ABC, Devo, and John Carpenter. Given that many of these artists were prominent chart toppers of the era, it's clear that this is when drum machines truly began to have a significant impact on the market.

The History / Impact of Drum Machines: Part 1

In a previous blog entry I mused about the impact of drum machines on musicians, specifically: how many musicians have lost work because of drum machines? How would we even begin to answer that question? How would we even measure what constitutes work? What records would we use to assess how much work was or wasn’t lost over the period of decades?

Let's try a thought experiment to assess the situation, starting with top forty hits over the course of several decades. This gives us a quantifiable, manageable data set. However, this data set is also highly selective, in that it does not begin to scratch the surface of the great variety of music out there that is not measured by the Billboard top forty.

That being said, in the interest of exploring the issue, let's continue the thought experiment. Functionally speaking, when taking the charts into account, drum machines have zero influence before 1969, when Robin Gibb's "Saved by the Bell" hit #2 in the UK. I think it would be fair to say that over the next ten to fifteen years drum machines remained somewhat of a rare novelty in recorded music. It is also worth noting that even when drum machines were used, they were at times used in addition to acoustic drum sets: both "Heart of Glass" (1978) by Blondie and "In the Air Tonight" by Phil Collins combined a Roland CR-78 drum machine with live acoustic drums.

Drum machines also began to play a role outside of mainstream pop and rock. Jazz visionary Miles Davis started using a drum machine live with his band in 1974, with percussionist James Mtume performing the machine. "Rockit" (1983) by jazz keyboardist Herbie Hancock not only used an Oberheim DMX drum machine, but also percussion provided by turntablists Grand Mixer D.ST and Grandmaster Caz. In 1976, French composer Jean-Michel Jarre released his album Oxygène using a Korg Mini-Pops 7 drum machine. With this album Jarre was creating a vision of what an electronic / synthetic approach to making music could be. While other composers, perhaps most notably Wendy Carlos, Jean-Jacques Perrey, and Gershon Kingsley, had already pioneered such an approach, Jarre was amongst the first of these pioneers to incorporate a drum machine.

As you may have gathered, I find the history of drum machines to be very interesting, and I am easily sidetracked. In the interest of having posts of manageable sizes, I will leave it here for now, and come back to the topic again soon.

Musings on Midjourney AI

There are some very clear and just criticisms of AI-generated art. For instance, there are the copyright issues (see https://www.theverge.com/23444685/generative-ai-copyright-infringement-legal-fair-use-training-data for example), the race issues (see https://futurism.com/dall-e-mini-racist or https://www.vox.com/recode/23405149/ai-art-dall-e-colonialism-artificial-intelligence), as well as others. There are also alarmist positions that AI art generators will take away jobs from working artists. While it is easy to be sympathetic with such a concern, I have not seen any actual data or evidence that points to such a trend in any measurable way. I don't want to be sidetracked in this post, but perhaps in a future post I might explore the question of whether drum machines have impacted job opportunities for percussionists, which may currently be the closest comparison for which we have a decades-long track record.

Regardless of such concerns, I believe these technological tools are here to stay, and any effort to put the AI genie back in the bottle will likely be unsuccessful. Even if large groups of nations ban or significantly curtail the legality of AI art generators, the internet has no boundaries, and any nation that lacks such regulation would likely become the base for continued activity. Therefore, for me personally, the best course of action is to engage with the technology, discover what it is and is not good at, and determine whether there are useful applications for AI-generated art. While it will likely take decades to fully digest the ability of AI art generators to transform culture (for both good and ill), I believe there's no better time to start than the present. As one might imagine, my route into the topic of AI art is through the lens of what it could mean for working musicians.

We live in a visual society. Most individuals are more adept at processing visual stimuli than they are at processing sound. This presents challenges for musicians.

Social media sites such as Instagram are nearly a necessary component of a working musician's self-promotion strategy. That being said, such platforms are oriented toward sharing visual images more than audio. When you do want to share audio on Instagram or TikTok, it typically must be packaged with a visual component as part of a video file. There are certainly tools for adding images or titles to audio for social media; however, because such tools are widely adopted, they can make one's videos look very similar to thousands of others on a platform.

Furthermore, most social media experts advise posting at least once per day, and often three or more times, in order to engage with an audience. This labor is multiplied by the number of social media sites you use. Even for musicians who eschew social media, there is frequently a need for visual content for album covers and concert posters. While there are musicians who enjoy creating visual content, time spent creating it often comes at the expense of writing, recording, or performing.

There are many time-effective solutions for content creation for musicians: leaving a camera set up in your studio at all times, keeping a corner of your house or apartment ready to go (clean, well lit, uncluttered) for quick video posts, or putting a phone mount on your dashboard so you can post from your car with little effort. That being said, AI art generators can be another tool musicians use to help manage their visual image and social media presence.

When thinking of visual content for social media feeds, it is useful to think of an artist's brand. Most often it is convenient for an artist's brand to match their personality, so they don't have to think very hard about whether a given image is consistent with that brand. Fortunately, my artist brand and personality both pretty much reduce to "eccentric weirdo." AI art generators can be used to create images consistent with many, but perhaps not all, visual brands. Those interested in experimenting with such tools to generate content for musicians may want to spend some time in a Facebook or Reddit group devoted to AI-generated art to see what AI art generators do well and what they do not. Suffice it to say these tools are improving continually, so if their quality and abilities seem insufficient for your needs or tastes, you may want to check back in a few weeks or months to see how they have progressed.

In future posts I may offer up some of my personal experiences engaging with these tools, but for now, I leave this as an introduction to this vein of exploration.

Landscape Update: January 8th, 2023

My grant project for 2022 was very successful. Ultimately it resulted in the recording, editing, and mixing of 33 tracks, three more than was proposed. Seven of these tracks were recorded by Michael DeQuattro, five were recorded by Carl Bugbee, and two were recorded by Dustin Patrick. All of the synthesizer, guitar, and bass tracks for the project have been recorded. It should only take one more grant cycle to complete the project. Four of the movements have only one track left to record, another four only need two more tracks for completion, while five have three tracks remaining. I plan on taking 2023 off from the project, but hope to return to it in 2024. In the meantime, I leave you with a fresh mix of Landscape 13: River that includes Michael DeQuattro’s drum set recording and my bass part in addition to Carl Bugbee’s guitar recording.

Landscape Update: December 3rd, 2022

November has yet again been an unproductive month for me, but Michael DeQuattro recorded two drum parts: Landscape 9: Desert and Landscape 10: Rocky Coast. I hope to get at least one more recording in during this calendar year. This month I'll leave you with an updated version of Landscape 11: Farmland, which includes my harmonica and bass tracks.

Landscape Update: November 12th, 2022

October was not a very productive month for me. However, I had previously rewritten the piano part of Landscape 2: Snow as a vibraphone part, and percussionist Dustin Patrick made a great recording of it. That recording also means I've reached the quota of 30 tracks recorded for this grant cycle. In the month and a half I have left on the grant I hope to get at least two more tracks recorded; on my end that will likely mean recording some harmonica tracks. This month I'll leave you with an updated version of Landscape 4: Sand Dunes, including my bass tracks and a new recording of the drum set part from Michael DeQuattro.

Landscape Update: October 9th, 2022

October was a more productive month than September. Michael DeQuattro recorded the drum part for Landscape 4: Sand Dunes, and I managed to eke out a harmonica recording for Landscape 11: Farmland. I'm hoping to get one or two more harmonica parts recorded during the remainder of October. Until then I'll leave you with the current realization of Landscape 2: Snow, now featuring my bass part and Michael DeQuattro's drum part.