Landscapes Update: February 1st, 2020

During January I revised Landscape 1: Forest by adding a musique concrète layer. The sound sources I used include horn recordings from Landscape 7: Mountains, as well as bass harmonica recordings from Landscape 4: Sand Dunes. In all cases, I used Audacity to change the pitch level, as well as to stretch out the samples. My plans for February include revising the pedal steel part for Landscape 2: Snow. I’ll leave you with an updated realization of Landscape 1: Forest, including the new musique concrète layer.

Feedback in FM Synthesis

FM synthesis can be difficult to understand. Those of us who spent time programming a Yamaha DX7, the leading FM synthesizer of the 1980s, also know how confusing it can be to program. Fortunately for those who are nostalgic for classic synthesizers of the 1980s, Digital Suburban developed Dexed, a DX7 emulator.

The good news is that Dexed works just like a DX7, allowing you to port over classic patches. The bad news is that Dexed works just like a DX7, in that it can still be confusing and awkward to program. However, the better you understand FM synthesis, the better equipped you’ll be to tackle Dexed.

In this post we’ll investigate feedback, which in FM synthesis is when some of the output of an operator is fed back to modulate itself. In a DX7, there are eight levels of feedback available (0-7 inclusive). No feedback is present at level 0, while at level 7 there is (presumably) 100% feedback.
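
To get an intuition for what feedback does, here is a minimal numpy sketch of a self-modulating operator. The function name and the beta parameter are my own illustration, not Yamaha’s implementation: the DX7 maps its 0-7 feedback levels onto an internal scaling factor (and reportedly averages the last two output samples to tame oscillation), which this toy version skips.

```python
import numpy as np

def feedback_operator(freq=440.0, beta=0.5, sr=44100, dur=1.0):
    """Single FM operator modulating itself: y[n] = sin(phase[n] + beta * y[n-1]).
    beta is a hypothetical 0..~1.5 feedback amount, not the DX7's 0-7 scale."""
    n = int(sr * dur)
    phase = 2 * np.pi * freq * np.arange(n) / sr
    y = np.zeros(n)
    prev = 0.0
    for i in range(n):
        y[i] = np.sin(phase[i] + beta * prev)  # feed the last output back into the phase
        prev = y[i]
    return y

# As beta grows, the sine leans toward a sawtooth; push it far enough and it turns noisy.
tone = feedback_operator(beta=1.0)
```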

I tested feedback in Dexed by using algorithm 32, which is the only algorithm in which a carrier modulates itself. I turned off all the operators except for operator six, which is set at full volume, and at a ratio of 1.00 (the first harmonic). Each pitch is at A4 (440 Hz) with full velocity.

It is interesting to see and hear the results of the test. Predictably, at feedback level 0, a simple sine wave results. At feedback level 1 the second harmonic starts to appear at less than 1/4 the strength of the first harmonic. At feedback level 2, this second harmonic is somewhat stronger (at approximately 1/4 strength). At feedback level 3, the first four harmonics are present, with the strength of each being about 1/3 the strength of the previous harmonic. At feedback level 4, the harmonic spectrum of the first eight partials of a sawtooth wave becomes recognizable. We get the first 18 partials at level 5. At level 6 we get what could be called a hyper sawtooth wave, with a very strong peak at partial 34, and lesser peaks running up from partials 26 through 46. Finally, at level 7 we get a white noise spectrum with added strong partials at the first two harmonics.

This analysis is borne out when looking at the resulting waveforms in Audacity. We start with a pure sine tone, and with each increase of the level the sine wave leans a bit further to the left. By feedback level 3, a smoothed sawtooth wave is clearly visible. At level 5 we see a pretty close approximation of a sawtooth wave. The waveform at level 6 appears to be 34 periods of a sawtooth wave shaped into a larger sawtooth contour at the frequency of the first harmonic. Furthermore, we get a significant amount of positive-side DC offset in the waveform, leading to some distortion (we had actually gotten some DC offset at level 5 as well). The waveform at level 7 has the clear profile of white noise, though it seems to have occasional fragments of a noisy square wave. Interestingly enough, we also get a small amount of negative-side DC offset.

Ultimately, what we learn is that feedback shapes an oscillator’s sine wave into a sawtooth wave, peaking at level 5, moving into a hyper sawtooth wave at level 6, and becoming largely white noise at level 7.

Landscapes Update: January 1st, 2020

So I finished composing for the Landscapes project on December 14th, 2019. Or did I? My wife, who teaches writing, would tell anyone that the key to good writing is revision. Any part that has already been recorded I will consider to be finished, but I plan on making at least one revision every month to one of the Landscapes movements. One of the things I may add is a layer of musique concrète to some movements, as it may make the pieces more marketable to festivals and conferences, and it’ll add another layer of timbral interest.

I plan on writing a grant for 2021 that will fund recording efforts for the Landscapes project. Until then I intend on continuing to record parts for the movements on my own. Accordingly, in December I finished a recording of the bass part for Landscape 5: Marsh, which is included below. Today I have made a YouTube playlist of the Landscapes project, so if you wish to hear them continuously, you may. I will continue to post updates on the Landscapes project every month or so, and will update the playlist as I add more recordings. Anyway, as promised, here is a recording of Landscape 5: Marsh featuring Carl Bugbee (from the prominent Rhode Island cover band Take it to the Bridge) on electric guitar, and myself on electric bass.

Low Pass Filter Demonstration

Of the various filters used in subtractive synthesis, the low pass filter is by far the most commonly used. Accordingly, it is useful to examine how this filter alters sound. To that end, I’ve made a couple of videos that demonstrate three different filters in Logic Pro’s Retro Synth instrument.

Before getting too deep into the process, I’ll start with some basic information. A low pass filter attenuates frequencies above a set center frequency. Filters are often described in terms of their slope, that is, the amount that higher frequencies are attenuated. Slope can be described in terms of decibels per octave. Thus, a 24dB filter dampens frequency content by 24 decibels per octave. To put it another way, if the center frequency is set at 100 Hz, audio at 200 Hz should be attenuated by 24 decibels, while audio at 400 Hz should be attenuated by 48 decibels. Thus, the higher the slope, the more effective the filter is at attenuating filtered frequency content. Slope can also be described in terms of poles, with each pole translating out to 6dB per octave. Accordingly, a 24dB filter is also called a 4 pole filter, while a 12dB filter is called a 2 pole filter.
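
If it helps to see the arithmetic, here is a small sketch (the function name is mine) that reproduces the idealized attenuation figures quoted above:

```python
import math

def lowpass_attenuation_db(freq_hz, center_hz, slope_db_per_octave=24):
    """Idealized attenuation above the center frequency for a given slope."""
    if freq_hz <= center_hz:
        return 0.0  # passband: untouched in this idealized model
    octaves_above = math.log2(freq_hz / center_hz)
    return slope_db_per_octave * octaves_above

# The example above: 24 dB at 200 Hz and 48 dB at 400 Hz,
# for a 100 Hz center frequency on a 24dB (4 pole) filter.
print(lowpass_attenuation_db(200, 100))  # 24.0
print(lowpass_attenuation_db(400, 100))  # 48.0
```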

The three filters demonstrated in these videos are a 24dB low pass (described as being Lush), a 12 dB low pass (described as being Creamy for some unknown reason), and a 6dB low pass (described as being Lush). Each is demonstrated with a 4 second, 55 Hz sawtooth wave (A1, where C4 is middle C). In each pass, the center frequency is swept up from the lowest to the highest frequency setting for the filter. Thus, we hear harmonics add in over the course of four seconds.

Additionally, these videos also demonstrate how the filters in question respond to different resonance settings, which raises the question: what on God’s green earth is resonance? Resonance feeds the audio at the center frequency of the filter back through the filter. At moderate settings this can allow harmonics to be accentuated when the center frequency matches the frequency of a sound’s harmonic. At very high settings quality analog filters self resonate, which means they produce a sine wave at the center frequency even when no sound is patched into the filter. Because resonance creates a peak at the center frequency, it can increase the perceived slope of a filter. Each video features nine passes, three for each filter (24dB, 12dB, and 6dB respectively). The first pass of each group features no resonance, while the second has the resonance set at 50%, and the final has the resonance set at 100%.
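
For the curious, one standard way resonance is implemented digitally is to feed a filter’s band-pass state back into its input. The following is a rough sketch of the classic Chamberlin two-pole state-variable filter, and assumes nothing about how Logic actually codes its filters:

```python
import numpy as np

def svf_lowpass(x, cutoff, resonance, sr=44100):
    """Chamberlin state-variable low pass (12 dB/octave) sketch.
    resonance in 0..~0.9; less damping means more energy at the
    center frequency recirculates, producing the resonant peak."""
    f = 2 * np.sin(np.pi * cutoff / sr)  # frequency coefficient
    damp = 2 * (1 - resonance)           # low damping = strong resonance
    low = band = 0.0
    out = np.zeros(len(x))
    for i, s in enumerate(x):
        low += f * band
        high = s - low - damp * band     # band-pass state fed back here
        band += f * high
        out[i] = low
    return out
```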

What do we learn from these videos? While it would be technically incorrect to say that these filters all self resonate, we can say that they are coded to emulate self resonating filters, so for all intents and purposes, these filters are functionally self resonating. Thus, when the resonance is turned up to 100% we hear a sine tone sweep up the entire frequency range of the filter in addition to the filtered 55 Hz sawtooth wave. Furthermore, we can see that sweep move at a constant rate in Logic’s graphic equalizer, confirming that the filter responds linearly in pitch space, or exponentially in frequency space. The resulting waveform is basically a sine wave laid out over the structural form of a longer period sawtooth waveform. One odd thing we notice is that the 12dB (Creamy) filter peaks severely when the resonance is turned up to 100%. I found this to be true at every key velocity.

We also hear that the filters effectively accentuate harmonics when the resonance is set at 50%. This allows us to hear the exponential curve of the filter. As the center frequency moves up linearly in pitch space, it accentuates increasing numbers of harmonics, since more harmonics are grouped within each successive octave as you sweep up the frequency range.

We can also hear and see how much more effective the higher slope filters are than the lower slope filters. We can see how the higher slope filters effectively squelch higher frequencies when the center frequency is low. Likewise, we can see how much more curved the output waveform is when the center frequency is low.

Here we can see the waveforms as each filter is tested . . .

Here we see the spectral analysis of each tone as it evolves in Logic Pro . . .

So Much Noise

Did you know that there is a technical definition of noise? Did you know that there are six main colors of noise? The most common type of noise is white noise, which consists of random fluctuations such that there is equal energy content per bandwidth. This can be thought of as being similar to a flat frequency response. Pink noise consists of random fluctuations with equal energy per octave. Brown (also called Red) noise consists of random fluctuations where the energy level of each bandwidth is related to the squared inverse of the frequency (1/f²). When listening to these three types of noise, pink and brown noise sound progressively lower in frequency than white noise. That is because more of their energy is concentrated in lower frequencies in comparison to white noise.

Blue noise features energy levels that are proportional to frequency, resulting in a 3dB increase per octave. Violet (or Purple) noise utilizes energy levels that are proportional to the square of the frequency, resulting in a 6dB increase per octave. When comparing blue and violet noise to white noise, they will sound higher in frequency than white noise, as increasing amounts of their energy are concentrated in higher frequencies. Finally, Grey noise is basically white noise that has been filtered to correspond with equal loudness curves, so that while the energy level of each bandwidth will not be measurably equal, it will be perceived by human beings as being the same loudness.
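
Since these colors are all defined by how power scales with frequency, you can approximate them by reshaping white noise in the frequency domain. Here is a rough numpy sketch (the exponents act on amplitude, so they are half the power-spectrum exponents; grey noise is omitted, since it needs an equal loudness curve rather than a simple power law):

```python
import numpy as np

sr, n = 44100, 4 * 44100
rng = np.random.default_rng()

white = rng.standard_normal(n)            # equal energy per hertz (flat spectrum)

def shape_noise(noise, exponent):
    """Scale each frequency bin's amplitude by f**exponent."""
    spectrum = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(len(noise), 1 / sr)
    freqs[0] = freqs[1]                   # avoid dividing by zero at DC
    spectrum *= freqs ** exponent
    out = np.fft.irfft(spectrum, len(noise))
    return out / np.max(np.abs(out))      # normalize to +/-1

pink   = shape_noise(white, -0.5)   # -3 dB per octave: equal energy per octave
brown  = shape_noise(white, -1.0)   # -6 dB per octave: power ~ 1/f^2
blue   = shape_noise(white,  0.5)   # +3 dB per octave
violet = shape_noise(white,  1.0)   # +6 dB per octave
```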

To demonstrate white noise, I generated four seconds in Logic Pro’s Retro Synth. You can listen to the results below. On the first pass, the waveform is displayed in Audacity, on the second pass it is displayed as a spectrum in Logic Pro.

Landscapes Update: December 1st, 2019

Landscape 12: Autumn Forest is complete, and I have started the first phrase of Landscape 13: River, which will be the last piece in the series. I edited and mixed an orchestral reading of Landscape 7: Mountains. The reading, which was done in early November, was by the Musiversal Lisbon Orchestra (https://www.musiversal.com/).

There have been some changes on the Musiversal front. They are discontinuing the 30 piece Lisbon orchestra, and are adding a different 30 piece orchestra. This comes with some good news and some bad news. The bad news, as a consumer, is that they are raising their prices a bit. However, this is really good news: when you work out how little the musicians were getting paid, they really do deserve more money. On the good side of things, they are allowing composers to purchase only seven minutes of time again, rather than having a 14 minute minimum, making it a bit more economical.

They’re also changing the instrumentation a bit. The new 30 piece orchestra only has 2 horns instead of 4, but they are adding a harp and percussionist. I’m actually pretty enthusiastic about that change. I don’t get to write for harp much, and who doesn’t love some timpani? Accordingly, I recomposed the orchestral part I wrote for Landscape 10: Rocky Coast, which I will hopefully have read in late winter 2020.

I’ll leave you with the new realization of Landscape 7: Mountains with the added orchestral part, as well as Carl Bugbee’s guitar tracks. This piece was a bit tricky. Four phrases in the work have orchestral backing. Two of these feature a dotted quarter hemiola, so I rewrote these to be in a compound meter at a different tempo to avoid syncopation in the orchestral part. Another difficulty is that the piece is in Gb major. However, only one of the phrases was easily notated in concert Gb major. Given instrumental transposition, it made the most sense to notate the other three phrases in E major, and add sharps where needed. Ultimately it made the most sense to write each phrase with a measure of rest for the entire orchestra between phrases, and to put the phrases in a different order in the arrangement than they appear in the piece. Even though two of the phrases segue into each other, it was easier to have the orchestra record them in separate passes, and edit them together in Logic Pro.

Oscillator Sync in Subtractive Synthesis

Did you ever wonder what oscillator sync does in subtractive synthesis? Simply put, in sync mode oscillator two restarts its waveform every time oscillator one restarts its waveform. That’s pretty simple to understand, but it is a bit trickier to visualize, and harder still to predict what the auditory outcome will be.
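
Here is a naive numpy sketch of the idea (my own toy code, aliasing and all): the synced oscillator’s phase is computed relative to the start of the master’s current cycle, so it resets whenever the master wraps around.

```python
import numpy as np

def hard_sync_saw(master_freq, slave_freq, sr=44100, dur=1.0):
    """Slave sawtooth restarts whenever the master completes a cycle (hard sync)."""
    t = np.arange(int(sr * dur)) / sr
    master_phase = (master_freq * t) % 1.0          # master ramp, 0..1 each cycle
    cycle_start = t - master_phase / master_freq    # time the current master cycle began
    slave_phase = (slave_freq * (t - cycle_start)) % 1.0
    return 2 * slave_phase - 1                      # sawtooth in -1..1

synced = hard_sync_saw(55.0, 137.0)  # slave at a non-integer ratio of the master
```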

Thus, I have done a test of oscillator sync using Logic Pro’s Retro Synth. Each test involves two passes, each of which is 16 seconds long. Both passes use 55 Hz sawtooth waves (A1, where middle C is C4). In the first pass, we are listening only to the synced oscillator. In the second pass, we hear a 50/50 mix of oscillators one and two.

In both cases, however, I am automating the sync value. For all intents and purposes, each pass starts out with the frequency of the synced oscillator matching that of oscillator one, and gradually increases until, at the end of the 16 second pass, the frequency of oscillator two is about sixteen times the frequency of oscillator one.

Watching the audio while it plays in Audacity (below), we see on the first pass long period sawtooth waves gradually shorten into shorter period sawtooth waves. This change is audible as an apparent rise in frequency. On the second pass, you’ll see these increasingly shorter period sawtooth waves superimposed on the long period sawtooth wave of the first oscillator.

As we watch a spectral analysis of the sound displayed in an EQ plugin on the output channel of Logic Pro (below), we will see each successive harmonic of the 55 Hz fundamental rise in volume as the first pass progresses. You’ll notice that the rate of those harmonic peaks increases over the course of the pass. This illustrates that the sync knob of Retro Synth is exponential in nature, that is, it moves up the frequency spectrum in consistent octaves, not consistent frequency bands. For instance, in the first octave of motion (55 Hz through 110 Hz), we have two harmonics represented (55 Hz and 110 Hz). In the second octave of motion (110 Hz through 220 Hz), we have three harmonics represented (110 Hz, 165 Hz, and 220 Hz). In the next octave (220 Hz through 440 Hz) we have five harmonics represented (220 Hz, 275 Hz, 330 Hz, 385 Hz, and 440 Hz), and so forth. Thus, more harmonics are presented in the same span of time, giving the aural impression of speeding up. You should hear these harmonics cycle through the harmonic series. On the second pass, you’ll see the same thing, only with the fundamental 55 Hz tone constantly there, softening the feeling of the frequency increase a bit.

Ultimately, the increased angularity of the sound waves, as seen in Audacity, results in richer harmonic content (as seen in the harmonic analysis). This demonstrates the point of oscillator sync in subtractive synthesis, namely to create richer harmonic content than what is available in the basic waveforms of subtractive synthesis.

Subtractive Synthesis Waveforms in Logic Pro’s Retro Synth

If you’re like me, you may wonder: how accurate are the waveforms in Logic Pro’s Retro Synth subtractive synthesis emulator? It turns out they’re pretty accurate. I tested the sine, triangle, sawtooth, square, and pulse waves. At first glance, Retro Synth seems to only offer triangle, sawtooth, square, and pulse waves (noise as well, but that’s for another day) . . .

However, if you look at the amplifier portion of the emulator, there is a knob labeled “Sine Level.” Thus to get a sine wave, you have to pull down the filter CF all the way to the bottom, and pull up the Sine Level . . .

For the test I put in a whole note with a key velocity of 100 for each waveform. I used the note A1 (middle C=C4), resulting in a 55 Hz tone. You can see and hear the results in this video . . .

Note however that there are some weird artifacts during the square wave, which come from data compression.

Those of you who know your waveforms know that a sine tone is a pure tone that has no overtones (harmonics). A triangle wave is a sum of all the odd harmonics, where the fundamental is harmonic 1, with the amplitude of each partial being 1/n². A sawtooth wave features all harmonics with an amplitude of 1/n. A square wave includes all the odd numbered harmonics with an amplitude of 1/n. A pulse wave is a variable square wave, and its harmonic content is reliant upon the width of the pulse. I used a graphic equalizer in Logic Pro to display the harmonic content of each waveform.
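
Those recipes are easy to verify by building each waveform additively. Here is a small numpy sketch, assuming 63 partials is enough for a recognizable shape (a true triangle also alternates the sign of every other partial, but the magnitude spectrum is the same):

```python
import numpy as np

def additive(freq, harmonics, amp_fn, sr=44100, dur=1.0):
    """Sum harmonics of `freq`; amp_fn(n) gives each harmonic's amplitude."""
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for n in harmonics:
        out += amp_fn(n) * np.sin(2 * np.pi * n * freq * t)
    return out

f = 55.0  # A1, as in the test above
saw      = additive(f, range(1, 64),    lambda n: 1 / n)     # all harmonics, 1/n
square   = additive(f, range(1, 64, 2), lambda n: 1 / n)     # odd harmonics, 1/n
triangle = additive(f, range(1, 64, 2), lambda n: 1 / n**2)  # odd harmonics, 1/n^2
```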

Landscapes Update: November 3rd, 2019

Hello all!  I have finished my writing quota for October, so Landscape 11: Farmland is complete. I’ve started working on November’s goal, Landscape 12: Autumn Forest, completing the first phrase. I have also written an orchestral part for Landscape 10: Rocky Coast, which will likely get recorded in Spring 2020.

Last month I finished spending the budget for my grant, with Carl Bugbee recording the guitar part to Landscape 8: Palm Glade, and Nara Shahbazyan recording cello parts for Landscapes 2 and (Snow and Beach) respectively. Next weekend the orchestral part for Landscape 7: Mountains will get recorded by Musiversal’s Lisbon Orchestra. Thus, there will be some great musical updates that I’ll share with you all in the future, but for this month, I’ll share with you Landscape 8: Palm Glade featuring Carl Bugbee’s guitar tracks!

Mixing Tips

Students often ask me for tips about how to mix a song effectively. Mixing is a full-blown art in and of itself. There is no one way to mix a track well, nor are there rules about how to mix a track correctly. However, there are decent guidelines and best practices on how to proceed and get started.

First of all, you will want to use a quality program like Pro Tools or Logic Pro. There are numerous other programs that will allow you similar amounts of control over your work. Pro Tools is considered the industry standard for media creation. Personally, I use Logic Pro, as I feel it has the best value in terms of features per dollar. Lesser programs like Audacity and GarageBand don’t give you sufficient control of your tracks to fine tune a mix. That being said, if all you can afford right now is a tool like Audacity or GarageBand, then do the best that you can with the following guidelines, and save your pennies up for a better tool.

Put individual EQs (Graphic Equalizers) on all of your recorded tracks. Use them to filter out all of the frequencies below the given instrument’s lowest frequency, as well as all of the frequencies above its highest overtones. In Logic Pro, you can click on the analyzer button in EQ plugins so you can see what frequencies are active in a track, which can help you filter out external noise. You can make small boosts of frequency bands that make the instrument sound better, or more characteristic. You will likely need to pull down the master gain in the EQ a bit if you’ve done any boosting, to prevent distortion / clipping. Any synthesized / sampled materials or loops that are resident in your DAW (Digital Audio Workstation) software don’t really need an EQ, unless you are going for a particular effect. For more information on the effective use of EQ you can read my post on “tips for using equalization.”
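
As an offline illustration of that trimming step, here is a rough band-pass sketch using scipy (the cutoff values are per-instrument judgment calls, not fixed rules):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def band_trim(track, low_hz, high_hz, sr=44100):
    """Roll off everything below the instrument's lowest note and above
    its highest overtones."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, track)

# e.g. a hypothetical electric bass track: fundamentals start near 41 Hz (E1)
# trimmed = band_trim(bass, 35, 5000)
```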

Put compressors on any recorded voice, guitar, and bass tracks. The default setting I use is a 3:1 ratio. Make sure that your threshold is low enough that compression kicks in when the track is at its loudest, but not so low that it is kicking in most of the time. Your compression plugin should give you visual feedback for when the compression is kicking in. If you are using both compression and EQ on an input channel, place the compressor before the EQ, as the compressor can undo some of the dynamic changes you set in your EQ.
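
The ratio arithmetic is simple enough to sketch; the threshold value below is illustrative, not a recommendation:

```python
def compressed_level_db(input_db, threshold_db=-18.0, ratio=3.0):
    """Simple compressor math: above the threshold, every `ratio` dB
    of input yields only 1 dB of output."""
    if input_db <= threshold_db:
        return input_db                      # below threshold: untouched
    return threshold_db + (input_db - threshold_db) / ratio

# With a 3:1 ratio and a -18 dB threshold, a -6 dB peak comes out at -14 dB.
print(compressed_level_db(-6.0))  # -14.0
```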

For music in a popular vein (pop, rap, hip-hop, heavy metal, rock, etc.), you can put a compressor on the output channel as well. This will help avoid clipping on transients in your final mix. Avoid using compression on the output channel in classical, jazz, or folk music, as overuse of compression diminishes dynamic contrasts between sections (for that reason, I personally avoid using it in popular tracks as well).

When you have more than one track of a given instrument playing simultaneously, use the opportunity to pan them to widen your stereo image. Pan multiple instrumental tracks as far as you can without the panning sounding obvious or artificial (unless you want it to sound artificial). A nice starting place to try is panning halfway, which would be + and – 32 on a scale of +/- 64. A more subtle setting might be + and – 16 on the same scale. Sometimes panning hard left and hard right can work well for some tracks. In addition to panning, you can also use EQ to help tracks using the same instrument sound distinct from one another.

In a typical mix, the lead vocals and bass should be centered. If you are using a single track for the drums, that should be centered as well. If you have recorded an acoustic drum set using a standard four mic setup, put the kick drum in the center, pan the snare slightly to the right, and pan the right overhead mic (audience perspective) hard right and the left overhead mic hard left. This gives you the audience perspective on the drum set. If, for whatever reason, you want the listener to hear from the drummer’s perspective, you would put the snare slightly to the left, the right overhead mic hard left, and the left overhead mic hard right.

Another mixing trick is to use your eyes in addition to your ears. Logic Pro has meter plugins, and personally, I like to put a multimeter on the output channel. This has two advantages. One is that I can see what frequencies are active. For instance, I may learn that I need to make the track brighter, that there’s not enough bass, or that there are too many mid frequencies.

The multimeter plugin also has a correlation meter, which allows you to check your stereo image. When the meter is all the way to the right, that means that the left and right channels are identical, meaning that you effectively have a mono track with no stereo image whatsoever. If it goes left of center (into the red), it means that you are starting to get some phase cancellation between your channels, which can deaden some frequency content. The ideal place for the correlation meter to be is just to the right of center, which means you have a rich stereo field that doesn’t have any phase cancellation.
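
Under the hood, a correlation meter computes something close to the Pearson correlation between the two channels. A toy sketch:

```python
import numpy as np

def stereo_correlation(left, right):
    """Phase correlation between channels: +1 = identical (mono),
    0 = fully decorrelated, -1 = opposite polarity (cancellation)."""
    return np.corrcoef(left, right)[0, 1]

# A healthy wide mix typically reads a bit above 0; negative values warn
# of frequency content that will cancel when the mix is summed to mono.
```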

Voice can be a challenge to mix. Voices often require a lot of plugins. In addition to EQ, a voice may require some pitch correction. Voices that are thin in quality can be fattened up in three ways: through doubling, delay, or reverb. Reverb is the most common way voices are fattened up in popular music. You may read my article, “Tackling Reverb,” if you need some help fine tuning your reverb settings.

Doubling is one of my favorite ways to fatten vocals. Simply put, you record the vocals twice. Pan one take either slightly or fully to the left, and the other either slightly or fully to the right. If you like the sound of doubled vocals, but didn’t have the time to do multiple takes, or one take of the vocals was significantly better than the others, you can fake the sound of doubled vocals by panning copies of the vocals hard left and hard right, and putting a short delay (around 50ms) on one of the two copies.
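
Here is the fake-double trick as a toy numpy sketch, returning the two hard-panned channels (using the 50 ms figure from above):

```python
import numpy as np

def fake_double(vocal, delay_ms=50, sr=44100):
    """Fake a doubled vocal: two copies panned hard left/right,
    one delayed by ~50 ms. Returns (left, right) channels."""
    delay = int(sr * delay_ms / 1000)
    left = vocal                                               # dry take, hard left
    right = np.concatenate([np.zeros(delay), vocal])[:len(vocal)]  # delayed copy, hard right
    return left, right
```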

If your track still sounds a bit dull after all your EQing, mixing, and balancing, you can add an exciter plugin on the output channel. Exciters by definition add volume by emphasizing high harmonic content. If you were careful to optimize your mix, you won’t have the headroom to add this effect. Thus, you can add a compressor on the output channel first, and pull the volume of the whole track down slightly after the compression, which should leave you the headroom to add an exciter. Alternately, if you feel the whole track doesn’t need an exciter, you may wish to add it to the vocals and / or the guitars to brighten the track a bit. This can be done on the input channel, or by busing to another channel.

Another way to look at mixing is to look at it as a process. It is advisable to start with the vocals. Vocals are by far the most important track in nearly all popular music, so starting with them gives you a structure to build around and to elaborate upon.

Solo out your vocals track by track. Listen to them all the way through to check for quality. Edit out lip smacks (which sound gross), and possibly even breaths. It can be fine to leave breathing in if you feel it adds to the human or emotional quality of the take. Otherwise, it is just more noise.

Add EQ, compression and any other effects you intend on using (reverb, pitch correction, etc.). Pan the vocal tracks the way you ultimately intend them to be panned. Generally speaking, put the lead vocals in the center, unless you are double tracking the vocals. You can spread multiple backing vocal tracks across the stereo spectrum, dialing it back a bit if it sounds too artificial.

Listen for distortion or clipping on each of your vocal tracks. Adding plugins often adds volume, and panning a track off center often puts a heavier load on one of the channels. Thus, if your vocals did not clip initially, they might clip after all these changes. You can adjust by bringing down the master volume in the EQ. Listen to all your vocal tracks together. Make sure the balance between lead and backing vocals is good. You can soften backing vocals by adding reverb. Again, check for clipping and distortion on the output channel. You can adjust by bringing down the master gains in the EQ.

Do the same process for every instrumental group. That is, listen to each instrument separately, editing, panning, EQing, and processing along the way. Then listen to each instrumental group together, each time checking for clipping / distortion. I tend to mix and balance the instruments in the order: voices, drums, guitars, bass, and everything else. However, it doesn’t matter which order you do them in. Once you finish an instrumental group, check it with the rest of the mix that you’ve done so far. Check the balance as well as clipping / distortion.

Once you think you’re approaching the final mix, you can check the balance using a simple trick. The volumes of the tracks should be arranged such that the most important track is slightly louder than everything else, with the volumes decreasing as importance to the mix decreases. In pop / rock music, that typically means that vocals are most important, followed by the drums, guitar, and bass in that prioritized order.

You can check this very simply by playing the mix at a very low volume. When you are listening to the track at an almost imperceptible level, you should still be able to hear the most important track (usually the vocals). Gradually increase the volume. You should hear the instruments enter one by one in order of importance. If the instruments enter in the wrong order of importance, readjust your balance, and try again until the instruments appear in the correct order of importance as you gradually turn up the volume.

Another thing you can do at this point is to put a multimeter on the output channel. Use both your eyes and ears to make sure you are using the entire frequency spectrum in a balanced way. To put this another way, check to make sure that the bass isn’t too strong, that the high frequencies aren’t too sibilant, and that there are adequately balanced mids.

You can also check the stereo balance. Are you using the entire stereo spectrum without getting any phase cancellation? Does your stereo field sound natural or subtle (unsubtle panning can be appropriate for specific effects, or for novelty arrangements)? If there are problems with either the frequency or stereo spectrums, adjust your balance, mix, and possibly the EQ for some tracks and try again.

It is always best to listen to your mix on the highest quality speakers or headphones you have available to you. However, many people listen to music in less than ideal situations, for instance using earbuds, or on a car stereo system. It’s actually a pretty good idea to also listen to your mix using one or more less than ideal playback systems.

Finally, if you have the luxury of time on your hands, put the mix away when you feel it is done, but come back to it the next day. Listen to it again. If you don’t want to change anything, it is finished, otherwise, tweak it, and come back to it the next day with fresh ears.