So Much Noise

Did you know that there is a technical definition of noise? Did you know that there are six main colors of noise? The most common type of noise is white noise, which consists of random fluctuations with equal energy in every frequency band of equal width. This can be thought of as being similar to a flat frequency response. Pink noise consists of random fluctuations with equal energy per octave. Brown (also called Red) noise consists of random fluctuations where the energy of each frequency band is inversely proportional to the square of the frequency (1/f²). When listening to these three types of noise, pink and brown noise sound progressively lower in frequency than white noise. That is because more of their energy is concentrated in lower frequencies in comparison to white noise.

Blue noise features energy levels that are proportional to frequency, resulting in a 3 dB increase per octave. Violet (or Purple) noise has energy levels that are proportional to the square of the frequency, resulting in a 6 dB increase per octave. Compared to white noise, blue and violet noise sound higher in frequency, as increasing amounts of their energy are concentrated in higher frequencies. Finally, Grey noise is essentially white noise that has been filtered to correspond with equal-loudness curves, so that while the energy level of each frequency band is not measurably equal, every band is perceived by human listeners as being the same loudness.
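If you'd like to make those spectral relationships concrete, here is a minimal Python sketch (my own, using numpy, not anything from Logic Pro) that shapes the spectrum of white noise with the power-law exponents described above. Grey noise is omitted because it requires an equal-loudness curve rather than a simple power law.

```python
import numpy as np

def colored_noise(color, n_samples, sample_rate=44100):
    """Generate colored noise by shaping a white spectrum with a power-law curve.

    Power spectral density is proportional to f**exponent:
      white: 0, pink: -1, brown/red: -2, blue: +1, violet: +2
    """
    exponents = {"white": 0, "pink": -1, "brown": -2, "blue": 1, "violet": 2}
    spectrum = np.fft.rfft(np.random.randn(n_samples))       # white noise spectrum
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / sample_rate)
    freqs[0] = freqs[1]                                       # avoid dividing by zero at DC
    # Amplitude scales as the square root of the power relationship.
    spectrum *= freqs ** (exponents[color] / 2.0)
    noise = np.fft.irfft(spectrum, n=n_samples)
    return noise / np.max(np.abs(noise))                      # normalize to +/-1

# Four seconds of pink noise at 44.1 kHz:
pink = colored_noise("pink", 4 * 44100)
```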

To demonstrate white noise, I generated four seconds of it in Logic Pro’s Retro Synth. You can listen to the results below. On the first pass, the waveform is displayed in Audacity; on the second pass, it is displayed as a spectrum in Logic Pro.

Landscapes Update: December 1st, 2019

Landscape 12: Autumn Forest is complete, and I have started the first phrase of Landscape 13: River, which will be the last piece in the series. I edited and mixed an orchestral reading of Landscape 7: Mountains. The reading, which was done in early November, was by the Musiversal Lisbon Orchestra (https://www.musiversal.com/).

There have been some changes on the Musiversal front. They are discontinuing the 30 piece Lisbon orchestra and adding a different 30 piece orchestra. This comes with some good news and some bad news. The bad news, as a consumer, is that they are raising their prices a bit. However, this is really good news: when you work out how little the musicians were getting paid, they really do deserve more money. On the good side of things, they are allowing composers to purchase only seven minutes of time again, rather than having a 14 minute minimum, making it a bit more economical.

They’re also changing the instrumentation a bit. The new 30 piece orchestra only has 2 horns instead of 4, but they are adding a harp and percussionist. I’m actually pretty enthusiastic about that change. I don’t get to write for harp much, and who doesn’t love some timpani? Accordingly, I recomposed the orchestral part I wrote for Landscape 10: Rocky Coast, which I will hopefully have read in late winter 2020.

I’ll leave you with the new realization of Landscape 7: Mountains with the added orchestral part, as well as Carl Bugbee’s guitar tracks. This piece was a bit tricky. Four phrases in the work have orchestral backing. Two of these feature a dotted quarter hemiola, so I rewrote them in a compound meter at a different tempo to avoid syncopation in the orchestral part. Another difficulty is that the piece is in Gb major, but only one of the phrases was easily notated in concert Gb major. Given instrumental transposition, it made the most sense to notate the other three phrases in E major and add sharps where needed. Ultimately, I wrote each phrase with a measure of rest for the entire orchestra between phrases, and put the phrases in a different order in the arrangement than they appear in the piece. Even though two of the phrases segue into each other, it was easier to have the orchestra record them in separate passes and edit them together in Logic Pro.

Oscillator Sync in Subtractive Synthesis

Did you ever wonder what oscillator sync does in subtractive synthesis? Simply put, in sync mode oscillator two restarts its waveform every time oscillator one restarts its waveform. That’s pretty simple to understand, but it is a bit trickier to visualize, and harder still to predict what the auditory outcome will be.

Thus, I have done a test of oscillator sync using Logic Pro’s RetroSynth. Each test involves two passes, each of which is 16 seconds long. Both passes use 55 Hz sawtooth waves (A1, where middle C is C4). In the first pass, we are listening only to the synced oscillator. In the second pass, we hear a 50/50 mix of oscillators one and two.

In both cases, however, I am automating the sync value. For all intents and purposes, each pass starts with the frequency of the synced oscillator matching that of oscillator one, and it gradually increases until, at the end of the 16 second pass, the frequency of oscillator two is about sixteen times the frequency of oscillator one.
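If you want to see the mechanics outside of RetroSynth, here is a rough numpy sketch of hard sync (my own approximation, not Logic’s implementation, and the sweep is assumed to be exponential): the slave oscillator is a naive sawtooth whose phase is reset to zero every time the master completes a cycle, while the slave’s own frequency sweeps from 55 Hz up to roughly sixteen times that over 16 seconds.

```python
import numpy as np

def hard_sync_saw(master_hz=55.0, sweep_mult=16, seconds=16, sr=44100):
    """Naive hard-synced sawtooth: the slave phase resets whenever the master wraps."""
    n = int(seconds * sr)
    t = np.arange(n) / sr
    # Slave frequency sweeps exponentially from master_hz to sweep_mult * master_hz.
    slave_hz = master_hz * sweep_mult ** (t / seconds)

    master_phase = (master_hz * t) % 1.0
    slave_phase = np.zeros(n)
    phase = 0.0
    for i in range(1, n):
        phase += slave_hz[i] / sr
        if master_phase[i] < master_phase[i - 1]:   # master just wrapped: reset the slave
            phase = 0.0
        slave_phase[i] = phase % 1.0

    master = 2.0 * master_phase - 1.0               # sawtooth from phase
    slave = 2.0 * slave_phase - 1.0
    return master, slave

master, synced = hard_sync_saw()
mix = 0.5 * (master + synced)                       # the 50/50 second pass
```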

Watching the audio while it plays in Audacity (below), we see on the first pass that long-period sawtooth waves gradually shorten into shorter-period sawtooth waves. This change is audible as an apparent rise in pitch. On the second pass, you’ll see these increasingly short-period sawtooth waves superimposed on the long-period sawtooth wave of the first oscillator.

As we watch a spectral analysis of the sound displayed in an EQ plugin on the output channel of Logic Pro (below), we will see each successive harmonic of the 55 Hz fundamental rise in volume as the first pass progresses. You’ll notice that the rate of those harmonic peaks increases over the course of the pass. This illustrates that the sync knob of RetroSynth is exponential in nature; that is, it moves up the frequency spectrum in consistent octaves, not consistent frequency bands. For instance, in the first octave of motion (55 Hz through 110 Hz), we have two harmonics represented (55 Hz and 110 Hz). In the second octave of motion (110 Hz through 220 Hz), we have three harmonics represented (110 Hz, 165 Hz, and 220 Hz). In the next octave (220 Hz through 440 Hz), we have five harmonics represented (220 Hz, 275 Hz, 330 Hz, 385 Hz, and 440 Hz), and so forth. Thus, more harmonics are presented in the same span of time, giving the aural impression of speeding up. You should hear these harmonics cycle through the harmonic series. On the second pass, you’ll see the same thing, only with the fundamental 55 Hz tone constantly present, softening the feeling of rising frequency a bit.

Ultimately, the increased angularity of the sound waves, as seen in Audacity, results in richer harmonic content, as seen in the spectral analysis. This demonstrates the point of oscillator sync in subtractive synthesis: to create richer harmonic content than what is available in the basic waveforms of subtractive synthesis.

Subtractive Synthesis Waveforms in Logic Pro’s RetroSynth

If you’re like me, you may wonder how accurate the waveforms in Logic Pro’s RetroSynth subtractive synthesis emulator are. It turns out they’re pretty accurate. I tested the sine, triangle, sawtooth, square, and pulse waves. At first glance, RetroSynth seems to offer only triangle, sawtooth, square, and pulse waves (noise as well, but that’s for another day) . . .

However, if you look at the amplifier portion of the emulator, there is a knob labeled “Sine Level.” Thus, to get a sine wave, you have to pull the filter cutoff (CF) all the way down, and pull up the Sine Level . . .

For the test I put in a whole note with a key velocity of 100 for each waveform. I used the note A1 (middle C=C4), resulting in a 55 Hz tone. You can see and hear the results in this video . . .

Note however that there are some weird artifacts during the square wave, which come from data compression.

Those of you who know your waveforms know that a sine tone is a pure tone that has no overtones (harmonics). A triangle wave is a sum of all the odd harmonics (where the fundamental is harmonic 1), with the amplitude of each partial being 1/n². A sawtooth wave features all harmonics with an amplitude of 1/n. A square wave includes all the odd-numbered harmonics with an amplitude of 1/n. A pulse wave is a variable square wave, and its harmonic content depends upon the width of the pulse. I used a graphic equalizer in Logic Pro to display the harmonic content of each waveform.
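As a back-of-the-envelope check on those harmonic recipes, here is a short numpy sketch (my own, not anything from RetroSynth) that builds each waveform by summing sine partials with the amplitudes described above. Phase details are ignored, so the waveshapes are approximate, but the spectra match the descriptions.

```python
import numpy as np

def additive_wave(shape, freq=55.0, seconds=1.0, sr=44100, n_harmonics=40):
    """Approximate classic waveforms by summing harmonics of a 55 Hz fundamental."""
    t = np.arange(int(seconds * sr)) / sr
    wave = np.zeros_like(t)
    for n in range(1, n_harmonics + 1):
        if shape == "sine" and n > 1:
            continue                        # a sine tone is just the fundamental
        if shape in ("triangle", "square") and n % 2 == 0:
            continue                        # triangle and square use odd harmonics only
        if shape == "triangle":
            amp = 1.0 / n ** 2              # odd harmonics at 1/n^2
        else:
            amp = 1.0 / n                   # sawtooth, square (odd only), and sine at 1/n
        wave += amp * np.sin(2 * np.pi * n * freq * t)
    return wave / np.max(np.abs(wave))

saw = additive_wave("sawtooth")             # all harmonics at 1/n
square = additive_wave("square")            # odd harmonics at 1/n
```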

Landscapes Update: November 3rd, 2019

Hello all!  I have finished my writing quota for October, so Landscape 11: Farmland is complete. I’ve started working on November’s goal, Landscape 12: Autumn Forest, completing the first phrase. I have also written an orchestral part for Landscape 10: Rocky Coast, which will likely get recorded in Spring 2020.

Last month I finished spending the budget for my grant, with Carl Bugbee recording the guitar part to Landscape 8: Palm Glade, and Nara Shahbazyan recording cello parts for Landscapes 2 and 6 (Snow and Beach) respectively. Next weekend the orchestral part for Landscape 7: Mountains will get recorded by Musiversal’s Lisbon Orchestra. Thus, there will be some great musical updates to share with you all in the future, but for this month, I’ll share Landscape 8: Palm Glade featuring Carl Bugbee’s guitar tracks!

Mixing Tips

Students often ask me for tips about how to mix a song effectively. Mixing is a full-blown art in and of itself. There is no one way to mix a track well, nor are there rules about how to mix a track correctly. However, there are decent guidelines and best practices on how to proceed and get started.

First of all, you will want to use a quality program like Pro Tools or Logic Pro. There are numerous other programs that will allow you similar amounts of control over your work. Pro Tools is considered the industry standard for media creation. Personally, I use Logic Pro, as I feel it offers the best value in terms of features per dollar. Lesser programs like Audacity and GarageBand don’t give you sufficient control of your tracks to fine-tune a mix. That being said, if all you can afford right now is a tool like Audacity or GarageBand, then do the best that you can with the following guidelines, and save up your pennies for a better tool.

Put individual EQs (Graphic Equalizers) on all of your recorded tracks. Use them to filter out all of the frequencies below the given instrument’s lowest frequency, as well as all of the frequencies above its highest overtones. In Logic Pro, you can click on the analyzer button in EQ plugins to see what frequencies are active in a track, which can help you filter out external noise. You can make small boosts to frequency bands that make the instrument sound better, or more characteristic. You will likely need to pull down the master gain in the EQ a bit if you’ve done any boosting, to prevent distortion / clipping. Any synthesized / sampled materials or loops that are resident in your DAW (Digital Audio Workstation) software don’t really need an EQ, unless you are going for a particular effect. For more information on the effective use of EQ, you can read my post on “tips for using equalization.”

Put compressors on any recorded voice, guitar, and bass tracks. The default setting I use is a 3:1 ratio. Make sure that your threshold is low enough that compression kicks in when the track is at its loudest, but not so low that it is kicking in most of the time. Your compression plugin should give you visual feedback for when the compression is kicking in. If you are using both compression and EQ on an input channel, place the compressor before the EQ, as a compressor placed after the EQ can undo some of the changes you set in your EQ.
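For those who like to see the arithmetic, a 3:1 ratio just means that, above the threshold, every 3 dB of input yields 1 dB of output. Here is a minimal static-curve sketch in Python; the -18 dB threshold is purely an assumed example, and real plugins also give you attack and release controls, which this ignores.

```python
def compress_db(level_db, threshold_db=-18.0, ratio=3.0):
    """Static compression curve: levels above the threshold rise at 1/ratio the rate."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A -6 dB peak with a -18 dB threshold at 3:1 comes out at -14 dB (8 dB of gain reduction).
print(compress_db(-6.0))   # -14.0
```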

For music in a popular vein (pop, rap, hip-hop, heavy metal, rock, etc.), you can also put a compressor on the output channel. This will help avoid clipping on transients in your final mix. Avoid using compression on the output channel in classical, jazz, or folk music, as overuse of compression diminishes dynamic contrasts between sections (for that reason, I personally avoid using it in popular tracks as well).

When you have more than one track of a given instrument playing simultaneously, use the opportunity to pan them to increase your stereo image. Pan multiple instrumental tracks as far as you can without the panning sounding obvious or artificial (unless you want it to sound artificial). A nice starting place to try is panning halfway, which would be + and – 32 on a scale of +/- 64. A more subtle setting might be + and – 16 on the same scale.  Sometimes panning hard left and hard right can work well for some tracks. In addition to panning you can also use EQ to help tracks using the same instrument sound distinct from one another.

In a typical mix, the lead vocals and bass should be centered. If you are using a single track for the drums, that should be centered as well. If you have recorded an acoustic drum set using a standard four mic setup, put the kick drum in the center, pan the snare slightly to the right, pan the right overhead mic (audience perspective) hard right, and pan the left overhead mic hard left. This gives you the audience’s perspective on the drum set. If, for whatever reason, you want the listener to hear from the drummer’s perspective, put the snare slightly to the left, the right overhead mic hard left, and the left overhead mic hard right.

Another mixing trick is to use your eyes in addition to your ears. Logic Pro has meter plugins, and personally, I like to put a multimeter on the output channel. This has two advantages. One is that I can see what frequencies are active. For instance, I may learn that I need to make the track brighter, that there’s not enough bass, or that there are too many mid frequencies.

The multimeter plugin also has a correlation meter, which allows you to check your stereo image. When the meter is all the way to the right, the left and right channels are identical, meaning that you effectively have a mono track with no stereo image whatsoever. If it goes left of center (into the red), it means that you are starting to get some phase cancellation between your channels, which can deaden some frequency content. The ideal place for the correlation meter to be is just to the right of center, which means you have a rich stereo field that doesn’t have any phase cancellation.
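If you’re curious what that meter is essentially computing, it is a normalized correlation between the two channels: +1 for identical channels (mono), around 0 for a wide, uncorrelated image, and negative values when the channels are partially out of phase. Here is a rough numpy sketch of the idea (my own, not Logic’s plugin, which meters over short windows rather than a whole file).

```python
import numpy as np

def stereo_correlation(left, right):
    """Correlation between channels: +1 = mono, ~0 = wide/uncorrelated, <0 = phase problems."""
    left = left - np.mean(left)
    right = right - np.mean(right)
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    return float(np.sum(left * right) / denom) if denom > 0 else 0.0

# Identical channels read +1; a polarity-flipped copy reads -1.
sig = np.random.randn(44100)
print(stereo_correlation(sig, sig))    # ~ 1.0
print(stereo_correlation(sig, -sig))   # ~ -1.0
```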

Voice can be a challenge to mix. Voices often require a lot of plugins. In addition to EQ, a voice may require some pitch correction. Voices that are thin in quality can be fattened up in three ways: through doubling, delay, or reverb. Reverb is the most common way voices are fattened up in popular music. You may read my article, “Tackling Reverb,” if you need some help fine tuning your reverb settings.

Doubling is one of my favorite ways to fatten vocals. Simply put, you record the vocals twice. Pan one take either slightly or fully to the left, and the other either slightly or fully to the right. If you like the sound of doubled vocals, but didn’t have the time to do multiple takes, or one take of the vocals was significantly better than the others, you can fake the sound of doubled vocals by panning copies of the vocals hard left and hard right, and putting a short delay (around 50ms) on one of the two copies.
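As a rough illustration of that fake-doubling trick, here is a tiny numpy sketch (my own; the variable names and the 50 ms figure are just for illustration): the dry take goes hard left and a copy delayed by about 50 ms goes hard right.

```python
import numpy as np

def fake_double(vocal, sr=44100, delay_ms=50):
    """Pan the dry take hard left and a ~50 ms delayed copy hard right."""
    delay = int(sr * delay_ms / 1000)
    dry = np.concatenate([vocal, np.zeros(delay)])        # pad to match the delayed copy
    delayed = np.concatenate([np.zeros(delay), vocal])    # copy pushed back by ~50 ms
    return np.stack([dry, delayed], axis=1)               # columns: left, right

# vocal = some mono take loaded as a numpy array at 44.1 kHz
# doubled = fake_double(vocal)
```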

If your track still sounds a bit dull after all your EQing, mixing, and balancing, you can add an exciter plugin on the output channel. Exciters by definition add volume by emphasizing high harmonic content. If you were careful to optimize your mix, you may not have the headroom to add this effect. Thus, you can add a compressor on the output channel first, and pull the volume of the whole track down slightly after the compression, which should leave you the headroom to add an exciter. Alternately, if you feel the whole track doesn’t need an exciter, you may wish to add it to just the vocals and / or the guitars to brighten the track a bit. This can be done on the input channel, or by busing to another channel.

Another way to look at mixing is to look at it as a process. It is advisable to start with the vocals. Vocals are by far the most important track in nearly all popular music, so starting with them gives you a structure to build around and to elaborate upon.

Solo out your vocals track by track. Listen to them all the way through to check for quality. Edit out lip smacks (which sound gross), and possibly even breaths. It can be fine to leave breathing in if you feel it adds to the human or emotional quality of the take. Otherwise, it is just more noise.

Add EQ, compression and any other effects you intend on using (reverb, pitch correction, etc.). Pan the vocal tracks the way you ultimately intend them to be panned. Generally speaking, put the lead vocals in the center, unless you are double tracking the vocals. You can spread multiple backing vocal tracks across the stereo spectrum, dialing it back a bit if it sounds too artificial.

Listen for distortion or clipping on each of your vocal tracks. Adding plugins often adds volume, and panning a track off center often puts a heavier load on one of the channels. Thus, if your vocals did not clip initially, they might clip after all these changes. You can adjust by bringing down the master volume in the EQ. Listen to all your vocal tracks together. Make sure the balance between lead and backing vocals is good. You can soften backing vocals by adding reverb. Again, check for clipping and distortion on the output channel, and adjust by bringing down the master gain in the EQ if needed.

Do the same process for every instrumental group. That is, listen to each instrument separately, editing, panning, EQing, and processing along the way. Then listen to each instrumental group together, checking for clipping / distortion each time. I tend to mix and balance the instruments in this order: voices, drums, guitars, bass, and everything else. However, it doesn’t matter which order you do them in. Once you finish an instrumental group, check it with the rest of the mix that you’ve done so far, checking the balance as well as clipping / distortion.

Once you think you’re approaching the final mix, you can check the balance using a simple trick. The volumes of the tracks should be arranged such that the most important track is slightly louder than everything else, with the volumes decreasing as their importance to the track decreases. In pop / rock music, that typically means that vocals are most important, followed by the drums, guitar, and bass, in that order.

You can check this very simply by playing the mix at a very low volume. When you are listening to the track at an almost imperceptible level, you should still be able to hear the most important track (usually the vocals). Gradually increase the volume. You should hear the instruments enter one by one in order of importance. If they enter in the wrong order, readjust your balance and try again until the instruments appear in the correct order as you gradually turn up the volume.

Another thing you can do at this point is to put a multimeter on the output channel. Use both your eyes and ears to make sure you are using the entire frequency spectrum in a balanced way. To put this another way, check to make sure that the bass isn’t too strong, that the high frequencies aren’t too sibilant, and that there are adequately balanced mids.

You can also check the stereo balance. Are you using the entire stereo spectrum without getting any phase cancellation? Does your stereo field sound natural or subtle (unsubtle panning can be appropriate for specific effects, or for novelty arrangements)? If there are problems with either the frequency or stereo spectrum, adjust your balance, mix, and possibly the EQ for some tracks, and try again.

It is always best to listen to your mix on the highest quality speakers or headphones available to you. However, many people listen to music in less than ideal situations, for instance using earbuds or a car stereo system. It’s actually a pretty good idea to also listen to your mix on one or more less than ideal playback systems.

Finally, if you have the luxury of time on your hands, put the mix away when you feel it is done, but come back to it the next day. Listen to it again. If you don’t want to change anything, it is finished, otherwise, tweak it, and come back to it the next day with fresh ears.

Landscapes Update: October 6th, 2019

Howdy! I finished my writing quota for September, and have written the first phrase towards my October quota. Thus, Landscape 10: Rocky Coast is finished, and Landscape 11: Farmland has been started. Unfortunately I had little time to record, but I was able to incorporate the horn parts for Landscape 4: Sand Dunes into the recording (the horn parts were recorded separately by Musiversal, as their horn players were not present at the orchestral reading).

I was also able to incorporate Carl Bugbee’s guitar recordings for Landscape 7: Mountains into the mix. I have an orchestral reading for this movement scheduled in November, but for now, I’ll leave you with the current version of Landscape 7, including Carl’s guitar tracks . . .

Landscapes Update: September 2nd, 2019

Hello all. I did well on my writing quota for the month of August. I finished Landscape 9: Desert, and am already 1/3 of the way through my September quota of work on Landscape 10: Rocky Coast. Unfortunately I had very little time to do any recording, which I hope to get back to soon. So, I’ll leave you with a realization of Landscape 6: Beach that features Carl Bugbee, from Rhode Island’s premier cover band Take it to the Bridge, on guitar, and myself on bass.

Landscapes Update: August 12th, 2019

So, I’m a bit behind in my update. July was a productive month though. Not only did I finish Landscape 8: Palm Glade, but I edited and mixed the Musiversal recording of the orchestra part for Landscape 4: Sand Dunes. Unfortunately, the horn players were absent for this recording session, which doesn’t matter too much as the horn parts were not super crucial to the orchestration. However, Musiversal has promised to email me recordings of the horn parts in the near future, and the isolation on the recordings should be excellent.

As noted in a previous update, the Musiversal 30 piece orchestra has some issues with reading syncopated rhythms. Both orchestral excerpts used in Landscape 4 have orchestral chords once every three eighth notes in common time, forming a hemiola. Knowing that this rhythm would pose a problem for the ensemble, I rewrote the orchestra part, which was originally at 120 bpm, to be at 80 bpm in a compound meter. That way, the chords that hit once every three eighth notes are now consistently on the beat. When mixed with the other recordings, nobody is the wiser. I was also able to take some of the chord hits out of the mix and place them at other parts of the piece to add a bit more orchestral goodness to the movement.

Thus, the musical example for the month is Landscape 4: Sand Dunes. This recording not only features the Musiversal orchestra recordings (sans horns), but also has Carl Bugbee’s guitar tracks and my harmonica tracks (bass and chromatic) . . .

In the month of July I also managed to write an orchestral part for Landscape 7: Mountains, which I hope to get recorded in the next few months. Unfortunately I am a bit behind my August work on Landscape 9: Desert, but I hope to catch up a bit on my writing quota this afternoon.

Landscapes Update: July 2nd, 2019

It has been a productive month. Not only did I finish Landscape 7: Mountains, but I am halfway through my goal for July, composing Landscape 8: Palm Glade. On the recording side of things, I re-recorded some of the bass parts for Landscape 3, and recorded the bass part for Landscape 5. I am also halfway through recording the bass part for Landscape 6. Finally, I took some time this month to revise the solo parts for the first six Landscapes.

A few days ago I had a recording session with Musiversal for the orchestral parts for Landscape 4: Sand Dunes. It went well, but I have not received the files from them yet, let alone taken the time to edit them. Therefore, this month I’ll be sharing Landscape 4 without the orchestral part. Don’t be disappointed, though: not only does it feature Carl Bugbee’s guitar tracks, but it also includes harmonica tracks that I recorded.

I used some professional development money I had left over at the end of the year to buy a Swan bass harmonica and a Hohner Chromonica 64. The latter instrument is nearly identical to a Hohner Chromonica 64 that my father bought when he served in the Navy during the Korean conflict (though he served in the Mediterranean), so this instrument is pretty special to me. I have my father’s Chromonica, but it needs about $200 of work to bring it back into tune, which I intend to have done in the next year or so. The Chromonica 64 is a four octave instrument, and is played by none other than Stevie Wonder. The Swan bass harmonica is built in China and is significantly less expensive than other bass harmonicas, but as you’ll hear from the recording, it has a really fat, robust tone.

Anyway, I hope you enjoy the realization of Landscape 4, and I’ll update you all on the progress in a month or so.