Building Your Band

A better conversation about music, with David Loftis and Peter Bulanow


The Best Keyboard

July 1, 2017 By Pete Bulanow

Music pastors and aspiring keyboardists occasionally ask my take on the best keyboard for modern worship music, and while I’m happy to discuss differing synthesis techniques, interface philosophies, and the personalities of manufacturers, my take is unequivocal: the best keyboard is the keyboard you own.

Why is that? Well, if you don’t have your own keyboard, you’re not going to be familiar with its sounds or how to navigate it, so you’re only going to use a handful of patches for any particular set. You’re also not going to be familiar with how those sounds respond to note velocity, let alone aftertouch or the mod wheel, nor are you going to have a good sense of how they sound solo’d or how they sit in a mix. Consequently, you are going to play tentatively because you won’t have confidence in your sounds or how they respond, and when a sound does something unexpected you’ll get spooked and back down. Additionally, the house sound engineer may just mute you if something loud sticks out and starts messing up their mix.

You need your own keyboard so that you can become intimately familiar with its sounds, and so you have a variety of your favorite patches, well organized in the user section, at your fingertips. You need to become intimately familiar with how those patches respond to note velocity, aftertouch, and the mod wheel so that you can create something dynamic that evolves, as our incredibly-made ears identify static sounds with ease. Furthermore, you need to have a good sense of how your patches sound both solo’d and in a mix (try playing along with MP3s at home).

The bottom line is, the only keyboard you will be able to play confidently is the one that you know inside and out. You will only know how to voice your chords to be both present and in their proper space if you’re intimately familiar with how those sounds respond.

Personally, it doesn’t matter who set up a keyboard or how awesome that keyboard is: if I’m not intimately familiar with that particular machine, I won’t use it during a gig, because all I would be doing is inviting trouble. I have turned down all kinds of fantastic gear, including Nords, rather than get bogged down in a new interface, get lost in a menu, and be unsure how a sound will respond (am I even in the right octave?). The exception is that I may use one sound from a keyboard I don’t own, if it adds real quality and depth to my sound. The best example is that I typically use the house piano wherever I go and add my rig to it. This works because I am already very familiar with how a piano responds, the house sound engineer already knows how that piano responds, including how to mic it if it’s a real/acoustic instrument, and I spend zero time fussing with it or trying to navigate a new interface.

So, yes, absolutely, if you have a Mac do get MainStage, and if you can afford a Nord Stage go for it, and I haven’t seen anyone using a Behringer DeepMind yet so I’d love to see that, and if you like the sound of something different from what everyone else is playing, even better to bring something new to the table! But you need your own keyboard. You probably need a stage instrument (meant for live music, with a simpler interface) more than you need a workstation (with a deeper interface and sequencing capabilities). But in the end, it’s not about the gear. In the end it’s about how the gear is used, and how you hear it.

If you’re a wanna-be keyboardist, you need to understand that much of the contribution you make, just like a guitarist, is via the tone and timbres you bring. And it is imperative that you take ownership of that, because you are going to hear sound slightly differently than everyone else. Buying a keyboard, getting to know that keyboard, selecting your favorite set of patches, tweaking (lightly editing) those patches so they are “yours”, and then understanding how those sounds work in a song are all part of the craft of being a modern keyboardist. And have no doubt, this is craftsmanship.

If you don’t have a budget, just start hitting Craigslist, then audition the keyboards for sale there on YouTube before you bother to meet up. If you have any kind of a budget, hit the biggest music store in your city, in the morning and on a weekday so the store is empty, and bring your own set of headphones to audition every keyboard they have until you start to hear the differences and start to have an opinion. Then buy your first piece of gear (from that store!! You want them to continue to be in business, right?). Over the next few years your tastes may change, or you’ll figure out what your machine does or doesn’t do well enough to consider a new piece of gear. Then don’t get rid of the first one! Instead, add the new ’board to your sound so you don’t lose anything you already have, slowly get your head around the new interface, and incorporate the capabilities of your second keyboard into your live playing. Congratulations on starting down your path of becoming a modern keyboardist with your unique voice!

Does your experience back this up? Do you see things differently or have other advice? Leave your questions or comments in the notes below or contact me directly!

Filed Under: Blog Tagged With: Analog, Digital, Keyboard, MainStage, Nord, Piano, Programming, Sound, Synthesis

Your Keyboard Sound

January 18, 2017 By Pete Bulanow

If you’re a dyed-in-the-wool keyboardist, you probably recognize Tom Oberheim, Alan Pearlman, Roger Linn, Bob Moog, and Dave Smith as the names behind our first electronic instruments as well as many of today’s virtual analog synths. This interview in Keyboard Magazine with Dave Smith talks about the intervening years of analog synthesis since digital keyboards, and in particular sample-playback synths (like the Korg M1), were invented.

Was that the beginning of analog’s long slumber?

The real death blow was when the Korg M1 came out, which was by far the most popular keyboard ever made. It even outsold the DX7. Finally, here was what keyboard players always wanted—real piano, brass, strings, organs, basses, leads. This is somewhat unfair, and I’ll tell you why, but it put synthesis innovation into a 20-year dark age. Ever since the M1, every company just kept building M1s. More voices, more and better sounds, more precision—just more, more, more.

In some ways, they’re still doing it. So why was that unfair to say?

Because it’s what 90 percent of keyboard players need to play gigs, which is different from players who are into synths for their own sake. What’s cool and different now is people are once again playing synths as synths because they’ve already got their Nords and Motifs and so forth to cover all the other sounds they need. So if you buy a synth now, it’s because you actually want to play a synth. That’s why I think this time it’s going to be different from last time. There’s not going to be something digital that comes in and makes true synthesizers go away again.

When I played a DX7 in the ’80s, I was mostly playing sounds that I created from scratch. But the first keyboard I bought was a Korg M1, precisely because it gave me what I thought I wanted and what I thought keyboardists were supposed to do: emulate “real” instruments.

It took my love for the acoustic piano to finally understand that sample playback instruments have a very real static component to them that our ears easily detect, whereas a real instrument is constantly evolving.

In this way, a real instrument is much more like a waterfall or a fire – similar and consistent, but never exactly the same, always slightly different and evolving. More like a fractal.

While I’m not against sample playback, and I’m not against attempting to emulate real instruments (I do this all the time), my fascination is really with sounds that don’t produce a recognizable picture in your mind when you hear them, yet are nevertheless emotive.

How an unrecognizable / unvisualizable sound can be so compelling is a profound mystery to me, but one that I love exploring.

All that to say, the “dark ages” Dave Smith references were a period in the wilderness, searching for the promised land of perfect emulations of real instruments, when it never crossed our minds that perhaps what keyboards are really good at is something else altogether. Keyboards are good at synthesizing sound.

So I do use sample playback in my arsenal, but more than that, I am looking for compelling sounds that evolve and change like a waterfall or like a fire, just like a real instrument does, so that our highly-attuned ear stays interested.

Food for thought, and I welcome your feedback.

 

Filed Under: Blog Tagged With: DX7, Emulation, Keyboard, Keys, M1, Sample Playback, Sound, Synthesis

Sound – Feedback

November 4, 2014 By Pete Bulanow

 

Microphones (and pickups) are sources of feedback

Seemingly one of the great mysteries of running sound is the source and cause of feedback. Perhaps the greatest sin one can commit behind the sound board is allowing feedback. Running sound truly is a thankless job. If everything is going right, no one takes notice. So thank your soundman today!

Since we paid attention in math class, can we use math to understand feedback? The answer, to all of our relief, is a resounding “yes”. Incredibly, the language of mixing and sound is entirely one of engineering (as is perhaps all of reality), which makes me happy.

Feedback implies the idea of a loop. All the math we need to understand feedback is multiplication and the concept of “unity”, or 1: multiply any number by 1 and you get that number back again.

But if you multiply by a number smaller than 1, you get a smaller number, and if you keep multiplying, the numbers keep getting smaller. Similarly, if you multiply by a number larger than 1, you get a larger number, and if you keep multiplying, the numbers keep getting larger. This is the essence of a feedback loop and why it can seem to hang on a knife’s edge – because it does.

To be clear, the loop we are talking about is sound that goes into a microphone, then into a mixing board where it might get EQ’d, then over to an amplifier, and then out via the main and/or monitor speakers.

The loop occurs when some of that sound leaks back into the microphone. If the amount of sound that leaks in is greater than 1× what it was originally, by even a tiny little bit like 1.001×, that sound will start feeding back on itself and continue getting louder. If it’s smaller, like 0.9999×, the sound may ring momentarily, but it will die out.
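To make that knife’s edge concrete, here is a tiny Python sketch of a signal making many trips around the loop. The gain values and trip count are illustrative numbers I chose, not measurements of any real system:

```python
# Toy model of a feedback loop: each trip around the loop
# multiplies the signal level by the loop gain.

def ring_out(level, loop_gain, trips):
    """Return the signal level after `trips` passes around the loop."""
    for _ in range(trips):
        level *= loop_gain
    return level

# Loop gain just under unity: the ring dies out.
print(ring_out(1.0, 0.9999, 20000))   # roughly 0.135, fading away

# Loop gain just over unity: runaway howl.
print(ring_out(1.0, 1.001, 20000))    # roughly 4.8e8, feedback
```

Notice how close 0.9999 and 1.001 are, yet one decays and one explodes – that is exactly why a system can sound fine all rehearsal and then howl when someone nudges a fader.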

Knowing what we know about the nature of sound – that the atoms of sound are sine waves – this feedback could occur at any frequency our sound system is capable of reproducing, which is another reason we cut, and try not to boost, gain at specific frequencies when using EQ.

Furthermore, the acoustics of the room come into play, as every room has a bunch of resonant frequencies (just like a coke bottle or a flute) that are more prone to building gain. Even the angle of the microphone with respect to the speakers plays a role, as many mics purposefully reject sound arriving from certain angles for exactly this reason.

Positive feedback like we discussed above is ultimately unstable, and applied socially it can be unhealthy. Positive feedback can make a diva or a spoiled child. Negative feedback is required for stability.

Filed Under: Blog Tagged With: Feedback, Math, Sound, Sound Engineer

Sound: Quiz

October 28, 2014 By Pete Bulanow

Unclipped sine wave compared to a sine wave 1dB higher that is clipped

What happens when a signal clips (runs out of headroom, or hits a digital ceiling, or an amplifier runs out of power)?

Well, when a sound (such as a sine wave) clips, we start to see corners forming that look like a square wave. So what is happening to that sound? We know that the sharp corners on a square wave are high frequencies consisting of odd harmonics – and that is exactly what clipping adds.

Spectrogram of 40 Hz sine wave 1 dB into hard clipping

So on the one hand, odd harmonics are not atonal, so as a signal starts to clip, the sound can still be pleasing and musical, since everything is still related by integer harmonics – at the very least it’s not inharmonic!

But on the other hand, pushing that much power, normally found in the low frequencies, up into the higher frequencies, which need and use far less power, is a formula for disaster.

THIS is how speakers get blown: when an amplifier runs out of power. As shown above, when (for example) a 40 Hz low-frequency signal meant for the big woofer clips because an underpowered amplifier runs out of headroom, basically a square wave is formed, converting much of that signal into the typical square-wave odd harmonics. These odd harmonics are higher in frequency, so they get directed to the little tweeter, which then fries.
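You can watch this happen in a few lines of Python with NumPy (the sample rate and the +1 dB drive level are arbitrary choices for the sketch, mirroring the spectrogram above): hard-clip a 40 Hz sine and read the energy off its spectrum.

```python
import numpy as np

# Hard-clip a 40 Hz sine driven 1 dB past the ceiling, then look at
# the spectrum. Sample rate and drive level are illustrative choices.
fs = 8000                      # samples per second
t = np.arange(fs) / fs         # one second of signal
drive = 10 ** (1 / 20)         # +1 dB over the clip point
clipped = np.clip(drive * np.sin(2 * np.pi * 40 * t), -1.0, 1.0)

# Scale the FFT so each bin reads out that sine's amplitude.
spectrum = 2 * np.abs(np.fft.rfft(clipped)) / len(t)
# With a one-second window, bin k is exactly k Hz.
for mult in (2, 3, 4, 5):      # 80, 120, 160, 200 Hz
    print(f"{40 * mult} Hz: {spectrum[40 * mult]:.4f}")
```

The even multiples (80 and 160 Hz) stay at essentially zero, while the odd multiples (120 and 200 Hz) pick up real energy – the square-wave-style odd harmonics that end up aimed at the tweeter.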

Contrary to conventional wisdom, too little amplifier blows speakers. You can never have too much amplifier.

And now the term “total harmonic distortion” makes a lot more sense!

Filed Under: Blog Tagged With: Math, Sound

Sound 202 – Inharmonic Timbre

October 24, 2014 By Pete Bulanow

Welcome to your second semester of Timbre! I hope you have everything from first semester under your belt! 🙂

Integer Harmonics

Previously, we looked at the harmonic structure of some nice, pretty, harmonic-sounding sounds. That is, sounds that seemed to have a very clear note (a tonal center, if you will) and a nice, even, pleasing timbre. We did this by looking at platonically ideal waveforms like square waves and sawtooth waves – which are actually common starting points in many synthesizers.

Acoustic instruments are generally pretty harmonic, just a little richer sounding; they mostly follow these same integers in their arrangement of harmonics. Although often, when I hear instruments from the Far East, I hear less harmonic, or inharmonic, sounds that sound “clangy” to my ears. I am not at all an expert on these instruments, however, so I’ll stop there.

White Noise

But I am somewhat of an expert at the piano, which employs stretch tuning, meaning that notes are tuned progressively sharper as you go up the piano. This is done to align the fundamentals of higher notes with the slightly sharp harmonics of lower notes. This is also why you will see seasoned string musicians tune their instruments to their harmonics.

So inharmonic sound sits on a continuum: starting with stretch-tuned pianos, extending through clangy sounds, then atonal sounds, and finally random noise. To get that type of sound, we start with harmonics that are increasingly unrelated to the fundamental by whole numbers, extend to atonal sounds such as a snare drum with rattles, and end with completely random pink or white noise.
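The piano end of that continuum can be sketched numerically. A common textbook model makes the nth partial of a stiff string slightly sharper than n times the fundamental; the coefficient B below is a made-up illustrative value, not a measurement of any real string:

```python
import math

# Partials of a stiff string: f_n = n * f0 * sqrt(1 + B * n^2),
# so each partial lands a little sharp of the ideal integer harmonic.
f0 = 110.0   # nominal fundamental (the A two octaves below Concert A)
B = 0.001    # inharmonicity coefficient; illustrative value only

partials = [n * f0 * math.sqrt(1 + B * n * n) for n in range(1, 7)]
for n, p in enumerate(partials, start=1):
    ideal = n * f0
    print(f"partial {n}: {p:7.2f} Hz ({p - ideal:+.2f} Hz vs. harmonic)")
```

The higher the partial, the sharper it runs – which is exactly why tuners stretch the octaves, lining the fundamentals of upper notes up with these slightly sharp partials.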

Filed Under: Blog Tagged With: Math, Sound

Sound 201 – Timbre

October 17, 2014 By Pete Bulanow

Sine, square, triangle, and sawtooth waveforms

Before we dig deeper, let’s remind ourselves of some basics:

Typically we humans hear from as low as 20 Hertz (Hz, or vibrations per second) up to 20,000 Hz (also said 20 kHz).

If we were to hear a note at 440 Hz, that note would be the A above middle C, also known as “Concert A”, which is the note an orchestra tunes to.

The question we pose is, “How would we be able to tell if a 440 Hz ‘Concert A’ came from a violin or a clarinet?” The answer is, we can tell by the harmonics – the mathematically related sine waves above 440 Hz that give each instrument its characteristic sound, or timbre.

Let’s understand this better by looking at a mathematically ideal square wave and sawtooth wave. For reference, a square wave sounds somewhat hollow and reedy, like a clarinet (which likewise emphasizes odd harmonics), while a sawtooth wave sounds brighter and more brassy or string-like – many early string emulations were built on sawtooth waves.

Animation of the additive synthesis of a square wave with an increasing number of harmonics

So mathematically, a square wave contains only the odd harmonics (1, 3, 5, 7, 9, etc.), with the nth harmonic at 1/n the amplitude of the fundamental, while a sawtooth wave contains all harmonics (1, 2, 3, 4, 5, etc.), also falling off as 1/n.

What we see as we add harmonics is that the waveform gets less wobbly and more mathematically precise, and eventually (with the harmonics going out to a theoretical infinity, requiring an infinite frequency response) we have a perfectly sharp corner.
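Here is that additive recipe as a short Python/NumPy sketch (the sample grid and the harmonic counts are arbitrary choices): sum odd harmonics at 1/n amplitude and watch the approximation close in on an ideal square wave.

```python
import numpy as np

# Additive synthesis of a square wave: odd harmonics 1, 3, 5, ...
# with the nth harmonic at 1/n the amplitude of the fundamental.
t = np.linspace(0, 1, 1000, endpoint=False)   # one cycle

def square_approx(num_harmonics):
    wave = np.zeros_like(t)
    for n in range(1, 2 * num_harmonics, 2):  # n = 1, 3, 5, ...
        wave += np.sin(2 * np.pi * n * t) / n
    return 4 / np.pi * wave                   # scale so the ideal is ±1

ideal = np.sign(np.sin(2 * np.pi * t))        # perfect square wave
for k in (3, 10, 50):
    err = np.mean(np.abs(square_approx(k) - ideal))
    print(f"{k} harmonics: mean error {err:.4f}")
```

The average error keeps shrinking as harmonics are added, though with any finite number of harmonics the corner never becomes perfectly sharp – the ripple that clings to the corner is the Gibbs phenomenon.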

Thinking about sound as sine waves lets us make sense of a lot of things which we will talk about soon.

Filed Under: Blog Tagged With: Math, Sound

Sound 101 – Sine Waves

October 16, 2014 By Pete Bulanow

Transforming Sound from the Time Domain to the Frequency Domain

Probably the most foundational thing any musician or sound engineer could take the time to understand is sound. And probably the most important way to do that is to understand Fourier’s theorem.

If you ever said to your math teacher “how am I ever going to use this in the real world,” you are about to eat those words. I hope they’re delicious.

Fourier’s theorem says that any waveform (i.e. timbre) can be made by adding sine waves at various multiples (i.e. harmonics) of the fundamental (i.e. the note).

More mathily: Fourier’s theorem transforms sound from the time domain (the way we see and experience it) and rotates it 90 degrees to look at it sideways in the frequency domain (which is what actually provides insight).

The six arrows represent the first six terms of the Fourier series of a square wave (they are sine waves!). The two circles at the bottom represent the exact square wave (blue) and its Fourier-series approximation (purple).

Put another way, Fourier’s theorem shows us that sine waves are the atoms of sound.
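As a concrete sketch (Python with NumPy; the frequencies and amplitudes are arbitrary choices of mine), we can build a tone out of three sine “atoms” and then recover exactly those atoms with a Fourier transform:

```python
import numpy as np

# Build a tone from three sine-wave "atoms", then take it apart again.
fs = 1000                     # samples per second; one-second window
t = np.arange(fs) / fs
tone = (1.00 * np.sin(2 * np.pi * 5 * t)      # fundamental at 5 Hz
        + 0.50 * np.sin(2 * np.pi * 10 * t)   # 2nd harmonic, half as loud
        + 0.25 * np.sin(2 * np.pi * 15 * t))  # 3rd harmonic, quieter still

# FFT magnitude, scaled so each bin reads out its sine's amplitude.
spectrum = 2 * np.abs(np.fft.rfft(tone)) / len(t)
peaks = [k for k, level in enumerate(spectrum) if level > 0.01]
print(peaks)                                   # [5, 10, 15]
print([round(spectrum[k], 2) for k in peaks])  # [1.0, 0.5, 0.25]
```

Every frequency and amplitude we put in comes back out, and nothing else does – the waveform really was nothing but those sine waves.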

Isn’t that cool? It doesn’t get much more awesome than that, people. All of a sudden, sound is much less mysterious.

And the more you think about it, the more situations it helps you make sense of (and the more you notice people who don’t understand this getting things wrong), the less mysterious sound becomes.

(So that’s why they test our hearing with “pure” sine waves: they’re checking our hearing at a given frequency and don’t want our ability to hear overtones to affect the results.)

Filed Under: Blog Tagged With: esoteric, Fouriers, Math, Sound, Theory

“Is there, like, a specific place I’m supposed to be looking?”

August 26, 2014 By Pete Bulanow

Grand Pianos are heavy (Pete Bulanow)

New Zealander Lorde recently made history by being the first female and first solo artist to win the Best Rock Video award at the VMAs. In her short time on stage, somewhat bewildered by it all, she asked the question: “Is there, like, a specific place I’m supposed to be looking?”

This is a telling question. If we don’t want people to be bewildered on Sunday morning, we need to have an answer to this question. The visual “melody” of the song if you will, must be clear. Lights can help create this focal point, but at a minimum, the worship leader must be visible. More than once, I’ve seen a worship leader sitting at a piano on the ground level with an unidentified voice coming from the sound system. If that worship leader needs to play a grand, get that piano on stage, or get them playing a big sample-playback keyboard on the stage. We have to get this right.

Let’s talk about sound for a moment.

Reality is generally coherent. For example, when a twig snaps in the forest behind you, that means something is behind you. With artificial environments, sight and sound can be decoupled (become incoherent), to the detriment of the experience and the bewilderment of the observer.

Certainly, at a bare minimum, have your speakers up front where things are happening. Yet more than once I’ve seen speakers in the middle or even at the back of the church. The point is not just to make sound louder; it’s to make it all make sense. Disembodied voices are disorienting.

Now if you have a nice stereo setup, it makes sense to align the audio with your visuals. If backing vocals are slightly to the left, it may improve coherence to mix them that way. But if your drum kit is off to one side, I would still recommend panning it to the center of your mix (same with the bass), or if panning something off to one side means you will hear a different mix depending on where you sit in the house, then keep everything centered.

The goal is to make it easy for people to understand what is going on and minimize the artificiality of technology.

Filed Under: Blog Tagged With: esoteric, Mix, Sound, Sound Engineer, Tech, Worship Leader

© meltingearth