If you’re at the Experience Conference Sep 5th to 8th in Orlando, and you’re a listener of the podcast, I would love to connect with you!
If you’re a creative, you probably have an eye for Apple’s design and you understand computing at the intersection of technology and the humanities. Over the last few years, we’ve seen quite a few signs that Apple is focusing heavily on consumers (not necessarily a bad thing) and missing signs that they’re focusing on professionals (which could be problematic). While it might make good economic sense to focus on consumers, this strategy is worrisome from an overall creative-ecosystem perspective. After all, if Apple doesn’t provide professional-grade hardware and software for creative professionals, where will those media products come from? At Apple’s 2017 WWDC (Worldwide Developers Conference), we were given our first tangible proof that Apple has not lost their way, wants to support creative professionals, and wants to do so in a big way. But before we get to that, let’s first briefly review the dark ages we’ve been living in for the past few years.
First, let’s look at their professional software. Way back in 2002, Apple acquired Logic from Emagic. Although Apple didn’t quite turn it into GarageBand (very consumer-y software), Logic has clearly slipped from its position as the preeminent audio sequencer. Similarly, Apple acquired Final Cut Pro from Macromedia, and in 2011 transformed it into something a lot more like iMovie with the Final Cut Pro X release, which traded many pro features for a slicker interface and has been slow to recover them. Finally, Apple created Aperture for doing professional post-production on RAW images, similar to Adobe’s Lightroom; however, support for Aperture halted altogether. MainStage remains a gift to keyboardists (particularly given Apple’s acquisition and inclusion of the Alchemy soft synth in MainStage) as well as to guitarists, although this one application has been looking like an outlier in Apple’s professional portfolio.
Their hardware has been telling a similar story of support for consumers with few real pro-level capabilities. The MacBook Pro I’m typing on, a top-of-the-line Mid 2015 model purchased at the start of 2017, maxes out at just 16 GB of RAM. The reasons for this seem to point back to a lack of low-power RAM support at larger capacities. High-power RAM could be used, but that would impact heat dissipation and, ultimately, “thinness” by making the machine thicker. Which is exactly the point: many pros would gladly trade a hit to battery life and thinness for 32 GB or 64 GB of RAM, but that’s not how Apple rolls. Apple’s latest innovation – the MacBook Pro featuring a Touch Bar in place of the function keys – adds $800 to the bottom line but doesn’t offer a faster machine or more RAM. So what we have are machines not getting speed or capacity bumps, but rather a very cool and tone-deaf Touch Bar in their place. Worse still, their halo machine, the Mac Pro, has not been updated for over 1,000 days but is still for sale at the same price. Here is a really good summary of the state of Apple in 2016.
In April 2017, Apple called in five journalists for a face-to-face meeting. When Apple calls you, you come. It’s a fascinating story, and John Gruber surmised that the meeting must be about the Mac Pro: either Apple was going to discontinue it, or they were going to support it but didn’t have anything ready to show in the near future, e.g., at the 2017 WWDC.
So the short of the story is that Apple had painted themselves into a thermal corner with the innovative “garbage can” design of the Mac Pro. Apple had banked on two smaller GPUs plus one CPU, while the industry went with one larger GPU plus one CPU. With that mistake acknowledged, we now know we have new Mac Pros coming – not in 2017, but they’re coming.
That’s all we had to work with until the 2017 WWDC on June 5th. Tim Cook said this was going to be the best WWDC ever, and now that it’s all over, it’s hard to call him a liar.
The bottom line is that we creatives are now in great shape. The iPad Pros are looking more and more focused on doing real professional, productive work with iOS 11. More importantly, the MacBook Pros and iMacs look fantastic. Both now have the option for 32 or 64 GB of RAM, meaning either machine is dialed in for the kind of work we do. Given that Mac Pros are coming, it looks like we will have a fantastic roundup of machines to serve our creative needs.
But wait, there’s more! We also have an iMac Pro coming in December, featuring 8-, 10-, or 18-core Xeon processors and up to 128 GB of RAM, with improved 5K displays and the ability to drive additional monitors. These machines are incredible, and it’s exciting that Apple is aware that creatives have been falling in love with the iMac due to its integrated design and significant capabilities. My dream machine might be a bottom-of-the-line iMac Pro (partially because they will come in space gray), and I wonder if even that would be overkill. I’d probably have to invest several thousand dollars in instruments and effects and need to score a feature-length movie to tax that machine (which I’m quite pleased to do with proper funding… anyone?). These are the kinds of machines that developers, pro video folks, and physicists need. That these machines exist, and that Mac Pros are still coming, is proof that we have so much more excellence to look forward to.
This interview hosted by John Gruber with beloved Apple execs Phil Schiller and Craig Federighi (AKA Hair Force One) offers a very insightful review of the “State of Apple” and its support for pro creatives. I highly recommend it.
Music pastors and aspiring keyboardists occasionally ask my take on the best keyboard for modern worship music, and while I’m happy to discuss differing synthesis techniques, interface philosophies, and personalities of manufacturers, my take is unequivocal: the best keyboard is the keyboard you own.
Why is that? Well, if you don’t have your own keyboard, you’re not going to be familiar with its sounds or how to navigate it, so you’re only going to use a handful of patches for any particular set. You’re also not going to be familiar with how those sounds respond to note velocity, let alone aftertouch or the mod wheel, nor are you going to have a good sense of how they sound soloed or how they work in a mix. Consequently, you are going to play tentatively because you won’t have confidence in your sounds or how they respond, and when a sound does something unexpected, you’ll get spooked and back down. Additionally, the house sound engineer may just mute you if something loud sticks out and starts messing up their mix.
You need your own keyboard so that you can become intimately familiar with its sounds, and so you have a variety of your favorite patches, well organized in the user section, at your fingertips. You need to become intimately familiar with how those patches respond to note velocity, aftertouch, and the mod wheel so that you can create something dynamic that evolves, as our incredibly made ears identify static sounds with ease. Furthermore, you need to have a good sense of how your patches sound both soloed and in a mix (try playing along with MP3s at home).
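For the programming-minded, “how a patch responds to velocity” is just a mapping from MIDI velocity (1–127) to loudness or timbre, and every synth shapes that curve differently. Here is a minimal sketch in Python of one common shape; the function name and the power-curve approach are illustrative assumptions, not any particular keyboard’s actual implementation:

```python
def velocity_to_gain(velocity, curve=2.0):
    """Map a MIDI velocity (1-127) to a gain between 0.0 and 1.0.

    curve > 1 compresses soft playing (you must strike hard to get loud);
    curve < 1 expands it (even gentle playing comes through loudly).
    This is one illustrative shape, not a standard or a specific synth's curve.
    """
    clamped = max(1, min(int(velocity), 127))  # keep within MIDI range
    normalized = clamped / 127.0               # 0..1
    return normalized ** curve

# A gentle touch vs. a hard strike on the same patch:
soft = velocity_to_gain(40)    # noticeably quiet with the default curve
hard = velocity_to_gain(120)   # close to full volume
```

Two patches with the same samples but different curves will feel like different instruments under your fingers – which is exactly why you have to learn how *your* patches respond rather than assume a borrowed keyboard will behave the same way.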
The bottom line is that the only keyboard you will be able to play confidently is the one you know inside and out. You will only know how to voice your chords to be both present and in their proper space if you’re intimately familiar with how those sounds respond.
Personally, it doesn’t matter who set up a keyboard or how awesome that keyboard is; if I weren’t intimately familiar with that particular machine, I would never use it during a gig, because all I would be doing is inviting trouble. I have turned down using all kinds of fantastic gear, including Nords, rather than get bogged down in a new interface, get lost in a menu, and be unsure how a sound will respond (am I even in the right octave?). The exception is that I may use one sound from a keyboard I don’t own, if it’s something important that adds quality and depth to my sound. The best example of this is that I typically use the house piano wherever I go and add my rig to it. This works because I am already very familiar with how a piano responds, the house sound engineer already knows how that piano responds (including how to mic it if it’s a real/acoustic instrument), and I spend zero time fussing with it or trying to navigate a new interface.
So, yes, absolutely: if you have a Mac, do get MainStage; if you can afford a Nord Stage, go for it; and I haven’t seen anyone using a Behringer DeepMind yet, so I’d love to see that. If you like the sound of something different than what everyone else is playing, even better to bring something new to the table! But you need your own keyboard. You probably need a stage instrument (meant for live music, with a simpler interface) more than you need a workstation (with a deeper interface and sequencing capabilities). But in the end, it’s not about the gear. In the end, it’s about how the gear is used, and how you hear it.
If you’re a wanna-be keyboardist, you need to understand that much of the contribution you make, just like a guitarist, is via the tone and timbres you bring. And it is imperative that you take ownership of that, because you are going to hear sound slightly differently than everyone else. Buying a keyboard, getting to know that keyboard, selecting your favorite set of patches, tweaking (lightly editing) those patches so they are “yours”, and then understanding how those sounds work in a song are all part of the craft of being a modern keyboardist. And have no doubt, this is craftsmanship.
If you don’t have a budget, just start hitting Craigslist up, then audition the keyboards for sale there on YouTube before you bother to meet up. If you have any kind of a budget, hit the biggest music store in your city – in the morning and on a weekday, so the store is empty – and bring your own set of headphones to audition every keyboard they have until you start to hear the differences and start to have an opinion. Then buy your first piece of gear (from that store! You want them to continue to be in business, right?). Over the next few years your tastes may change, or you’ll figure out what your machine does or doesn’t do well enough to consider a new piece of gear. When that happens, don’t get rid of the old one! Instead, add the new ’board to your sound so you don’t lose anything you already have, and as you slowly get your head around the new interface, incorporate the capabilities of your second keyboard into your live playing. Congratulations on starting down your path of becoming a modern keyboardist with your unique voice!
Does your experience back this up? Do you see things differently or have other advice? Leave your questions or comments in the notes below or contact me directly!
If you’re a dyed-in-the-wool keyboardist, you probably recognize the names Tom Oberheim, Alan Pearlman, Roger Linn, Bob Moog, and Dave Smith as the people behind our first electronic instruments as well as many of today’s virtual analog synths. This interview in Keyboard Magazine with Dave Smith covers the intervening years of analog synthesis since digital keyboards, and in particular sample-playback synths (like the Korg M1), were invented.
Was that the beginning of analog’s long slumber?
The real death blow was when the Korg M1 came out, which was by far the most popular keyboard ever made. It even outsold the DX7. Finally, here was what keyboard players always wanted—real piano, brass, strings, organs, basses, leads. This is somewhat unfair, and I’ll tell you why, but it put synthesis innovation into a 20-year dark age. Ever since the M1, every company just kept building M1s. More voices, more and better sounds, more precision—just more, more, more.
In some ways, they’re still doing it. So why was that unfair to say?
Because it’s what 90 percent of keyboard players need to play gigs, which is different from players who are into synths for their own sake. What’s cool and different now is people are once again playing synths as synths because they’ve already got their Nords and Motifs and so forth to cover all the other sounds they need. So if you buy a synth now, it’s because you actually want to play a synth. That’s why I think this time it’s going to be different from last time. There’s not going to be something digital that comes in and makes true synthesizers go away again.
When I played a DX7 in the ’80s, I was mostly playing sounds that I created from scratch. But the first keyboard I bought was a Korg M1, precisely because it gave me what I thought I wanted and what I thought keyboardists were supposed to do: emulate “real” instruments.
It took my love for the acoustic piano for me to finally understand that sample-playback instruments have a very real static component that our ears easily detect, whereas a real instrument is constantly evolving.
In this way, a real instrument is much more like a waterfall or a fire – similar and consistent, but never exactly the same, always slightly different and evolving. More like a fractal.
While I’m not against sample playback, and I’m not against attempting to emulate real instruments (I do this all the time), my fascination is really with sounds that don’t produce a recognizable picture in your mind when you hear them, yet are nevertheless emotive.
How an unrecognizable / unvisualizable sound can be so compelling is a profound mystery to me, but one that I love exploring.
All that to say, the “dark ages” Dave Smith references were this period in the wilderness, looking for the promised land of perfect emulations of real instruments, when it never crossed our minds that perhaps what keyboards are really good at is something else altogether. Keyboards are good at synthesizing sound.
So I do use sample playback in my arsenal, but more than that, I am looking for compelling sounds that evolve and change like a waterfall or like a fire, just like a real instrument does, so that our highly-attuned ear stays interested.
Food for thought, and I welcome your feedback.
When we hold a workshop, we start out by talking about the roles for each instrument. Knowing the role of each instrument goes far to inform what everyone should play.
But even without that understanding, we have clear examples all around us of the kinds of things we should play: original studio tracks. If your playing doesn’t basically line up with the kinds of things you hear on records, you may be overplaying.
Session players are the ones who get the call to play in the studio while the tape is rolling and a bunch of people are sitting around charging by the minute for their time – when you need to get it right the first time. Toto is a band that formed out of session players, so in many ways they are a textbook.
This is a breakdown of a famous Toto song from the ’80s called “Rosanna,” which you can read all about on the Wikipedia page. Other than the fact that the announcers talk too much over the tracks, this really does go far to break down just how little is needed, yet how significant each contribution is. If your playing is significantly different from what is on here, it’s time to rethink some things.
A couple things jump out at me listening to this:
Jeff Porcaro on Drums – he is famous for just playing the groove and not playing a lot of fills. My kind of drummer, and exactly what you need most Sunday mornings.
Steve Porcaro on Keys – This really is textbook keyboard playing. Something as simple as a roll down at the right time can shift the whole song.
Steve Lukather on Guitar – Note just how tasteful his playing is when called to play rhythm. Don’t be afraid to step out a little when asked to solo.
Vocal Harmonies – Everything should start out with melody. You build harmonies slowly. Blend is everything. You can actually get away with a lot of harmonies if you’re tasteful and intentional.
Finally, everybody uses contrasts to make certain things speak, and other things lay back.
What jumps out at you?
We are so proud of BYB Podcast contributor, bassist, and uber-dynamic speaker Aron Teo Lee, who was recently featured in his very own TEDx talk. “The Right Amount of Wrong” is something Teo talks about all the time when he is producing or co-producing music. It’s a counterintuitive idea that seems to consistently produce compelling results. Take 15 minutes and be inspired:
If you’ve been to one of our music production workshops, you know how hard it is for the rest of us to compete after Teo is done talking about playing bass, but we do our best 😉
If you haven’t seen this conversation between Bono and Eugene Peterson, it’s worth taking twenty minutes of your time for this loving critique of Christian music from two people who are really trying to contextually honor Christ and the Gospel.
Also, if you’re not aware, Fuller Theological Seminary is really trying to engage with film, e.g. at Sundance, and this is one of their products.
Something that I think would have been helpful to me in my musical journey was some sort of validation of where I’ve been, where I am, and a hint of what is ahead of me.
This handy chart that I created is one way to break things out, and it’s the way I hear many folks describe their musical journey.
The First Stage is really the beginner stage: when you first pick up your instrument and don’t know a thing about a scale or a chord or a time signature. It’s about acquiring those basics.
The Second Stage is the “doing your homework” phase of musical progression, where you put in your time – maybe even most of your 10,000 hours – to gain proficiency on your instrument. If you don’t love your instrument by this point, you get out.
The Third Stage represents a paradigm shift. It’s the first time you start focusing not on what you’re playing, but on what you’re not playing. It’s about creating space for others and responding to what is going on. If you’re copying the record at this point, this is where your eyes get opened to what session players are actually doing. They’re not the busy little doodlers we are when we play by ourselves. There is an economy to what they play. This is when we get knocked back by the significance of The Edge when he says:
Notes actually do mean something. They have power. I think of notes as being expensive. You don’t just throw them around. I find the ones that do the best job and that’s what I use. I suppose I’m a minimalist instinctively. I don’t like to be inefficient if I can get away with it. Like on the end of “With or Without You”. My instinct was to go with something very simple. Everyone else said, “Nah, you can’t do that.” I won the argument and I still think it’s sort of brave, because the end of “With or Without You” could have been so much bigger, so much more of a climax, but there’s this power to it which I think is even more potent because it’s held back… ultimately I’m interested in music. I’m a musician. I’m not a gunslinger. That’s the difference between what I do and what a lot of guitar heroes do. —The Edge (1991)
The Fourth Stage is when you’ve moved past trying to copy your influences and you prefer your own voice. This is when you can apply your sound to original material without second guessing yourself. This is also when you might listen to the record, but you don’t need to, because you understand how to serve the song. This level represents the true expert, the specialist in music.
The Fifth Stage is reserved for the very few who are willing to be extremely brave and vulnerable and who continue to distill their voice and find something so new as to be thought of as original. Very often this occurs across genres or it is art that transcends genres. This stage of musicianship is reserved for those who change the way we hear music, and we’re never the same after that.
Once we move forward, we may still step backwards at times so that we can again move forward with a different vocabulary or an improved skill set. I think of Rush’s drummer Neil Peart, already a world-class drummer with 14 albums under his belt and a traditional rock style of hitting the snare (clearly at the Fifth Stage), who decided in 1994 to back up (to the Second Stage) and learn the looser, jazz-style traditional grip to find some fresh inspiration, the result of which can first be heard on Test for Echo.
What do you think? Do these stages help you think about where you are in your own musical journey? Are they helpful as you think about the musicians you play with, produce, or direct?
We just wrapped up a “Starting Point” workshop for 5th to 9th graders, a simplified version of our regular workshop curriculum. The whole event went so well, and the kids were awesome.
The remarkable thing about this workshop is that the majority of the kids (I think 60%) had never played an instrument before, and something like 90% had never played the instrument they were playing for the workshop (they decided to pick up a new instrument). Yet at the end of just four days they were playing musically, together, as a band. There were a number of times during the “Battle of the Bands” on the last day when our jaws were on the floor at what we were hearing, and we experienced genuinely compelling moments that caused emotion to well up inside. It was incredible.
If there is a thesis to Building Your Band, the podcast and the website, it’s this: you probably don’t need better players or more skills to sound better; you need better production. These kids were teachable, and they delivered the goods.
We’re excited to be partnering with Pleasant Garden to produce a Building Your Band Workshop geared toward the next generation of musicians and worship leaders. There is a fantastic panel of musicians lined up to do the instructing, and our curriculum has been refined for a younger audience.
If you would like to host a Building Your Band workshop at your church, please contact us to discuss how we can greatly accelerate Building Your Bands.