Add Realism to Your Synthesized Sequences
by Ethan Winer
This article first appeared in the November 1997 issue of Recording magazine.
Modern sampling synthesizers have come a long way from the beeps and squeals of the early analog models, which were easy to distinguish from a live performer playing a real musical instrument. Many instruments, such as the piano, drums, harp, and others that have an inherent fixed envelope, lend themselves well to sample playback. Once the note is sounded, it can only die away in a fixed and predictable manner. Indeed, it is difficult if not impossible to distinguish a well-recorded snare sample from the real thing. And working with good drum samples is often quicker and easier than trying to fight the inevitable leakage you get when miking and EQing real drums. But many other instruments - notably strings, brass, and woodwinds - can vary their volume and tone quality after the initial attack. This gives them a greater degree of expressive freedom, but also makes them much harder to emulate convincingly with a synthesizer. Horn stabs and sustained string lines can be made fairly believable without too much effort, but you have to work a lot harder with solo legato parts. (Legato is the musical term that describes a sustained playing style, where each note in a passage is held for its full written duration and joins smoothly with the subsequent note.)
For all the advances in modern synthesizers and their popularity, the soundtracks for most movies and TV shows still rely on scores that use real instruments and live players because of the expressiveness real instruments can impart. But even if you can't usually achieve 100 percent realism with the current crop of synthesizers, you can still come pretty close. Making sequenced synthesizers sound more realistic is essential not only for classical music styles but also for jazz and pop tunes, and any other genre where the goal is to accurately imitate live musicians playing real instruments.
One of the problems facing the one-person orchestra is knowing how to play many different instruments. While you are probably proficient at one or maybe two instruments, the ones you want to sequence are usually those that you can't play! And if you don't know how a particular instrument is played or all of the sounds it can make, you are at a disadvantage when trying to emulate it with a synthesizer. This article will explain many of the techniques used by real players, with an eye toward imitating what they do with synthesizers. What follows is gleaned from the perspective of someone who plays regularly in orchestras and hears real instruments daily, and who also sequences with an aim toward achieving realism.
RULE #1: NOTE ARTICULATION
Real instruments are rarely played by starting a note at full intensity and then holding that level throughout the note's entire duration. Yet this is exactly what you get with a typical sample-playback synthesizer. Moreover, with a sampling synthesizer every note begins with the same fixed attack. Even though most modern synthesizers respond to note-on velocity - how hard you hit the keyboard's key - to control the volume of the note, the sample's tone does not change in the same way as a real instrument. A trumpet's tone quality can vary enormously - from a velvety smooth whisper at soft volumes to a harsh "blat" when overblown. These days, all but the least expensive synthesizers and samplers will adjust their playback brightness in response to velocity. And some can even switch between different samples so notes played softly trigger a sample of a real instrument playing softly, while harder hits trigger a different sample of the same instrument playing loudly. But even with these features, you still can't come close to the enormous variation in tone quality possible with, for example, a real violin or clarinet.
Real instruments are often played with an initial accent at the start of each note, and the amount of accent can vary a lot. There is even a standard way to notate this: Fp (FortePiano) means play loudly for a brief moment and then suddenly lower the volume; SFz (Sforzando) is similar but implies an even bolder attack. These effects yield an initial burst that makes the note sound clearly and stand out in the live "mix" without droning on and masking the other instruments. Even when these markings are not used, classical musicians accent the start of many notes at least a little to give a clearer enunciation. Other times a skilled player will start a note at a low level that swells over time, depending on the requirements of the music. Perhaps most important of all, real players never articulate every note in exactly the same way.
Unfortunately, most of the samples I've heard sound as if the musicians were instructed to bow or blow suddenly, with no change in level during the duration of the note. Likewise, legato notes played on real instruments tend to trail off gently rather than end abruptly. If you look at most classical music scores, you'll see that they are full of "hairpin" continuous dynamics markings. It is tedious to add such dynamics later using MIDI Volume Control messages (controller #7) - and harder still to do it live with a foot pedal or other controller. But with practice and patience it can be done effectively. Some sequencers, such as Passport Designs Master Tracks Pro, let you draw and edit controller data using a mouse after a track has been recorded.
Although you can add dynamics using MIDI Volume messages, understand that the timbre of the notes won't also change as with a real instrument. Many synthesizers are savvy enough to vary their timbre (or switch samples) in response to different note-on velocities, but most won't change the timbre in response to MIDI Volume changes unless you specifically define a patch to do that. In this case you tell the synthesizer to vary the cut-off frequency of the low-pass filter based on changes in the MIDI Volume Controller, with louder volume settings corresponding to a higher cut-off point. You might think that such an obvious tie between volume and timbre would be the default on all synthesizer patches, but sadly this is not usually the case.
When the volume doesn't need to change over the course of a single held note or chord, you can simply use note-on velocity. One trick is to shape your phrases: If a musical line is going up in pitch, increase the velocity gradually and make the top note quite a bit louder than those that precede it. And for a phrase that goes up and then comes down, the top note should still get the added emphasis. For a piano run consisting of eighth or sixteenth notes, you could vary the note-on velocity between, say, 40 for the soft notes and 110 or even more for the loudest ones.
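As a rough sketch of this phrase-shaping idea (the function name and the linear ramp are mine, not a feature of any particular sequencer), you could compute a velocity for each note of a run like this:

```python
def shape_velocities(note_count, low=40, high=110):
    """Return a note-on velocity for each note of an ascending run,
    ramping from `low` up to `high` so the top note gets the emphasis.
    The 40-110 defaults follow the range suggested in the text."""
    if note_count == 1:
        return [high]
    return [round(low + (high - low) * i / (note_count - 1))
            for i in range(note_count)]

velocities = shape_velocities(8)
print(velocities)  # first note soft, last note loudest
```

For a phrase that rises and then falls, you would apply the ramp up to the top note and a mirror of it coming back down; a curve other than a straight line may sound more musical, so treat this as a starting point.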
VIBRATO AND PITCH EFFECTS
Just as they change tone quality and volume levels, real players vary both the amount and frequency of vibrato while a note is being held. Vibrato should never be static and phony-sounding, as it is when produced by simply pushing up the Modulation wheel on a synthesizer. Natural vibrato often swells in depth, and sometimes in speed too, during the course of the note. You can emulate natural vibrato easily by using the Pitch Bend wheel. Although it can be difficult to rock the Pitch Bend wheel smoothly across its center detent, you can instead limit yourself to the range just flat or sharp of center. Thus, you would slide the pitch down a bit, and then add vibrato to the subsequent flattened note. Many sequencers let you add a constant value to all Pitch Bend commands afterwards, thereby adjusting the vibrato center up or down to compensate. However, this is not necessarily needed. Some string players use vibrato that goes flat only. And guitar vibrato can, of course, only go sharp (unless you are using a whammy bar, or stretch the string up to the note being played).
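To make the flat-only idea concrete, here is a sketch that generates Pitch Bend events (center value 8192, per the MIDI spec) for vibrato that stays at or below center and fades in over the first half of the note. The function name, event spacing, tempo, and depth values are all illustrative assumptions, not from any particular synth or sequencer:

```python
import math

def flat_vibrato(note_ticks, rate_hz=5.5, ppq=480, bpm=100, max_depth=400):
    """Pitch Bend events (tick, value) for vibrato that never rises
    above the 8192 center, swelling in depth like a string player's."""
    events = []
    for tick in range(0, note_ticks, 10):      # one event every 10 ticks
        seconds = tick * 60.0 / (bpm * ppq)
        # Fade the vibrato depth in over the first half of the note
        depth = max_depth * min(1.0, 2.0 * tick / note_ticks)
        # (sin - 1) / 2 maps the wave into [-1, 0]: flat of center only
        offset = depth * (math.sin(2 * math.pi * rate_hz * seconds) - 1) / 2
        events.append((tick, 8192 + round(offset)))
    return events
```

Because every value sits at or below 8192, the note's perceived center pitch is slightly flat; that is exactly the case where a sequencer's "add a constant to all Pitch Bend values" feature can recenter it if you want.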
Even if you prefer to use the Modulation wheel for vibrato, don't just turn it up and leave it; at least vary the intensity during the course of the note's duration. A common technique used by real players is to fade in the vibrato depth slowly a few moments after the note's initial attack. You should also experiment with changing the patch's Modulation rate. Many factory presets use an LFO speed that is either unnaturally fast or slow.
Even if your sequencer doesn't allow editing controller data after it has been recorded, you can still record a track straight and add the Mod Wheel or Pitch Bend vibrato afterward. The trick is to use a separate track that is set to the same output channel as the main instrument track, and record only the controller signals on that track. If you make a mistake you can do it again, but without having to risk losing the notes that you are already happy with. Once you are satisfied, you can optionally merge the modulation data onto the main (notes) track to free up the extra track.
The Pitch Bend wheel is great for adding slides and other expressive effects to a fretless bass sound, but you don't always want a fretless bass. To emulate slides played on a fretted bass you can still play the Pitch Bend wheel with the same feeling, but then keep only the bend signals that lie on the note boundaries - edit out all of the in-between bend values. For even more realism, add a very short and soft new note at that exact point to emulate the fret clicking of a real fretted bass. Of course, for slides farther than two frets you'll have to modify the bass patch's Pitch Bend range.
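Editing out the in-between bend values amounts to quantizing the bend curve to semitone steps. The following sketch assumes the patch's bend range has already been set to a known number of semitones (the function name and the simple linear mapping are mine):

```python
def snap_to_frets(bends, semitone_range=12):
    """Quantize raw Pitch Bend events (tick, value) so a slide lands
    only on semitone (fret) boundaries. 8192 is center; semitone_range
    must match the bend range programmed into the bass patch."""
    step = 8192 / semitone_range        # bend units per semitone
    snapped = []
    for tick, value in bends:
        fret = round((value - 8192) / step)
        # Clamp to the legal 14-bit Pitch Bend range (0-16383)
        snapped.append((tick, min(16383, 8192 + round(fret * step))))
    return snapped
```

After snapping, you would still delete consecutive duplicate values so the synth receives one bend message per "fret," with the optional short, soft note added at each transition for the fret click.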
Scales and slides on instruments in the violin family are somewhat more difficult to emulate. Many passages are played using a mix of finger-down note changes and the inevitable slides that occur during position shifts. It is the slides that are so difficult to program. When a string player slides from one note to another, the slide is not an obvious glissando. Rather, the bow's pressure and speed are reduced dramatically for that moment, making the slide just barely audible. But it is still there and still an important part of the instrument sound. You can simulate this by using the Pitch Bend wheel to effect the actual note transition, and then reduce the MIDI volume substantially just for the duration of the bending. Again, this effect will be more realistic if the string patch's low-pass filter setting is linked to the Volume controller, such that lowering the overall volume also reduces the brightness.
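The bend-plus-volume-dip trick can be sketched as a pair of event lists: a Pitch Bend ramp for the slide itself, and a MIDI Volume (controller 7) dip that lasts exactly as long as the bend. The function name and the default values (a one-semitone slide assuming the common two-semitone bend range) are hypothetical:

```python
def string_slide(start_tick, duration, bend_from=8192, bend_to=12288,
                 full_volume=100, slide_volume=40):
    """Return (bend_events, volume_events) for a position-shift slide:
    ramp the Pitch Bend across the slide while CC 7 is pulled down,
    then restore full volume the moment the slide ends."""
    steps = 8
    bends = []
    for i in range(steps + 1):
        tick = start_tick + duration * i // steps
        bends.append((tick, bend_from + (bend_to - bend_from) * i // steps))
    volumes = [(start_tick, slide_volume),
               (start_tick + duration, full_volume)]
    return bends, volumes
```

If the patch ties its low-pass filter to the Volume controller as described above, the dip darkens the slide as well as quieting it, which is much closer to what a reduced bow sounds like.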
Use dynamics on your drum sequences! Many drum parts have a hi-hat pattern that relies on continuous 1/16th or 1/8th notes, and you can always tell they have been sequenced if all of the hits are exactly the same intensity. For the most realism you should make the first hit of each group of four have a much greater velocity than the remaining three. For example, you can use MIDI velocity values of 100-30-30-30. You could also goose the third note just a bit: 100-30-60-30. For 1/8th note hi-hat patterns you would raise the main beat but not the "and" hit. Or vice versa. But never play all of the notes with the same velocity. And never create one or two measures of drums or percussion and copy that throughout the rest of the song - at least not for the final product. Even if you program a simple repeating pattern initially, once more instruments are recorded and the tune is starting to take shape, you should go back and redo those tracks, playing them anew all the way through. And concentrate on playing those parts with feeling - ignore any wrong notes because you can easily fix those later.
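The accent patterns above are easy to generate programmatically; here is a minimal sketch (the function name is mine) that cycles one of those four-hit velocity groups over a run of hi-hat notes:

```python
import itertools

def hat_velocities(hit_count, pattern=(100, 30, 60, 30)):
    """Cycle an accent pattern over a run of 1/16-note hi-hat hits.
    (100, 30, 60, 30) accents the downbeat and lightly goosses the
    third hit, as suggested in the text; change the tuple to taste."""
    cycle = itertools.cycle(pattern)
    return [next(cycle) for _ in range(hit_count)]

print(hat_velocities(8))  # → [100, 30, 60, 30, 100, 30, 60, 30]
```

Keep in mind this is only a starting grid - for a final product you would still play the part through by hand, as the paragraph above recommends.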
The same technique applies to non-percussion parts such as piano runs. Accenting the first note of each group of four (or three for triplets) is how real musicians naturally play. If you must enter a part in step time, do it from the keyboard - not with a mouse. At least that way you can vary the velocity of each note, which always sounds more natural than going back later and adding contrived dynamics. Although there is no substitute for good keyboard technique, it is better to stab at a solo at full or nearly-full speed with feeling and a sense of dynamics and edit the notes later, rather than try to add dynamics later to a "perfect" performance you step-entered. Likewise, avoid playing all notes with exactly the same duration. For added realism, you can accent the shorter notes slightly; this also increases their average volume, making the entire phrase sound more consistent.
EMULATING AN ENSEMBLE
One of the more common ensembles in both modern and classical music is the horn section. The most effective way to synthesize multiple horns is to use different patches for each voice. If you use the same trumpet patch for a two- or three-note chord, it is more likely to sound like a car horn than a musical instrument. Even if you have only one sound source, surely it has a trumpet patch and also a trombone patch, or perhaps both baritone and tenor saxophone sounds. Using different samples is especially important if you're playing a unison part with more than one instrument; this avoids the inevitable phasing effects that result when a sound is combined with a slightly delayed version of itself. You could even use a tuba patch played an octave or two higher than normal as a dark sounding second or third trumpet or trombone. Of course, most synths also have section patches, which are ideal for playing unison lines.
There is nothing wrong with quantizing, but often you don't need very much to make a piece sound tight. The slight variations in where notes start and how long they are held are what make music sound human. The real problem is when an ensemble is not in sync with itself. Therefore, it is much more important to adjust the section performances to be together than it is for them to be strictly on the beat. But never make the start times exactly the same. For section horns and woodwinds that are already quantized too precisely, use your sequencer's Randomize Start Time feature to stagger the start of each note in the group by a tiny amount. The result is a section that does not sound like a car horn or an accordion.
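If your sequencer lacks a Randomize Start Time feature, the operation itself is trivial; this sketch (function name mine) jitters each start tick by a few clocks:

```python
import random

def humanize_starts(note_starts, max_jitter=3, seed=None):
    """Offset each note-start tick by a small random amount, like a
    sequencer's Randomize Start Time feature. max_jitter is in clock
    ticks; keep it tiny, or the section sounds sloppy rather than human."""
    rng = random.Random(seed)   # seed given only for repeatable results
    return [max(0, t + rng.randint(-max_jitter, max_jitter))
            for t in note_starts]
```

Apply it independently to each instrument in the section, so the parts stay together overall but never attack in perfect unison.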
Nothing sounds more phony than having the tail of one bass note overlap the start of the next note. Therefore, you should inspect your sequenced bass tracks carefully, truncating any overlapping notes to make the track cleaner. Likewise, never allow the notes in a solo horn line to overlap. For legato passages, make sure each note stops one or two clock ticks before the next one begins. However, if you're playing a harmonica part, you can often make it more realistic by letting the notes overlap because that is how harmonicas really work as the player navigates from note to note. It is also okay to overlap legato notes in a string section slightly; sometimes that makes the line more cohesive.
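The cleanup rule for bass and solo horn lines - each note ends a tick or two before the next begins - can be sketched like this (the function name and tuple layout are mine):

```python
def truncate_overlaps(notes, gap=2):
    """Given (start, end) tick pairs in time order, trim each note so
    it ends at least `gap` ticks before the next note begins."""
    fixed = []
    for i, (start, end) in enumerate(notes):
        if i + 1 < len(notes):
            next_start = notes[i + 1][0]
            end = min(end, next_start - gap)
        fixed.append((start, end))
    return fixed
```

For harmonica or string-section parts, where the text says a little overlap is fine or even desirable, you would simply skip this pass.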
Always thin your controller data. For example, Master Tracks lets you remove continuous controller data (Pitch Bend, Aftertouch, Mod Wheel, and so forth) that is within a specified number of clocks or within a range of adjacent data values. It is unlikely you'll ever need all the data your controller has put out. This is not actually a realism tip, though having your synthesizer stutter under the burden of too much incoming data hardly sounds realistic. You should also inspect your tracks for very short or very soft notes that were played accidentally, and delete them.
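One simple thinning approach along the lines Master Tracks offers - though this exact rule is my own sketch, not that program's algorithm - is to drop any controller event that is both close in time and close in value to the last event you kept:

```python
def thin_controllers(events, min_ticks=5, min_delta=2):
    """Thin continuous-controller events (tick, value), keeping an
    event only if it is far enough in time OR different enough in
    value from the last kept event."""
    if not events:
        return []
    kept = [events[0]]
    for tick, value in events[1:]:
        last_tick, last_value = kept[-1]
        if tick - last_tick >= min_ticks or abs(value - last_value) >= min_delta:
            kept.append((tick, value))
    return kept
```

The thresholds are a trade-off: too aggressive and you hear zipper-like steps in the bends and fades, too gentle and the data stream stays bloated.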
If you are sketching parts that will ultimately be played live from printed music, define a Bb trumpet, a Bb clarinet, an F horn, and so forth. For trumpets, saxophones, and clarinets, do this by transposing the patch down a whole step so it sounds a Bb when a C note is played. This way you can write and print the parts in the correct transposed keys. Defining transposed instruments also helps when going in the other direction - when you are sequencing parts from an existing score - because that lets you play the parts in the proper key as they are written.
Finally, use care when selecting the actual notes to be played. For example, you will get a more open and thus more realistic effect by playing woodwind harmony note pairs in sixths instead of thirds. You can also double one section with another (for instance, woodwinds and strings) by playing the different instruments in either unisons or octaves. Again, be sure to use different patches for each part in a unison line to avoid the phased hollowness that otherwise results from the tiny delays between note starts.
In this article I have presented a number of practical techniques you can use to make your sequences sound as close as possible to real instruments. Even if you could own the perfect synthesizer - which doesn't yet exist - you must still understand how the various real instruments are played in order to emulate them effectively. I also urge you to attend live orchestra concerts, and rehearsals too, because you are more likely to hear one section play at a time as the conductor works through a passage. Many towns have a local community orchestra that practices regularly, and you can usually get permission to watch and listen.
Ethan Winer is the principal cellist in the Danbury (Connecticut) Community Orchestra, and he's been building and playing synthesizers since the 1960s.
SIDEBAR: Beyond Sample Playback - Yamaha's VL70-m Physical Modeling Synthesizer.
Yamaha set the audio world on its ear a few years ago when it introduced the VL1 physical modeling synthesizer. Where most digital synthesizers play back pre-recorded samples of real instruments, the VL1 uses high-horsepower computer hardware to calculate the sound of acoustic instruments. Yamaha's newest physical modeling synthesizer is the VL70-m, and it costs but a small fraction of the earlier VL1 model. I purchased one of the first VL70-m units available, and I've put it to good use in several of my tunes. But although the VL70-m comes the closest (of the synthesizers I've heard) to emulating the sound of real instruments, it is not perfect. Further, it is more difficult to operate and requires more "practice" than a sample-playback synthesizer, precisely because it lets you control so many aspects of the generated sound. To get the most out of this unit you really need a breath controller, which affords more precise control over the volume and timbre parameters than a slider or foot pedal could.
The VL70-m really shines over most sampling synthesizers because the instrument volume level is so tightly integrated with the resulting tone. When you play a note and then reduce or swell the volume using the breath controller, the tone quality changes simultaneously in a very natural manner. The saxophone patches traverse smoothly from a lush, soft tone rich in breathy air to the harsh squeak you get from over-blowing the real thing. And trumpet gliss effects don't have the phony sound characteristic of most sampling synthesizers when using the Pitch Bend wheel. Rather, you can hear each overtone partial as it "jumps" from mode to mode just like a real trumpet. And because the sounds are generated in hardware rather than played back from static samples, you can create many strange and interesting effects that have no counterpart in nature, and even emulate analog synthesizers.
SIDEBAR: When Samples Just Don't Cut it, Sync Live Audio.
No matter how good your sampled tracks are, experienced listeners can always tell they are synthesized. When only the real thing will do, a good solution is to combine live recording with MIDI sequencing. I try to sequence as many instruments as practical - drums, bass, keyboards - and then add live recorded guitar and saxophone solos, and section cellos and violins. Even for section horns, one real sax or trumpet slightly louder in the mix playing alongside two sampled tracks sounds significantly more real than three synthesizers. Likewise, a real hi-hat or a snare played with brushes can be added to sequenced drums to make the overall sound more realistic.
Most high-end sequencers can sync their playback to a tape or hard disk recorder via SMPTE. In the past I synced my ADAT to Master Tracks with SMPTE by first recording the sync tones on one tape track, and then setting Master Tracks to follow the tones on that track during playback. (Sync boxes are also available to connect most digital multitrack recorders with a computer directly, without having to waste a track.) I recently replaced my ADAT with a new Pentium computer and IQS's SAW Plus hard disk recording program, but the principle is the same. In fact, you can even sync a sequencer to a 4-track cassette deck, though that leaves you with only three audio tracks for recording. A complete discussion of syncing live audio to sequencers would require an entire article of its own, but I'll relate a few relevant tips here.
1. It always takes several seconds for the sequencer to start playing back when it is following SMPTE sync tones. Even if your live audio tracks don't play at the start of the piece, you should still insert a few empty measures at the beginning of your sequences. Otherwise, you won't be able to play the final mix from the beginning!
2. To avoid SMPTE startup delays entirely - assuming you have an extra audio track - record a mono cue mix onto one track. This avoids having to run the sequencer at all while recording the overdubs, thus speeding up the process considerably.
3. When recording the same physical instrument several times to create a unison section, move the mike slightly for each take. For example, no two violins sound exactly the same, so using different mike placements on the same violin helps to imitate that difference. This is similar to using different horn samples for a unison line described earlier, and as with samples it also helps to minimize phasing effects. Similarly, if you have more than one "good" microphone, use a different one for each take.
4. When a tune requires real strings, horns, or other instruments you don't play, you can often get amateur but capable musicians from a local community orchestra who will work for little or no money, or perhaps in exchange for recording a demo or audition tape. Unless the parts are very difficult, good amateurs can often play as well as the top-dollar professionals. As long as they can at least play in tune, they will surely sound better than a bunch of synthesizers!
Entire contents of this web site Copyright © 1997- by Ethan Winer. All rights reserved.