I wrote this rant way back when I was in high school, at the age of 17. I've kept it here as a souvenir. In those days we did not have WordPress, just forums and ICQ.
What do I need to make (computer) music? No really..
What did Beethoven need to compose his Ninth? Nothing. Just his head and his deaf ear plus a lot of genius and guts. Valid till the end of time.
The latest pop band – all the audio gizmos that money can buy. Valid till next month's Billboard top 50.
But what do they have in common? Beethoven needed an orchestra for the real-world performance, like every composer of his era. So what exactly comprises an orchestra? It's a sound source, really. The document where composers compose is like the sequencer for the orchestra. In this document, called the sheet music, the composer specifies what combination of notes should sound, for what instrument(s), at what time. He specifies the base properties of the tune, like the time signature and the key signature. He also specifies the composition's structure; a concerto is different from a solo or a quartet, and each style has its own format which might be adhered to. Dynamic markings and tempo markings indicate a particular kind of expression: a staccato indicates short-sounding notes with no release or sustain, while a fermata tells the player to hold a note for a subjective length of time within the borders of the time signature.
Tempo is specified as an Italian word like andante, allegro or presto. Other markings like slurs, tuplets, broken chords/arpeggios, grace notes or appoggiaturas indicate the subtleties of the performance.
The orchestra does its best to reproduce what the composer had in mind. Finally, the conductor conducts the performance, choosing the number of instruments in each section: strings, percussion, brass and woodwinds, plus other instruments if required, like the piano, harp, guitar or mandolin, and a voice (soprano, alto, tenor, bass) or a choir if needed. The final sound in fact depends a lot on how the conductor marshals his resources to maximum effect and realizes his vision.
The acoustics of the concert hall are also a big factor in such performances.
Today's production is essentially the same; only the requirements have evolved, and hence the tools have changed as well.
In the software world we are still emulating a virtual orchestra. The sound palette has expanded dramatically in the meantime, owing to the harnessing of electricity and the corresponding innovations in sound technology. Sound environments are simulated. No matter what we use as a sound source, the familiar tripod of pitch, rhythm and timbre is as valid today as it was 400 years ago. Of course, the artistic limits of these are challenged from time to time, giving rise to newer forms of musical expression, but it is these rules that have to be played with. Digital algorithms give the composer an unlimited source of sounds, some good, some weird, but all of them usable musically in the proper context.
Our MIDI-enabled sequencer is our sheet music. In fact, many programs provide a direct translation of MIDI and sequencer data to sheet music and vice versa, so you can work in whichever mode your need or taste dictates. Many advantages are evident. Early composers did not have the interactive sheet music we have now; you can route a set of notes through any instrument to hear how it sounds. What they had was awesome skill and an excellent musical vocabulary to surpass those limitations: they could literally hear the music in their heads. Now we really don't have any excuse not to aim for the same ideal, even with all these benefits. The tool never makes the music; you do. Don't forget that. Getting more software does not mean you are going anywhere in your quest for musical mastery, especially after the incubatory learning stage. That's just a good excuse for not applying yourself. It's also called gear craze, or writer's block.
The sound sources or generators have to be told what to play and how to play it; the implementation details are encapsulated. You don't have to build a trumpet to use it in your music, but you need to know how it sounds and what it can do in various registers and playing methods like muting or tonguing. You have to be intimate with the capabilities of the instrument. Likewise, you should know what each knob on a sampler does to the resulting sound, and how it affects the other parameters in tandem.
Finally, you have to mix and master it for general media so that it can be consumed in the right setting. TV audio is handled differently from DVD audio or radio.
You need 3 things :
1. sound sources
2. a way to trigger those sources and control when and how they are triggered on a timeline.
3. a way to combine the sounds in an efficient manner for final output.
Synthesizer Technologies – Additive, FM, Wavetable, Subtractive, Granular, Physical Modelling
Most synths use one of these, or some combination.
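To make the additive idea concrete, here is a minimal sketch in plain Python (not tied to any particular synth; the sample rate and partial amplitudes are arbitrary choices of mine) that builds a tone by summing sine partials:

```python
import math

SAMPLE_RATE = 44100  # samples per second, the CD-audio standard

def additive_tone(freq, partial_amps, duration=0.5):
    """Sum sine partials at integer multiples of the fundamental frequency."""
    n_samples = int(SAMPLE_RATE * duration)
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE
        # Partial k sits at k times the fundamental, scaled by its amplitude.
        samples.append(sum(amp * math.sin(2 * math.pi * freq * k * t)
                           for k, amp in enumerate(partial_amps, start=1)))
    return samples

# A 220 Hz tone: strong fundamental, weaker 2nd and 3rd harmonics.
tone = additive_tone(220.0, [1.0, 0.5, 0.25])
```

Change the amplitude list and you change the timbre, which is the whole trick of additive synthesis.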
Samplers are sound players, recorders and audio manipulators built into one package. They come in various forms: standalone players like piano samplers and drum libraries, dedicated drum samplers, or separate modules within a host.
Samplers are very useful. They map the sound's pitch across the keyboard, or any other MIDI-enabled hardware or software, and allow you to “play” the sound as an instrument. Velocity zones, keymaps and so on are just terms for grouping the sound samples within the playable range. Envelopes give fine control over how the sound plays, for how long, and at what volume.
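The pitch mapping itself is simple arithmetic: each semitone away from the sample's root key changes the playback rate by a factor of the twelfth root of two. A sketch (the function name and root-key convention are mine, not any sampler's API):

```python
def playback_rate(midi_note, root_note=60):
    """Resampling ratio needed to pitch a sample recorded at root_note.

    Each semitone is a factor of 2**(1/12); 12 semitones doubles the rate.
    """
    return 2.0 ** ((midi_note - root_note) / 12.0)

print(playback_rate(72))  # one octave above the root: 2.0
print(playback_rate(60))  # the root key itself: 1.0
```

This is also why samples stretched too far from their root key start to sound unnatural: the rate change shifts every formant along with the pitch.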
Most samplers and synthesizers have sound shapers in the form of filters. Lowpass filters are the most common; others include highpass, bandpass and notch filters. As the names suggest, the filters remove specific frequency ranges. A filter's slope determines how strongly it attenuates beyond the cutoff frequency, i.e. how much the volume drops for every octave past the cutoff. A resonance knob dictates the amount of self-resonance at the cutoff frequency.
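The simplest digital lowpass is a one-pole smoother, which rolls off at roughly 6 dB per octave above the cutoff. A sketch, assuming a standard 44.1 kHz sample rate (the coefficient formula is the usual RC-prototype approximation):

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """One-pole lowpass: a gentle 6 dB-per-octave rolloff above the cutoff."""
    # Smoothing coefficient derived from the analog RC prototype.
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)  # each output moves a fraction `a` toward the input
        out.append(y)
    return out

# Feed it a unit step: the output eases toward 1.0 instead of jumping.
smoothed = one_pole_lowpass([1.0] * 100, cutoff_hz=1000.0)
```

Steeper slopes (12 or 24 dB per octave) are built by cascading such stages, and resonance requires adding feedback around them.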
Modulating a particular property over time is done using an LFO, or Low Frequency Oscillator, which runs at very low rates, typically below 20 Hz. It's very useful for automatic panning, filter movements or modulating other parameters in the device.
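An auto-panner is the classic example: a slow sine wave steers the pan position. A minimal sketch (the -1/+1 pan convention is just my choice here):

```python
import math

def autopan_positions(rate_hz, sample_rate=44100, duration=1.0):
    """LFO sweeping pan position from -1 (hard left) to +1 (hard right)."""
    n = int(sample_rate * duration)
    return [math.sin(2 * math.pi * rate_hz * i / sample_rate)
            for i in range(n)]

pan = autopan_positions(rate_hz=2.0)  # 2 Hz: two full sweeps per second
```

Point the same control signal at a filter's cutoff instead of the pan knob and you get an automatic filter sweep; that interchangeability is what makes LFOs so useful.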
Sequencers enable you to work with different sounds, arrange notes on a timeline, and build the song structure, as well as communicate with other audio equipment. Most software studios have a built-in sequencer.
A proper mixing environment is the key to a good sonic product. Most provide mechanisms for fx along with bussing and sends.
Soundcards and monitors are of course essential, but I won't go into hardware-catalog mode just yet.
Once you have the above, all you need is musical knowledge to actually make music, and even that is not so important beyond the absolute basics. Perception of sound is so complex that no one has figured out how we perceive vibrational information in such fine detail and isolate individual elements. Even the physics of sound is mind-bogglingly complex: trying to emulate a guitar with all its subtleties and nuances, or a very accurate piano, is quite a difficult task and still a work in progress. The most we have as a body of music theory is a set of conventions, an evolution of ideas, compromises and implementations of observed phenomena shaped within cultural and technical constraints, a diversity that lets us pinpoint a few things here and there. But beyond the established norms of 300 years of Western music, which is what I have described above, the cumulative body of knowledge is not anyone's invention, nor an absolute dogma that cannot be bent or even abandoned. Of course, to break the rules you need to know them.
Synthesizers + samplers + a sequencer + a mixer = all you need.
I will focus on the excellent tool Reason for any explanations.
All the sound sources in Reason are either a synthesizer or a sampler. Check out the instruments list and you'll get the idea. In fact, in any audio software, a sound source is either external to the system, a sampler or a synthesizer.
Subtractor – Subtractive Synth
Malström – Graintable (granular) Synth
Thor – Semi-modular Synth (multiple oscillator types, including FM)
Kong – Drum Designer (a drum sampler)
You get the idea?
Reason has an excellent sequencer, an integrated interface which lets you trigger sounds from the sources described above and arrange notes on a timeline in a piano-roll view. In addition, software technology enables convenience features like blocks, switching between song mode and edit mode on the same screen, zooming in and out, snap to grid, quantization, copy-paste editing, non-destructive editing and so on. These help the musician get as much control as possible from the software environment.
The mixers are the 14:2 and the Line Mixer, along with a bunch of stock fx further down the device list. Primarily, all fx involve some work in the time domain, the frequency domain, or on amplitude, i.e. the dynamic range of a sound source in the signal path.
Think of fx as tools, algorithms that manipulate a specific set of properties of the sound source; if the resulting effect is audibly useful, we package it as an fx module.
Compressors work on the dynamic range of the sound source. Simply put, a compressor reduces any level above a threshold by the set ratio. This shortens the dynamic range, resulting in tighter-sounding tracks.
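In decibel terms the rule is one line of arithmetic: above the threshold, the output rises only 1 dB for every `ratio` dB of input. A sketch of that static gain curve (attack and release smoothing, which real compressors add, are omitted):

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor curve: above the threshold, output rises only
    1 dB for every `ratio` dB of input."""
    if level_db <= threshold_db:
        return level_db  # below the threshold nothing happens
    return threshold_db + (level_db - threshold_db) / ratio

print(compress_db(-10.0))  # 10 dB over the threshold becomes 2.5 dB over: -17.5
print(compress_db(-30.0))  # below the threshold, unchanged: -30.0
```

A make-up gain stage then raises the whole (now smaller) range back up, which is why compressed tracks sound louder overall.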
Reverbs work in the time domain, using various internal mechanisms to simulate early reflections, late reflections and obstructions, resulting in diffusion of the source's vibrations. A classic design uses a series of comb filters to simulate the reflection pattern, with the output passed through allpass filters, which change the phase of the input while keeping everything else intact. This simple model is the basis for many complex reverb designs.
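The two building blocks are small enough to sketch directly. Here is a feedback comb filter and a Schroeder-style allpass in plain Python (buffer sizes and gains are arbitrary illustration values, not taken from any actual reverb unit):

```python
def comb(samples, delay, feedback):
    """Feedback comb: y[n] = x[n] + feedback * y[n - delay]."""
    buf = [0.0] * delay  # circular buffer of past outputs
    out = []
    for i, x in enumerate(samples):
        y = x + feedback * buf[i % delay]
        buf[i % delay] = y
        out.append(y)
    return out

def allpass(samples, delay, g):
    """Schroeder allpass: y[n] = -g*x[n] + x[n-delay] + g*y[n-delay].
    Smears phase while leaving the magnitude spectrum flat."""
    xbuf, ybuf = [0.0] * delay, [0.0] * delay
    out = []
    for i, x in enumerate(samples):
        j = i % delay
        y = -g * x + xbuf[j] + g * ybuf[j]
        xbuf[j], ybuf[j] = x, y
        out.append(y)
    return out

# An impulse through one comb: an echo every 23 samples, halving each time.
tail = comb([1.0] + [0.0] * 99, delay=23, feedback=0.5)
```

A Schroeder reverb runs several combs with different delay lengths in parallel, then a couple of allpasses in series to thicken the echoes into a wash.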
More recently, convolution with recorded impulse responses has been used to simulate real-world spaces.
Distortion adds harmonics to a sound source, typically via a waveshaper function that clips the waveform, giving a characteristic harsh sound. Second-order harmonics are known to lend analog grit to an otherwise colder digital sound.
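A common soft clipper is the tanh waveshaper. A sketch (the drive value is arbitrary; note that a symmetric curve like tanh adds mainly odd harmonics, so designs chasing that even-harmonic warmth make the curve slightly asymmetric):

```python
import math

def waveshape(samples, drive=4.0):
    """tanh soft clipper: flattens wave peaks smoothly, adding harmonics."""
    return [math.tanh(drive * x) for x in samples]

clipped = waveshape([0.0, 0.25, 1.0, -1.0])
```

The higher the drive, the closer the curve gets to hard clipping and the harsher the result.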
Flangers and choruses mix the signal with a short, modulated delay of itself, on the order of 1–10 ms for flanging and roughly 20–30 ms for chorus; the chorused copies are detuned as well.
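The core of both effects is a short delay line whose length is wobbled by an LFO. A rough flanger sketch (the delay range and LFO rate are illustrative values I picked, and the dry/wet mix is fixed at 50/50):

```python
import math

def flange(samples, sample_rate=44100, base_ms=2.0, depth_ms=1.5, lfo_hz=0.5):
    """Mix the input with a copy whose short delay is swept by a sine LFO."""
    max_delay = int(sample_rate * (base_ms + depth_ms) / 1000) + 2
    buf = [0.0] * max_delay  # circular buffer of recent input samples
    out = []
    for i, x in enumerate(samples):
        buf[i % max_delay] = x
        # LFO sweeps the delay between base_ms - depth_ms and base_ms + depth_ms.
        delay_ms = base_ms + depth_ms * math.sin(2 * math.pi * lfo_hz * i / sample_rate)
        d = int(sample_rate * delay_ms / 1000)
        out.append(0.5 * (x + buf[(i - d) % max_delay]))
    return out

swept = flange([1.0] + [0.0] * 199)
```

As the delay sweeps, the notches of the resulting comb-filter response sweep with it, which is the jet-engine whoosh flangers are known for. A chorus is the same structure with longer delays and a detuned wet signal.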
Phasers sweep a series of allpass stages, which carve a comb-like pattern of notches into the spectrum, producing a characteristic whooshy sound without the intensity of a flanger. Plotted, the frequency response looks like a comb, hence the name.
A delay simply delays the input sound and mixes it back, with feedback, to produce a characteristic audible pattern. Delay times are set in milliseconds (1000 ms = 1 second), which is handy for calculating BPM-synced delays. Additional delay lines called taps can be used to create a more involved sound.
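Syncing a delay to the song tempo is just a division. A sketch of the tempo math plus a bare-bones feedback delay (mix and feedback values are arbitrary defaults):

```python
def quarter_note_ms(bpm):
    """Milliseconds per quarter note: 60,000 ms per minute / beats per minute."""
    return 60000.0 / bpm

def feedback_delay(samples, delay_samples, feedback=0.5, mix=0.5):
    """Repeat the input every delay_samples, each echo scaled by feedback."""
    buf = [0.0] * delay_samples  # circular buffer holding the delay line
    out = []
    for i, x in enumerate(samples):
        delayed = buf[i % delay_samples]
        buf[i % delay_samples] = x + feedback * delayed  # feed echoes back in
        out.append((1 - mix) * x + mix * delayed)
    return out

print(quarter_note_ms(120))  # 500.0 ms per beat at 120 BPM
echoes = feedback_delay([1.0] + [0.0] * 11, delay_samples=4)
```

An eighth-note delay is half that time, a dotted eighth three quarters of it, and so on; multi-tap delays simply read the same buffer at several different offsets.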
Equalizers are frequency-specific volume controls; in effect they change the tone and the frequency content of the final output. In Reason they are parametric, with the familiar controls for Q, frequency and gain, along with the standard visualizations.
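A standard recipe for one parametric band is the peaking-EQ biquad from the widely used Audio EQ Cookbook formulas; here is a sketch that just computes the coefficients (running the filter is then the usual two-pole difference equation):

```python
import math

def peaking_eq_coeffs(freq_hz, gain_db, q, sample_rate=44100):
    """Peaking-EQ biquad coefficients (Audio EQ Cookbook style):
    boost or cut gain_db around freq_hz, bandwidth set by q."""
    A = 10.0 ** (gain_db / 40.0)              # amplitude from dB gain
    w0 = 2.0 * math.pi * freq_hz / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    # Normalise by a[0]; the filter then runs as
    #   y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    return [c / a[0] for c in b], [c / a[0] for c in a[1:]]

b, a = peaking_eq_coeffs(1000.0, gain_db=6.0, q=1.0)  # +6 dB bell at 1 kHz
```

Lower Q widens the bell, higher Q narrows it, exactly like the Q knob on a parametric EQ.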
So you take any sound source from the toolset, set the sounds in motion by triggering a pattern you like from the sequencer, and finally mix the results using the mixer and any fx in the sends or in the individual signal paths. The end result is your track. The technology has simply enabled you to conceptualize, integrate and produce music efficiently, giving you an environment with all the features of a top-dollar hardware studio, plus total recall for complete convenience.
Open any other music software at this point and you will begin to appreciate Reason's design decisions. All of them essentially provide the three things above, with varying degrees of success; the rest boils down to your preferences. Use all and then stick with one, or use one and incorporate many. Whatever your approach or tool, the provision is basically the same. This is the key to a successful session anywhere: the meat and bones become easily identifiable, so the work can start at the earliest, which is always more important than the tools (especially for the tech-phobic musicians).
So now you know what you need. Go make some music if you already know how to use the tools themselves. Otherwise, I'll show you how each feature of each tool affects the sound, along with other programming techniques I use for my own tunes. It's not at all difficult, and once you get used to it you'll begin to use it ever more creatively.
John von Neumann – “In mathematics you don't understand things. You just get used to them.”