Guitarrismos

Month: May, 2014

The Delay Spectrum

Hi! This is Rafa Monteiro, from Montpellier, France. This is an assignment for week 5 of “Introduction To Music Production” at Coursera.org. Today, I would like to talk to you about the delay spectrum.

All delay effects work on the recorded signal following the very same principle: they create copies of the original signal and layer them over it, again and again, with a delay between them.

Older delay gear would literally make copies of the signal on recorded tape.

These effects are important because they allow us to recreate the sensation of the space in which the sound was recorded, or in which it was intended to sound. The ambience of a small room, a big theater or even an open field can be recreated by proper use of delay effects.

You might ask: “well then, why don’t we just capture the sound of the space in the first place and make things simple?”

The reason is related to the notion of the good take: it’s impossible to change or fix a distorted or poor take, and the same applies to ambience. A take with recorded ambience will be stuck with that sound, and it’s hard to manipulate it without making it sound unnatural. If that’s the sound you are after, fine. If not, it might be best to record with the least ambience possible and apply it later with delay effects.

There is a trick, however: depending on the delay time, the acoustic result changes drastically. They may sound like completely different effects. In fact, many devices on the market have different names, better suited to express the acoustic result of the applied effect: Choruses, Envelope Filters, Phasers, Flangers and so on. In essence, however, all they do is work on the signal by manipulating delayed copies of it.

Know what you are using

Delays can be separated into three categories. All of them can be fixed delays, with the periodic repetition set to a fixed time value, or variable delays, in which the delay time is modulated. All three can be produced by the same basic operation (see the sketch after the list below).

  •  Short delays (usually <10 ms): a copied signal delayed by (usually) under 10 milliseconds won’t be perceived as a delayed sound. This is a very interesting phenomenon, in which the repeated sound won’t sound like an echo.
    Instead, these short delays influence the sensation of pitch, due to phase cancellation between the two signals. The acoustic result is called “comb filtering”, named after the comb-like shape the signal’s frequency response takes on, with all those peaks and notches from the phase canceling of the harmonics.
    Phasers, Flangers and Choruses are effects based on these short delays: a fixed short delay gives a static comb filter, while these effects modulate the delay time, which is what creates their characteristic movement.
  • Medium delays (usually between 10 and 50 ms): as the delay time increases, the acoustic result stops being perceived as a difference in pitch and timbre and starts being perceived as the sound of ambience.
    Do you know that bathroom echo, the one people love to sing along with while taking a bath? It’s made of those medium delays.
    Delays that fall into this category (they are usually called Reverbs) emulate the acoustic reflection of sound off different surfaces and in different spaces, since the sound of a space depends on these short reflections. They manipulate the number and intensity of those short delays to create the sound of small rooms, big concert halls, churches, stadiums, etc.
  • Long delays (usually over 50 ms): these delays stop being perceived as ambience and start being perceived as discrete repetitions of a sound. They work like the “mountain echo”, in which a given sound is perceived more than once, without changing timbre (like the short delays) or ambience (like the medium ones).
    Usually, plugins and equipment on the market labeled “Delay” – analog delay, digital delay, and so forth – refer specifically to this category of delay. But we know that delay is much more than a simple echo, and we shouldn’t be fooled by the name.
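
To make this concrete, here is a minimal sketch of a single-tap delay. Python with NumPy is just my choice of illustration, not something from the course, and the `delay` helper is a hypothetical name. The operation is identical for all three categories; only the delay time changes:

```python
import numpy as np

def delay(signal, sr, delay_ms, mix=0.5):
    """Mix `signal` with a copy of itself delayed by `delay_ms` milliseconds."""
    n = int(sr * delay_ms / 1000)                       # delay time in samples
    delayed = np.concatenate([np.zeros(n), signal])[:len(signal)]
    return signal + mix * delayed

sr = 44100                                              # samples per second
t = np.linspace(0, 2, 2 * sr, endpoint=False)
track = np.sin(2 * np.pi * 440 * t)                     # stand-in for a recorded track

comb = delay(track, sr, 1)     # < 10 ms  -> comb filtering (phaser/flanger territory)
room = delay(track, sr, 30)    # 10-50 ms -> sensation of ambience (reverb territory)
echo = delay(track, sr, 300)   # > 50 ms  -> discrete repetition (the classic "delay")
```

A real flanger or chorus would also sweep `delay_ms` over time; this fixed version is just the simplest possible case.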

It’s very important to understand how these three categories of delay work, since they are very different manifestations of the very same effect. Even a simple delay plugin, with a small adjustment in settings, can turn from a reverb into a flanger or phaser. It’s a dramatic change that will surely be problematic if we do not understand what is going on.

On the other hand, good knowledge of delay will allow us to get more out of our equipment, and can even save us from purchasing unnecessary gear or overloading our DAWs with plugins, since many delays can be configured to produce different effects.

Tips for chord inversions – Diminished seventh chords

This information is already (or at least should be) old news to anyone with experience in music. However, beginners who don’t have the habit of playing jazz or Brazilian music may benefit from it.

Diminished chords occur quite frequently in harmonically more complex musical styles. In rock it’s a rather uncommon kind of chord, but it shows up a lot in jazz, in blues and in a bunch of Brazilian styles.

Diminished seventh chord on the seventh degree resolving to the tonic chord

In Portuguese, the difference between a diminished triad and a “tétrade diminuta” (the four-note diminished chord) is the presence of the diminished seventh degree. In English, however, the term “diminished chord” refers exclusively to the diminished triad (R, b3, b5), while the term “diminished seventh chord” refers to the four-note chord (R, b3, b5, bb7).

In 99.99% of cases, this chord will have a dominant function, usually resolving to the tonic minor chord. Exactly like the example above.

The diminished seventh chord has an interesting property: since it is built from three stacked minor thirds, all of its intervals are symmetrical. This means that any inversion of this chord generates another diminished chord exactly like it.

If we invert a C diminished seventh chord, we get an Eb, a Gb or a Bbb diminished seventh (the last one, respelled enharmonically, is an A diminished chord). All of these chords sound very similar, and can be substituted for one another without any problem.
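
A quick way to check this symmetry is to reduce the chords to pitch classes, numbering the twelve notes 0–11 with C = 0. This little sketch is just my own illustration, not standard notation:

```python
def dim7(root):
    """Pitch-class set of a diminished seventh chord: three stacked minor thirds."""
    return sorted((root + 3 * i) % 12 for i in range(4))

c_dim7 = dim7(0)                 # C dim7 -> [0, 3, 6, 9] = C, Eb, Gb, Bbb (= A)
print(c_dim7)

# Building a dim7 on any note of the chord yields the exact same pitch-class set,
# which is why all four "inversions" are interchangeable:
for note in c_dim7:
    assert dim7(note) == c_dim7
```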

On the guitar, making this substitution is extremely easy: just play the same chord shape three frets up or down the neck. There’s no way to get it wrong.

Some shapes for the C diminished seventh chord. These chords can be played three frets above or below the indicated position without losing the sonority or the function

Distortion

Hi! This is Rafa Monteiro, from Montpellier, France. This is an assignment for week 4 of “Introduction To Music Production” at Coursera.org. Today, I would like to talk to you about a subject that I, being a guitarist, love: distortion!

Distortion can be either a curse or a blessing (usually it’s the former). A distorted record or mix can be spoiled beyond fixing, meaning that not only will it sound very bad, but the work will have to be redone from scratch. On top of being very frustrating, it is also time- and energy-consuming – and if you happen to be renting equipment or even a studio, it can be very expensive too.

What is distortion?

Put simply, distortion is a change in the timbre and loudness of the sound, caused by changes in the electric or digital signal when the gain is pushed over your gear’s limits of operation.

Every piece of equipment, from the microphone to the DAW and reference monitors, is built to operate with a certain level of signal without changing the sound*. Within those limits, the signal will be recorded and won’t be distorted.

The problem arises when we raise the level of the signal beyond the threshold of the gear.

That’s what happens when the signal is raised above the threshold.

The signal will be cut flat above the threshold point (clipped) and more partials will be added to it. This changes the timbre of the sound and increases the sensation of perceived loudness (even with the signal maxed out at the clipping point).
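
As a rough illustration, here is what hard clipping looks like in code. This is a sketch of my own, assuming a signal normalized to the -1 to +1 range:

```python
import numpy as np

sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
signal = 1.5 * np.sin(2 * np.pi * 440 * t)   # a 440 Hz sine pushed 50% past the limit

clipped = np.clip(signal, -1.0, 1.0)         # everything beyond the threshold is cut flat

# The flattened wave tops are what add the new partials (odd harmonics, for a
# symmetric clip like this one), changing the timbre exactly as described above.
```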

Again: once the wave is clipped, it can’t be fixed by artificial means.

Avoiding distortion

Distortion can happen at two stages: during recording, and while editing and mixing.

To avoid distortion while recording, we must set the preamp gain in a way that allows the musicians to play freely and also lets us record with a healthy gain without hitting the red line, always paying attention to the LEDs and graphic indicators. Hitting that sweet spot can be very tricky, and it’s not uncommon to record the same piece of music a couple of times just to adjust the gain levels.

Once the take is recorded, we avoid distortion by paying attention to how we manipulate the recorded signal, never raising it above 0 dB FS. FS stands for Full Scale, and it’s the threshold for the digital signal in a DAW.

Putting it simply: do not cross the red line. Green is low, yellow is good, and red is bad.

How to use distortion in a good way

Distortion, however, can be a powerful tool for expression.

The distorted sound is usually brighter, heavier and louder than its natural counterpart. On top of that, the distorted sound can be manipulated and equalized, generating different timbres. In fact, there are many devices designed specifically to create particular distorted sonorities.

Each device produces a different kind of distortion

Distortion can be used in a creative and expressive way. In fact, many musicians (mostly guitarists) do this all the time, recording instruments with their sound pushed beyond the threshold of distortion.

Check out when Eric Johnson turns on his Tube Screamer distortion at 1:35 to create a contrast between “clean” and distorted sounds.

The Foo Fighters record their guitars with a lot of distortion, always in a creative and musical way.

Distortion is a powerful tool for expression, but it should never be left unchecked or used without control or purpose.

*In fact, all equipment affects the recorded sound. Since there are no perfect electronic components, everything can (and will) change the recorded sound into something slightly different. Usually, the alteration is almost imperceptible, unless you are using a poor piece of gear. Also, the less the equipment interferes with the sound (which means higher fidelity), the more expensive it gets.

William Leavitt’s lesson on the use of the bass register on the guitar

“I define the real bass (sounding) range as any note lower in pitch from C, 5th string (3rd fret) or C, 6th string (8th fret).”

“All (fundamental or 2nd inversion) forms[…] that employ the 6th string (and therefore sound in part in the real bass register) have the root (first) or the fifth chordal degrees sounding in the bottom. These are the ‘strongest’ chord tones and always sound right.”

“Any chord voiced with the 3rd degree in the bass has a weak chordal sound, and should be used only when leaping to a new inversion of the same chord, or as a passing chord to produce scalewise or chromatic bass motion.”

“Chord voicings with the 7th degree in the bass have very weak chordal sounds. These forms (like those with the 3rd in the bass) may be used for inversion leaps or as passing chords, but their use must be well justified – such as in a strong descending bass line – or they will sound ‘wrong'”.

– William Leavitt, A Modern Method for Guitar

– x –

Professor William Leavitt calls attention to the “detail” of the instrument’s low register. When playing chords, we need to take some care if we play the bass (for laypeople and beginners: the lowest note of the chord) within this region.

For him, the low register is anything below the C at the third fret of the fifth string, or at the eighth fret of the sixth string (for laypeople and beginners: it’s the same note). Although the real extent of the register may not be quite that rigid, the point is that 90% of the time, when we play something with the open 5th string or almost any note on the sixth string, we are in the low register.
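
For the skeptical, a tiny sketch (my own illustration, assuming standard tuning and MIDI note numbers) confirms that those two positions really are the same C:

```python
# Open-string pitches in standard tuning, as MIDI note numbers:
# 6th string E = 40, 5th string A = 45. Each fret adds one semitone.
OPEN = {6: 40, 5: 45}

def note(string, fret):
    return OPEN[string] + fret

assert note(5, 3) == note(6, 8) == 48   # both positions are the same C (MIDI 48)
```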

And it requires special care.

Basically, chords in first and third inversion (with the chord’s third or seventh in the bass, respectively) played with the bass in the low register tend to sound muddled and/or outside their harmonic function.

There are basically three situations that mitigate this:

– A very clear (diatonic or chromatic) melodic line in the bass, which makes those harmonically unstable notes part of a more fluid passage;
– Another inversion of the same chord attacked on a strong beat, making the bass leap to another note of the harmony;
– Enharmonic respelling, for diminished, augmented and half-diminished chords that turn into minor sixth chords (and vice versa – it happens a lot in the dominant function).

Obviously, these bass movements beg, for the love of God, for a resolution.

According to Leavitt, chords with the root or the fifth in the low register always work. Even so, I would be a little careful about putting the fifth in the bass: it isn’t exactly the strongest degree of the chord. In triads, the fifth in the bass is relatively more common. But once we start playing seventh chords with tensions, the fifth is the first note to be dropped to make room for other notes.

If the bass of the chord is played in a higher region, in theory any inversion can be used. I would still be careful, in any case.

In a band, the guitarist has the advantage of being able to delegate the bass notes to another, lower-pitched instrument (almost always the bass, the piano or the tuba) without playing notes in this register. However, that’s not an option when playing alone, or when the instrument doing the accompaniment (and, by extension, the bass lines) is an acoustic or electric guitar.

Categories of effects in audio production

Hi! This is Rafa Monteiro, from Montpellier, France. This is an assignment for week 3 of “Introduction To Music Production” at Coursera.org. Today, I would like to talk to you about the categories of effects in audio production.

Last week we discussed a bit about the properties of sound: the mechanical waves that travel through the air, the earth, the water and all solid objects. In fact, sound travels through everything, with vacuum being the only exception. Do you remember the properties of sound? They were briefly discussed throughout the course in many topics and videos. They are:

  • Frequency: it’s a value that represents the number of sound vibrations in a given period. We measure it in Hertz (Hz), which represents the number of times a given sound vibrates in a one-second interval. The higher the frequency, the higher the pitch of the sound. Humans can hear roughly between 20 Hz and 20,000 Hz, but sound can vibrate at frequencies beyond those values.
  • Amplitude: it’s a value that represents the intensity of the sound vibrations. Higher amplitude means louder sounds. It’s closely related to the dynamics of music, although they aren’t exactly the same thing. In audio production, we measure amplitude in decibels of sound pressure level (dB SPL). Note that the decibel (dB) is not a unit of loudness. In fact, it is a logarithmic unit that expresses the ratio between two values of a given physical quantity, usually power or intensity. In audio production, it measures logarithmic variations in the sound pressure of the vibrations, with 0 dB set at the threshold of human hearing (there’s a small sketch of this right after the list).
  • Spectrum: it represents the harmonic content of a given sound. Except for pure sine-wave tones, all sounds generate harmonics. Their frequencies are multiples of the fundamental frequency that generated them, with different arrangements of amplitudes for the different harmonics. Some sound sources will have louder even harmonics, for example, while others might have louder odd harmonics, or even some crazy combination of them. That spectrum (which can be seen in the spectrum analyzer of a DAW) is what gives a sound its timbre. In essence, it’s what makes a piano sound like a piano and not like a flute.
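
Since the decibel trips up a lot of people, here is a minimal sketch of that ratio at work. It’s my own illustration, using the standard dB SPL reference pressure of 20 micropascals (the threshold of hearing):

```python
import math

P_REF = 20e-6  # reference pressure in pascals (threshold of human hearing)

def db_spl(pressure_pa):
    """Sound pressure level in dB SPL for a given RMS pressure in pascals."""
    return 20 * math.log10(pressure_pa / P_REF)

print(db_spl(20e-6))   #   0.0 dB SPL -> threshold of hearing
print(db_spl(0.02))    #  60.0 dB SPL -> roughly conversational speech
print(db_spl(2.0))     # 100.0 dB SPL -> very loud
# Each tenfold increase in pressure adds only 20 dB: the scale is logarithmic.
```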

These properties of sound can be (and are) manipulated during audio production, creating many different effects of timbre, ambience and loudness that can (and should) enhance the artistic expression of music. They are manipulated by a series of different hardware and software devices, which we will call plugins. They got this name from the past, when all effects were produced by big electronic devices that were plugged into the mixing board. We can still see them being used in professional studios, but nowadays it’s much more common to see software running in DAWs doing the very same work. All you have to do is add a plugin to a channel in the DAW.

Garage Band plugin

The number of effects available is practically infinite, but they can all be grouped into just three* categories. They are:

  • Dynamic effects: these effects manipulate the signal in order to change the amplitude of the sound, altering the dynamics – hence the name. Compressors, limiters, expanders and gates fall into this category.
  • Delay Effects: these effects work on the propagation of sound. They act in two ways: manipulating the intensity of the sound (just like the dynamic effects), but also creating repetitions of the sound in order to give the sensation of ambience. Reverbs, delays, phasers and flangers fall into this category.
  • Filter Effects: these effects work on the frequencies of the sound. They can manipulate the fundamental frequencies as well as the harmonics to alter the pitch and timbre of the sound. Pass filters, equalizers (graphic or parametric), envelope filters and pitch shifters fall into this category.

It’s important to understand these categories of effects because it’s only possible to work with them by understanding their logic and how they act on the properties of sound. Again, there are countless different plugins available on the market, in hardware, in software, or a mix of the two. They range from the simplest compressor with a button or two to very complicated effects that mix two or all three categories in a single unit capable of creating very crazy sounds. All in different sizes, prices and qualities.

Many, many buttons! O.O

They might appear menacing and confusing, with a lot of buttons and knobs, but they all have their logic. Everything becomes clear once one understands it.

*I like to think that distortions and boosts fall into a “fourth” category, even though they work like other dynamic effects, affecting the amplitude of the sound through manipulation of the audio signal. Their electronic designs and circuits are very similar, but they have a totally different purpose. Boosts are usually meant to add color and warmth to the signal (pretty much like the preamp does) and distortions are used to change the timbre of an instrument. They aren’t used the way a compressor or gate would be used.

The Analog to Digital conversion process.

Hi! This is Rafa Monteiro, from Montpellier, France. This is an assignment for week 2 of “Introduction To Music Production” at Coursera.org. Today, I would like to talk to you about the conversion of an analog signal into digital information.

Sound is a physical phenomenon. When sound happens (and it happens all the time), mechanical waves travel through the air, ground and water in a continuous way, with the intensity of the sound becoming smaller the farther it gets from its source.

When the sound hits a microphone, or an electric pickup on a bass or guitar, what it does is create an electrical signal that is analogous to the sound in the room. Since it’s analog, it’s also a continuous signal, with an amplitude that relates to the intensity of the sound, a frequency relating to the number of sound vibrations, and an intensity that decreases over time, just as the sound does.

Microphone capsule

A computer, unfortunately, can’t understand sound or electric signals in their natural form. This happens because computers don’t understand continuous information. They only work with binary data, composed of huge sequences of information written only in zeros and ones – that’s why it’s called binary information.

All data processed by a computer is written in big chains of zeros and ones: all programs, all internet content, this text, all the videos and music played on a computer. The recording process is no different.

In order to work with the audio signal in a DAW, the signal must be transformed into binary data.

Analog to Digital, Digital to Analog

This conversion is made in the audio interface, in a device called an “AD/DA” converter, which stands for “analog to digital / digital to analog” conversion. This electronic component reads the audio signal and writes binary words that represent the signal. Those big words, full of data, can be understood by the DAW.

This conversion is made through a quantization of the signal: at regular intervals, the device captures discrete values from the signal to build the digital data.

This data is built on bits. A bit is a piece of memory that can represent either zero or one. A word of one bit can only represent two values: zero and one. A word of two bits can represent more values: 00, 01, 10, 11. Three bits can represent even more: 000, 001, 010, and so forth.

Since sound has a lot of information and nuances that need to be registered in a high-quality recording, the digital words used to store this data are very big. Home CD systems use 16-bit words just to store the audio information, and studio standards usually call for 24-bit words. This is called the resolution (or bit depth) of the recording, and it’s related to the amplitude of the recorded sounds.
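
A word of n bits can represent 2^n different values, which is why those word sizes matter. A quick sketch of my own to make the numbers concrete:

```python
for bits in (1, 2, 3, 16, 24):
    levels = 2 ** bits
    print(f"{bits:>2} bits -> {levels:,} discrete amplitude levels")

# 16 bits -> 65,536 levels (CD audio); 24 bits -> 16,777,216 levels (studio).
# More levels mean finer amplitude steps and a lower quantization noise floor.
```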

Frequency is registered in another way: by the number of samples taken during the conversion of the audio signal into digital data. Every second, the AD converter takes a fixed number of snapshots (samples) of the signal.

The higher the sampling rate, the closer to the original signal

As you can see in the graph above, good audio quality demands a higher sampling rate. In order to record all the audible frequencies (between 20 Hz and 20 kHz, remember?), it’s necessary to use a very high sampling rate – at least twice the highest frequency you want to capture. For home CDs, it’s a 44.1 kHz sampling rate, and for professional studios it’s usually 48 kHz or even 96 kHz.
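
That “twice the highest frequency” rule is the Nyquist theorem; a line per rate (my own sketch) makes the numbers concrete:

```python
for sr in (44_100, 48_000, 96_000):
    print(f"{sr:,} Hz sampling rate -> captures frequencies up to {sr / 2:,.0f} Hz")

# 44,100 / 2 = 22,050 Hz, comfortably above the 20,000 Hz limit of human hearing.
```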

That also means that the equipment you are using will be burdened when processing all this data. Even small recordings, made of two or three instruments, can generate insanely large amounts of data. Usually, home computers won’t be able to handle all this processing smoothly, and that’s why the purchase of a dedicated DAW setup should be considered and studied.

Last, but not least, the quality of the AD/DA converters differs from one audio interface to another. Better-quality AD/DAs usually mean more expensive equipment, but also better-quality digital audio. As I explained in my previous article, the quality of the sound is usually set by the weakest link in the chain of equipment, and that’s another topic to consider when purchasing an audio interface.

Types and Usage of Important Studio Cables

Hi! This is Rafa Monteiro, from Montpellier, France. This is an assignment for week 1 of “Introduction To Music Production” at Coursera.org. Today, I would like to talk to you about cables.

A professor once taught me an important lesson regarding the importance of cables. It happened during a recording workshop at college, in which the students would play and record material to mix later. It was one of my first experiences in recording music.

She told me that “you will have the sound of your weakest link”, meaning that the lowest-quality piece of gear I had would have the biggest impact on the quality (or loss of quality) of my recording.

Usually, that piece is the cable, the most overlooked piece of equipment. Not only is it usually the most numerous piece of gear in use (you will need at least one cable for each device you need to connect), but cables are also usually the most worn and battered part of the setup. All that plugging and unplugging (not to mention other forms of damage, like people stepping on them) takes its toll.

Many people spend a lot of money on expensive DAWs and audio interfaces and forget about the cables. Purchasing the cheapest ones might not be a good idea, since there’s a higher chance they won’t sound good. Money would be better spent on an overall average set than on expensive gear with poor-quality cables that might compromise the signal flow.

That said, it’s also very important to treat the cables with the same care as the other pieces of equipment. They deserve it.

There are many kinds of cables available on the market (just google “professional audio cable” and check it out), but only a few are really important, at least for those of us taking our first steps in the world of audio recording.

They are:

XLR – this is the cable mostly used to plug in microphones, which is why they are also called “mic cables”. It is a balanced cable, meaning that it has two wires in which the signal is sent, plus a metallic braid for the ground. This allows the cable to resist interference and noise, allowing its use over longer lengths.

TS or ¼-inch cable – this is the classic instrument cable. Guitars, basses and keyboards are usually plugged in with this one. It is unbalanced (it has a positive wire and the ground braid only), making it vulnerable to interference and noise. Preferably, it’s best not to use anything longer than 3 meters (about 10 feet). The shorter, the better.

There are two kinds of these cables: mono (the regular TS ones, with only one black ring on the plug) and stereo (with two black rings near the tip – strictly speaking, a TRS plug). The first is used most of the time, since the signal produced by instruments is usually mono, but there are a few situations in which you will need a stereo cable.

Mini TRS cable – it looks like a smaller version of the TS cable, thinner and with smaller plugs. It has the same plug used in home earphones and earbuds. It works exactly the same way as the stereo cable above, and it’s mostly used to plug home devices into the DAW setup.

RCA – remember those old cables that used to connect your TV to a VCR or DVD player? Well, they are still used in studios for the very same purpose – plugging home appliances into the interface.

MIDI – some MIDI equipment (especially older gear) uses this kind of cable, with 5 pins inside a jacket. It’s used to send MIDI (Musical Instrument Digital Interface) information back and forth between a controller and a DAW or synthesizer.

It’s still worth keeping one, but more and more they are being replaced by…

USB/FireWire cables – these are used mostly to connect the audio interface to the computer running the DAW. They transmit digital data instead of an audio signal. But more and more, they are being used for other purposes, like USB being used to connect keyboards or pedalboards to the DAW.

For a small home studio, in order to start making simple recordings of voice and instruments, a pair of good-quality XLR cables and a couple of not-so-long TS cables will do.

Audio interfaces usually come with their own cable, so you won’t need to purchase one just to plug the interface into the computer. But since USB appliances are very common nowadays (and not that expensive), having an extra pair of USB cables won’t hurt.

If you are a guitar or bass player, I would also suggest considering the purchase of a good-quality direct box to help with the connections. They aren’t expensive and will be useful in a lot of situations, like monitoring the recorded audio.

Last, but not least, I can’t ask you enough to treat your cables with care. If you do, they will last for many years.

How reading has been helping my ear training, and vice versa.

One thing that has been helping me a lot in my ear training studies, incredible as it may seem, is… reading!

More and more, I’m convinced that the process of perceiving is nothing more than the process of recognizing something.

Yes, it sounds like an obvious, silly statement, one of those so obvious and silly that we don’t even notice them. Like that little story about the fish that David Foster Wallace tells (you can google it), in which they are incapable of seeing the water they live in.

We don’t recognize anything brand new. If it’s brand new, we are introduced to it and come to know it (ok, there’s a giant philosophical debate about innate knowledge vs. learning that I don’t intend to start now, but you get the idea).

By definition, and by the etymology of the word, to recognize (re-cognize) presupposes knowing again something that was already known before.

We only recognize what we have already noticed and felt a billion times, what has nothing new about it in our lives. We may not have access to the causes, but we feel their effects. And just as a scientist creates means of observing the cause (through microscopes, sensors, etc.) in order to observe something he KNOWS is already there, we make an effort to try to hear clearly what is happening.

That’s where reading comes in.

Through music reading, we are introduced to musical ideas in their most explicit and intentional form. We may not know the intention of the author of the piece, but we will have no doubt about what he objectively told us.

And the more we expose ourselves to this kind of information, the more the language gets into us (or is it we who get into the language?). More and more, what we read and hear becomes familiar.

At that point, perceiving is nothing more than a reversal of the process: looking at what we already know.