some terms used in the mag i dont know

Feedback on the current issue, ideas for articles, questions about Tape Op

Moderators: TapeOpJohn, TapeOpLarry

generichumanperson
alignin' 24-trk
Posts: 55
Joined: Sat Aug 11, 2007 5:54 pm

some terms used in the mag i dont know

Post by generichumanperson » Sun Dec 16, 2007 3:59 pm

I have gotten a couple issues of the magazine so far, and there are some terms I'm not too familiar with, and I figured I'd ask on here rather than go through a million searches online. I'm admittedly not a professional and don't claim to be anything close to one. If you guys could help me out that'd be great. Here we go:
comb filtering (I just looked this up and it said a delayed duplication of the signal bounced back? is this accurate?)
time-align
phase-align
first-order reflections (does this mean early more noticeable reflections?)
polarity-reverse
phase-coherency
notch-filtering
'stems'
side-chaining
daisy-chaining
summing amp
summing bus
recording.....just kidding

any help would be great, even if you just have some example for one or two of them.

Scodiddly
genitals didn't survive the freeze
Posts: 3972
Joined: Wed Dec 10, 2003 6:38 am
Location: Mundelein, IL, USA

Post by Scodiddly » Sun Dec 16, 2007 4:39 pm

First few are time-related... to give a very simple example, if you had two microphones at different distances from your source (a guitar amp, perhaps), even only a few inches different, they would pick up the sound at two different times. What that results in is weirdness if the two are mixed together. If you moved the mics so that they were at the same distance, or simulated that by adding delay to the closer mic, then you've "time-aligned" the two signals. Some people refer to this as "phase-aligned", though phase is a rather slippery and tricky word to use properly. Having two signals that are not time-aligned can result in "comb filtering", where certain frequencies are cancelled out by phase differences. If you looked at this comb-filtering on a visual display you'd see a pattern that looks a bit like a hair comb had been dragged across the screen, killing off various frequencies.
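To put numbers on the comb-filtering idea, here's a tiny Python sketch (plain standard library; `combined_rms` is just an illustrative name): a sine mixed with a delayed copy of itself cancels at frequencies where the delay equals half a period, and reinforces where it equals a whole period.

```python
import math

def combined_rms(freq_hz, delay_s, dur_s=0.01, sr=96000):
    """RMS level of a sine mixed with a delayed copy of itself
    (two 'mics' at different distances from the same source)."""
    n = int(dur_s * sr)
    total = 0.0
    for i in range(n):
        t = i / sr
        direct = math.sin(2 * math.pi * freq_hz * t)
        delayed = math.sin(2 * math.pi * freq_hz * (t - delay_s))
        total += (direct + delayed) ** 2
    return math.sqrt(total / n)

delay = 0.0005  # 0.5 ms, roughly a 17 cm path difference between two mics

# Null where the delay is half a period: f = 1 / (2 * delay) = 1000 Hz
print(combined_rms(1000, delay))  # ~0.0 (cancellation: a notch of the comb)
# Peak where the delay is a whole period: f = 1 / delay = 2000 Hz
print(combined_rms(2000, delay))  # ~1.414 (reinforcement: a tooth of the comb)
```

Sweeping `freq_hz` and plotting the result would draw exactly the comb shape described above: evenly spaced notches at 1000 Hz, 3000 Hz, 5000 Hz, and so on.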

i am monster face
buyin' gear
Posts: 524
Joined: Mon Feb 16, 2004 7:17 pm
Location: Omaha

Post by i am monster face » Sun Dec 16, 2007 4:40 pm

Wow. Well...wow. That's a lot of stuff. It's good to see that you are interested and that you are chipping in here to figure stuff out.

What I would recommend is maybe a good book, a good google/board search, and more reading.

These things are kind of the "Intermediate" terminology of recording and I'm sure there are many people who would like to help you out with these. I will do what I can to help later.

Ian

Rodgre
carpal tunnel
Posts: 1744
Joined: Fri May 30, 2003 3:19 am
Location: Central MA

Re: some terms used in the mag i dont know

Post by Rodgre » Sun Dec 16, 2007 4:48 pm

comb filtering This is the effect when you combine two similar signals together, and the differences between them create dips and peaks across the frequency spectrum due to the two signals being similar, but not the same, thus creating frequencies at which their waveforms are in phase (peaks) and out of phase (dips). The result, if looked at as a frequency plot, would resemble a hair comb.

time-align and phase-align are, in some contexts, the same thing. When one sound source arrives at two differently-placed microphones at different times (because one is closer to the source than the other), they will create a comb filtering effect at the frequencies where their waveforms are in and out of phase. With visual DAW-based recording, you can actually look at the waveforms on the screen and see where they differ, and you can move the late waveform to line up with the close/early waveform to make them more in phase.
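The "move the late waveform to line up" step can be sketched in code. This is a toy brute-force cross-correlation (the names `best_lag`, `close_mic`, and `room_mic` are invented for the example), which is the same idea a DAW's nudge-to-align workflow relies on:

```python
import math

def best_lag(reference, delayed, max_lag=64):
    """Find the sample offset that best lines `delayed` up with `reference`,
    by brute-force cross-correlation over candidate lags."""
    best, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        score = sum(r * d for r, d in zip(reference, delayed[lag:]))
        if score > best_score:
            best, best_score = lag, score
    return best

close_mic = [math.sin(0.2 * i) for i in range(500)]
room_mic = [0.0] * 7 + close_mic  # the same waveform arriving 7 samples later

lag = best_lag(close_mic, room_mic)
print(lag)  # 7
aligned_room = room_mic[lag:]  # now lines up sample-for-sample with close_mic
```

Nudging the late track earlier by that many samples is the digital equivalent of moving the far mic closer.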

first-order reflections I'm not 100% sure of the context of this phrase, but I'm assuming it refers to hearing a reflected version of whatever signal you're listening to after it has bounced off just one surface between the source and you. For example, in a mixing environment, when your monitor speakers are on a shelf above the meter bridge of your mixing board, your ears will hear some reflections of the sound as it hits the top of the mixing board before it gets to your ears. Same thing for a mic on a source that is also picking up a reflection from the floor between the amp and the mic.

polarity-reverse is the act of flipping the polarity of one signal in relation to another similar signal, so what were positive-going voltages are now negative-going, and vice versa. Also called: flipping the phase, inverting phase.

phase-coherency is where two or more similar signals are compared (usually by ear) and each signal's phase is inverted or not in order to get all of the sources to be at the best sounding polarity in regard to the others. For example, on a drum kit you might have overheads, snare top, snare bottom, etc. You would audition each mic's signal in relation to the others and flip the phase of one signal back and forth to ensure that you have the best-sounding phase setting that leaves no voids in certain areas of the frequency range.

notch-filtering is a type of equalizer filtering that takes whatever frequency you're adjusting and attenuates that frequency, often with a really tight bandwidth. If you looked at the filter's response on a frequency graph, you would see a "V" shaped notch where you are cutting this frequency.
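That narrow "V" cut is commonly built as a biquad filter. Here's a minimal sketch using the well-known RBJ audio-EQ-cookbook notch coefficients (a simplified illustration, not production DSP code), cutting 60 Hz hum:

```python
import math

def notch_coeffs(f0, fs, q=30.0):
    """Biquad notch coefficients (RBJ audio-EQ-cookbook form).
    A high Q gives the narrow 'V'-shaped cut described above."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1.0, -2 * math.cos(w0), 1.0
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    return [c / a0 for c in (b0, b1, b2, a1, a2)]

def biquad(samples, coeffs):
    """Run a Direct Form I biquad over a list of samples."""
    b0, b1, b2, a1, a2 = coeffs
    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

fs = 48000
hum = [math.sin(2 * math.pi * 60 * i / fs) for i in range(fs)]  # 1 s of 60 Hz
cleaned = biquad(hum, notch_coeffs(60, fs))
# After the filter settles, the 60 Hz tone is almost entirely gone,
# while material away from 60 Hz would pass nearly untouched.
```

The `q` parameter sets how tight the bandwidth is: higher Q means a narrower notch, so less of the neighboring program material gets cut along with the offending frequency.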

'stems' refers to using a multi-track recording system to record your final mix in separate, but simultaneous subgroups so you can still have some wiggle room for adjusting balances in the mix later. For example, you're using a DAW system like Pro Tools. You have a typical 24-track mix of a rock tune going. You may have drums spread out over 8 or more tracks, guitars on 5 tracks, vocals on 3 tracks, etc. When you're ready to do a final mix of your song, you might also record "stems" of each subgroup separately. You'd bounce the drums as a stereo group. Then mute the drums and just record the guitars as a stereo sub. So on and so forth. This way you can go back a year from now and do an alternate mix with maybe just bringing the bass up or the vocals back, without having to totally recreate your entire mix.

side-chaining is typically referring to the use of dynamics processors like compressors and gates. Many of these devices allow you to send a signal different from the audio that you're trying to process straight into the control section of the circuit. For example, you want to tame the sibilance of a really squeaky acoustic guitar part. You love the sound, but those string squeaks are ruining your mix. You can send the guitar through a compressor, and enable the sidechain, so the compressor's control circuit is being controlled by whatever you send into its sidechain input. For "de-essing" like you would do for this acoustic guitar, you can send a split of the acoustic guitar's track through an EQ, and BOOST the offending squeak frequencies so the compressor reacts by attenuating the signal whenever there is a peak in those frequencies, but leaves the signal alone the rest of the time.
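As a rough numeric sketch of that control path (a toy, not any particular unit's algorithm; all names here are invented): the gain is computed from the sidechain signal but applied to the main signal.

```python
def sidechain_compress(signal, sidechain, threshold=0.5, ratio=4.0):
    """Toy compressor: level detection runs on `sidechain`
    (e.g. an EQ-boosted split of the track); gain is applied to `signal`."""
    out, env = [], 0.0
    for s, sc in zip(signal, sidechain):
        env = max(abs(sc), env * 0.999)  # crude peak-hold envelope follower
        if env > threshold:
            # above threshold: squeeze the overshoot down by the ratio
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out.append(s * gain)
    return out

guitar = [1.0] * 10
quiet_squeaks = sidechain_compress(guitar, [0.1] * 10)  # untouched: all 1.0
loud_squeaks = sidechain_compress(guitar, [1.0] * 10)   # ducked: all 0.625
```

Boosting the squeak band in the sidechain EQ makes the envelope jump only when the squeaks happen, so the gain reduction lands exactly there — that's the de-essing trick described above.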

daisy-chaining is simply referring to running several devices in series, one into the other.

summing amp and summing bus refer to the same thing. The summing amp/bus is the place where several signals are combined together, like in the output section of an analog mixer. These days, many engineers use a DAW-based recorder and don't have access to mixing on a full-scale analog mixing console. One way to get the best of both worlds, without having to have a huge and expensive mixing board full of features you don't need (like EQs, aux busses and mic preamps), is to use one of the many summing amps available now, which give you a number of inputs (say 16) that you can route to the left, right or center, and which electronically combine all the signals in the analog domain, which many would argue sounds more musical than "bouncing to disk" in a computer.
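Both the stem and summing-bus ideas above boil down to sample-by-sample addition. A small sketch (made-up mono "tracks" of three samples each, just to show the bookkeeping):

```python
def mix(tracks, gains):
    """A mono summing bus: add the tracks sample-by-sample with per-track gain."""
    return [sum(g * t[i] for t, g in zip(tracks, gains))
            for i in range(len(tracks[0]))]

drums = [0.5, -0.5, 0.5]
gtr = [0.2, 0.2, 0.2]
vox = [0.1, 0.0, 0.1]

full = mix([drums, gtr, vox], [1.0, 1.0, 1.0])  # the final mix, in one pass

# Bounce each subgroup separately (everything else muted) to make stems...
drum_stem = mix([drums], [1.0])
gtr_stem = mix([gtr], [1.0])
vox_stem = mix([vox], [1.0])

# ...and the stems at unity gain recombine to exactly the same mix:
assert mix([drum_stem, gtr_stem, vox_stem], [1.0, 1.0, 1.0]) == full

# A year later, "bring the vocals back" by lowering just that stem:
alt_mix = mix([drum_stem, gtr_stem, vox_stem], [1.0, 1.0, 0.7])
```

That recombination property is the whole point of stems: unity-gain playback reproduces the original mix, and nudging one stem's fader gives the "wiggle room" described above without recreating the session.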

Roger
(is it obvious that I'm stir-crazy from being snowed-in? :))
Last edited by Rodgre on Mon Dec 17, 2007 7:56 am, edited 1 time in total.

syrupcore
deaf.
Posts: 1793
Joined: Mon Mar 08, 2004 4:40 am
Location: Portland, Oregon

Post by syrupcore » Sun Dec 16, 2007 4:54 pm

rodgre ftw! I hope something nice happens to you today. perhaps some 10 year old boyscouts will come shovel you out.

i am monster face
buyin' gear
Posts: 524
Joined: Mon Feb 16, 2004 7:17 pm
Location: Omaha

Post by i am monster face » Sun Dec 16, 2007 7:14 pm

That was really nice of you to do, Rodgre.

Nice work.

generichumanperson
alignin' 24-trk
Posts: 55
Joined: Sat Aug 11, 2007 5:54 pm

Post by generichumanperson » Mon Dec 17, 2007 3:55 pm

wow, thanks a lot. That definitely did help, although I do have a couple questions. This phase stuff is hard for me to grasp. How would you be able to hear that mics are out of phase with each other, other than hearing delays or something? How would you be able to tell that there are voids in certain frequency ranges? Is this a skill that develops over time, or is there some way to read it on a computer? I'm assuming that good micing technique would usually mean no phase problems, right?
about side chaining - when you say split the acoustic guitar track, do you mean sending the left or right signal of a stereo track to the EQ? And would the EQ control the compressor by compressing those certain frequencies? The sidechain input - do you mean just the regular input on the compressor, or is there an input called a sidechain input? Or do you just call it a sidechain input when you are sending it something other than the signal being compressed?
Also, so stems basically means grouping instruments together, so you could have, say, one stem for guitars, another for vocals, etc.?
Thanks again, hopefully these questions aren't confusing!

A.David.MacKinnon
ears didn't survive the freeze
Posts: 3821
Joined: Wed May 07, 2003 5:57 am
Location: Toronto

Post by A.David.MacKinnon » Mon Dec 17, 2007 4:42 pm

Phase is something you will learn as you go. You can hear it if you know what to listen for. A good example would be something like -

You are recording an electric guitar with a mic an inch from the speaker and another mic a few feet away from the speaker. The guitar sounds great through both mics individually but sounds bad when both mics are heard together. This is a phase issue. The sound is hitting the mics at different times because they are at different distances from the speaker, resulting in comb filtering. Some frequencies are being cancelled out while others are being boosted.
There will always be phase differences when you use multiple mics on a source. It's an unavoidable law of nature but you can minimize the bad effects and use it to your advantage. This basically means moving mics around, listening to the results, repeat, repeat, repeat until it sounds good.
It's a hard concept to get your head around when you're starting out but it will become second nature after a while. The best advice I can give is keep moving mics around until it sounds good. Keep notes and draw pictures or take photos once you've got something you like. (I take digital photos and keep them in the same file as the Pro Tools session).

A stem is when you take a mix and break it down into multiple stereo mixes of each instrument (or whatever you'd like). This means muting everything except the drums and recording a stereo mix to make a drum stem, then muting everything but the bass and recording a stereo mix for a bass stem, and so on. The result is stereo stem mixes of each instrument that, when combined with all of their faders set to 0, will sound the same as your master stereo mix. This is most often done when you are mixing on an analog console without automation or recall ability. The advantage is that you can make general changes after the mix is finished without having to recall all of the board and effects settings. You can turn the guitar up by turning up the guitar stem or turn the vocal down by lowering the vocal stem.

I hope I explained that well.

TapeOpLarry
TapeOp Admin
TapeOp Admin
Posts: 1665
Joined: Thu May 01, 2003 11:50 am
Location: Portland, OR

Post by TapeOpLarry » Wed Dec 19, 2007 10:38 am

"when you say split the acoustic guitar track"

It's being "multed" in this case. The signal is sent to two places, either via a mult point on a patchbay, or a "Y" cable, or from a device with dual outputs.
Larry Crane, Editor/Founder Tape Op Magazine
please visit www.tapeop.com for contact information
(do not send private messages via this board!)
www.larry-crane.com

JGriffin
zen recordist
Posts: 6739
Joined: Thu Jul 31, 2003 1:44 pm
Location: criticizing globally, offending locally

Post by JGriffin » Wed Dec 19, 2007 11:11 am

TapeOpLarry wrote:"when you say split the acoustic guitar track"

It's being "multed" in this case. The signal is sent to two places, either via a mult point on a patchbay, or a "Y" cable, or from a device with dual outputs.
Or, alternately, the singer can't play a C7 to save his life and the bass player took a hatchet to the singer's Alvarez out of desperate frustration.
"Jeweller, you've failed. Jeweller."

"Lots of people are nostalgic for analog. I suspect they're people who never had to work with it." ? Brian Eno

All the DWLB music is at http://dwlb.bandcamp.com/

Electricide
dead but not forgotten
Posts: 2105
Joined: Thu Jul 17, 2003 11:04 am
Location: phoenix

Post by Electricide » Wed Dec 19, 2007 2:28 pm

Run a stereo signal, like a CD track, into your board or PC software, and flip the polarity of one side. That big sucking sound in the middle? That aggravating way the sound creeps around to the sides of your head? That's an extreme version of phase problems. So if your snare and kick sound awesome, then don't sound so awesome when you add the OHs, that might be the problem.
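The "big sucking sound in the middle" falls out of simple arithmetic. A three-sample toy example: material panned to the center is identical in both channels, so flipping one side's polarity makes it cancel completely when the channels are summed (e.g. folded to mono):

```python
left = [0.8, -0.8, 0.8]    # a centered vocal: identical in both channels
right = list(left)

right_flipped = [-s for s in right]  # polarity of one side reversed

# Sum to mono: the centered material cancels completely
mono = [(l + r) / 2 for l, r in zip(left, right_flipped)]
print(mono)  # [0.0, 0.0, 0.0]
```

Anything panned off-center differs between the channels and so survives partially, which is why on speakers the image seems to smear out to the sides rather than disappear outright.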

AstroDan
george martin
Posts: 1366
Joined: Wed May 07, 2003 12:07 pm
Location: Avoca, Arkansas

Post by AstroDan » Wed Dec 19, 2007 3:39 pm

Or on phasing...

Take a pair of speakers. Wire one speaker red to black, black to red. They would be out of phase.

Microphones are basically speakers in reverse, and wiring and placement can both cause phase/polarity issues.
"I have always tried to present myself as the type of person who enjoys watching dudes fight other dudes with iron claws."

generichumanperson
alignin' 24-trk
Posts: 55
Joined: Sat Aug 11, 2007 5:54 pm

Post by generichumanperson » Wed Dec 19, 2007 8:49 pm

junkshop wrote:Phase is something you will learn as you go. You can hear it if you know what to listen for. A good example would be something like -

You are recording an electric guitar with a mic an inch from the speaker and another mic a few feet away from the speaker. The guitar sounds great through both mics individually but sounds bad when both mics are heard together. This is a phase issue. The sound is hitting the mics at different times because they are at different distances from the speaker, resulting in comb filtering. Some frequencies are being cancelled out while others are being boosted.
There will always be phase differences when you use multiple mics on a source. It's an unavoidable law of nature but you can minimize the bad effects and use it to your advantage. This basically means moving mics around, listening to the results, repeat, repeat, repeat until it sounds good.
It's a hard concept to get your head around when you're starting out but it will become second nature after a while. The best advice I can give is keep moving mics around until it sounds good. Keep notes and draw pictures or take photos once you've got something you like. (I take digital photos and keep them in the same file as the Pro Tools session).

A stem is when you take a mix and break it down into multiple stereo mixes of each instrument (or whatever you'd like). This means muting everything except the drums and recording a stereo mix to make a drum stem, then muting everything but the bass and recording a stereo mix for a bass stem, and so on. The result is stereo stem mixes of each instrument that, when combined with all of their faders set to 0, will sound the same as your master stereo mix. This is most often done when you are mixing on an analog console without automation or recall ability. The advantage is that you can make general changes after the mix is finished without having to recall all of the board and effects settings. You can turn the guitar up by turning up the guitar stem or turn the vocal down by lowering the vocal stem.

I hope I explained that well.
you did explain it well, thanks. So making stems basically sounds like bouncing tracks. That photo idea is a really good one, thanks for it.

TapeOpLarry
TapeOp Admin
TapeOp Admin
Posts: 1665
Joined: Thu May 01, 2003 11:50 am
Location: Portland, OR

Post by TapeOpLarry » Thu Dec 20, 2007 4:08 pm

"Bouncing Track" is something that Digidesign unfortunately gave the world. In analog, you would bounce tracks together (reduction mix in the UK) and open up more tracks for recording. The Pro Tools version is kinda like that, but usually you're mixing, right? I'd rather mix just by running the track like with a deck if it ain't gonna do it fast and save me time...
Larry Crane, Editor/Founder Tape Op Magazine
please visit www.tapeop.com for contact information
(do not send private messages via this board!)
www.larry-crane.com

the finger genius
re-cappin' neve
Posts: 746
Joined: Wed Nov 15, 2006 1:32 pm

Post by the finger genius » Fri Dec 21, 2007 7:56 am

generichumanperson wrote:wow, thanks a lot. That definitely did help, although I do have a couple questions though, this phase stuff is hard for me to grasp, like how would you be able to hear that mics are out of phase with eachother, other than hearing delays or something, how would you be able to tell that there are voids in certain frequency ranges? Is this a skill that develops over time, or is there some way to read it on a computer?
I once had a professor (Dave Fridmann) mention that one of my recordings was severely out of phase, and when I asked him what out of phase sounded like, his response was: "It sounds like a vacuum cleaner sucking out your brains through one ear."

Dave had super-ears; it's rarely that obvious to me (unless something is near 180 degrees out). As for seeing it on a computer, you can at least do a quick check by zooming in real close on the waveform and making sure that your first peak and valley are aligned on all tracks. Again, this should only be used as a guide; it's best to actually listen. If it sounds good, there's no reason to start monkeying with time-aligning. Especially on distant room mics, the delay is usually part of the sound, so you really don't want to be messing around with this.
