19.11.07

Creative Computing Final


Download Time Reversal Symmetry.mp3[4.3Mb]

Download SuperCollider Patch.rtf

Electronic Instrument Design

This is the first stage in building the prototype design for my instrument.


29.8.07

AA - Wk 5 - Ideas for film

My film is taking the form of a documentary-style investigation into the ideology and structure of the band that I play guitar in, which I started with my best friend Rowan on Halloween 2006. At that time I had become increasingly interested in cosmology and similar fields of scientific interest, particularly quantum physics. After some investigation, I became aware of many similarities between quantum physics and my previous experience with meditation. The concept that interested me the most is that of entanglement. This idea profoundly affected me last year and completely changed my outlook on life, so much so that we decided to name our musical project Entanglement. This theory of quantum physics has become the ideology of our band, and is the reason that we are on the brink of a quantum leap in our development.

Following is my explanation of what entanglement is about. I plan to use this as a narrative over parts of the film, with an eerie, mysterious theme.

Entanglement is a key element in the field of quantum physics. Entanglement is the process whereby a vibrating or oscillating system can affect another system through simple proximity. The affected system is entrained to the source, causing it to vibrate sympathetically. A simple example: striking a tuning fork will cause another, closely situated fork to vibrate at the same frequency.

Some experiments with pendulum clocks have yielded remarkable results; a whole row of clocks with different periods, left overnight, will develop similar phase relationships. More advanced experiments have been conducted in the quantum domain, involving the entanglement of oscillating electrons. This phenomenon is observed to occur instantaneously, regardless of distance, suggesting some kind of unseen interconnectedness.

It may seem like scientific trivial pursuit; who cares if we can entangle two electrons? This has deep cosmological implications, however: after all, we are all made up of atoms, which in turn contain oscillating electrons. The technology to entangle matter on a mass scale does not exist in a conventional sense; however, the ability of music to deeply affect human physiology and the psyche is commonly acknowledged. When music is played at large concerts it is incredible to see the waves of people moving in synchronicity in response to the rhythm.

I would posit that there could be more information carried through music than mere notes, rhythm and harmony. Human physiology is structured from billions of particles, each with its own unique vibratory pattern which determines the nature of that particle, amalgamating to create a whole living structure working in synchronicity. Modern quantum physics has demonstrated the existence of entanglement; is it really that much of a stretch to believe in the existence of an intangible meaning in music, an intrinsic intelligence contained within its structure? How that message is communicated through simple vibratory patterns, or indeed what that message is, is unclear at this time.

28.8.07

CC Week 5

Download SuperC Patch

Download Sound File[1.8Mb]

22.8.07

AA-Team America

This week we examined the movie Team America as an example of good director/composer/sound designer collaboration. In a movie whose characters are all marionettes voiced by actors, the sound has to be well thought out, as it is the music which provides the realism needed to back the visuals.

There are 5 types of film music:

Theme Music
-This is the main thematic material which is repeated throughout the film
Mood Music
-Used to present emotional weight
Character Music
-Music which is associated with specific characters; think Darth Vader and the Imperial March
Environmental Music
-Music which is associated with specific locations
Action Music
-Mimics onscreen action; speed, motion, fighting etc.

CC Week 4

Here are my notes so far... keep getting errors... maybe one day... if I collide hard enough... I will collide with the best of them...

http://h1.ripway.com/danmurtagh/SuperCWeek4.rtf

13.8.07

MTF - Week 3

As you can clearly see, enthusiasm is just oozing from Johnny C and maybe even Freddy is showing us some glory here...


...And more to the point, it should be noted that our "forum" class is now taking place in the Engineering Department. No longer do we as musicians have to be burdened with the suffocating restrictions of "conventional" instruments; now we can truly express ourselves in pure sine-tone bliss or, if you prefer, sawtooth rapture.

Johnny C enters the fray with caution in mind.

And now... everything you've all been waiting for. The man behind the machine, Johnny C in full technicolor.

31.7.07

CC-Week1-BBCut

This week in CC we examined BBCut, the SuperCollider breakbeat-cutting library.

http://h1.ripway.com/danmurtagh/DAnweek1.rtf

http://h1.ripway.com/danmurtagh/Drums01.mp3

http://h1.ripway.com/danmurtagh/Drums02.mp3
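
BBCut does the cutting for you, but the underlying idea is simple enough to sketch in plain SuperCollider, without the BBCut classes themselves: every eighth note, play a randomly chosen slice of a drum loop. The file name, slice count and tempo here are all assumptions.

(
s.boot;

// hypothetical loop file sitting next to this document
b = Buffer.read(s, PathName.new(Document.current.path).pathOnly ++ "drumLoop.wav");

SynthDef("slicePlayer", {
arg bufnum = 0, start = 0, dur = 0.25;
var sig;
sig = PlayBuf.ar(1, bufnum, BufRateScale.kr(bufnum), startPos: start);
// short fades so the slices don't click; doneAction frees the synth
sig = sig * EnvGen.kr(Env.linen(0.005, dur - 0.01, 0.005), doneAction: 2);
Out.ar(0, sig ! 2);
}).send(s);
)

(
// treat the loop as 8 slices and fire one at random every eighth note
r = Routine({
var sliceDur = 0.25; // an eighth note at 120 bpm - an assumption
loop({
Synth("slicePlayer", [
\bufnum, b.bufnum,
\start, 8.rand * (b.numFrames / 8),
\dur, sliceDur
]);
sliceDur.wait;
});
}).play;
)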

30.7.07

AA Week 1 - Designing A Movie For Sound

Well you guessed it folks, 3rd year music tech is back, with a vengeance. Johnny C and myself will be spearheading this campaign of total 3rd year domination, possibly even obliteration. To help us on our quest is the venerable Luke Harrold, whose knowledge of the dark film arts is deep and vested. With his knowledge of films, and our ability to make noise, there is nothing that can stand in our way.

This week I read an article written by Randy Thom on the filmsound.org website.

"The biggest myth about composing and sound designing is that they are about creating great sounds. Not true, or at least not true enough."

Our good friend Randy opens the discussion with this bold statement. OK, fair enough, this guy has an opinion; read on. A few pages in and I start to get the feeling that Randy has a chip on his shoulder when it comes to the film industry and its apparent disregard for the contribution sound can make to a movie. Indeed he raises some good points about the need for sound to be considered as early as the preproduction stage. Randy argues, however, that in most cases sound isn’t even considered until the film reaches the post-production stage. I gained some insight into the use of ADR in film making, and how a bad performance at this point can have negative effects on the end product. Randy also raises some interesting ways to achieve certain moods or feelings through the consideration of sound even at the writing stage.

Whilst Randy raises some good points, I did get the feeling that Randy’s chip on the shoulder may have been more of a rant about how the film industry doesn’t care about the (film) sound industry. Quite a good portion of the paper was openly criticising directors and their lack of knowledge.

“Unfortunately, most directors have only the vaguest notions of how to use sound because they haven’t been taught it either.”

This sounds a bit broad, but if anything it at least means that as the director of my own film this semester I can make a film with sound in mind and fill it with as much atonal/noise/art/silence/weird stuff/music as I like.

4.6.07

AA Week 12 - Mastering

For this song I used a variety of plugins to maximise the waveform. I fed the signal through an insert on the master fader. I used an aural exciter to create low-end pulsing. The kick drum needed more substance, so I applied the plugin at maximum to gauge the effect. I soon became aware of a psychoacoustic phenomenon that I had read about in Stav's book Mixing with Your Mind and in the readings, referring to gravity and its effects on music. I found that at a certain point whilst adding the sub effect, the rhythm of the kick seemed to lag. I attributed this to the subs being too loud, creating a "heaviness" effect which seemed to drag the tempo. I eventually found a sweet spot where the effect really worked with the rhythm or "bounce" of the song.

I then implemented another insert containing Antares Tube, which is available as a free demo trial at this link:

http://www.antarestech.com/download/demoform.shtml

I found that applied discreetly, the effect really warmed up the sound of the piece and made it feel more "up the front" in the sound field. I then used multi-band compression and applied maximising techniques using limiting and dithering for the final master.
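
That chain was all plugins inside the DAW, but the maximising idea itself is simple enough to sketch in SuperCollider: compress, add make-up gain, then brick-wall limit. The threshold and ratio values here are assumptions, not the settings I used.

(
{
var sig;
sig = SoundIn.ar([0, 1]); // stereo input standing in for the mix
sig = Compander.ar(sig, sig,
thresh: 0.3, // assumed threshold
slopeBelow: 1.0,
slopeAbove: 0.5, // 2:1 compression above the threshold
clampTime: 0.01,
relaxTime: 0.1
);
Limiter.ar(sig * 2, 0.99, 0.01); // make-up gain, then limit the peaks
}.play;
)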

I think this week, I have brought a new lease on life to our dear friend Johnny C.

Mastered Final Version of Water Drop 0'34, 1411kbps, Wave, Download[5.84Mb]

21.5.07

AA Week 10 - Avalon Preamp

This week we used the Avalon preamps which have been freshly installed in Studio 1. I thought I would kill two birds with one stone for this week's exercise. I wanted to make a good start on my major project, and I had the idea to create a click track for the drummer of the band I am recording. I organised for the two guitarists to come into the studio and lay down their parts to a click, which I will then use as guide guitars for the drummer. The idea is to save the guitarists from having to come in and play along with the drummer in real time, as well as giving the drummer the exact material he will be playing with, to practice. I used the Avalon to warm up the sound. Here is an example of the recording. I could only export at 128kbps this week. Sorry!

Guitars - Click Track 2'46, 128kbps, Download[0.8Mb]

15.5.07

CC Week 9 - Recording

This exercise will involve the creation of a large synthesiser assembly containing the following components: (1) MIDI input and playback; (2) a sample playback SynthDef; (3) an effects SynthDef; (4) a modulator SynthDef; (5) the SynthDefs organised in a way so that the modulator can manipulate the sample player, which has its output routed through an effects SynthDef; (6) blog your patch and an MP3 result.

This week's task took some time to get my head around, and resulted in the collision of my brain with pain. That is to say, despite my efforts, help-file extrapolating, and note taking, I have been unable to create a patch that works. I incorporated last week's sample playback SynthDef, with corrections, into this week's exercise. My aim was to feed the drum samples into a reverb SynthDef, which can be modulated by another controller SynthDef. I have been able to create each of the components for this week's task, however I have not been able to implement them into a cohesive, working patch. For a brief time I was able to create a reverb effect on the drums without modulation; however, for some reason, without me changing any settings, I was unable to recreate it after a reboot.

I can say that I need help with my memory management; I am not sure how to structure a logical kill-off procedure for the multiple synths which are created from the reverb SynthDef. Also my modulator SynthDef is woefully incomplete. I also had difficulty with bussing signals; the process seems logical, however I am unable to produce results. Here is a downloadable copy of my patch as an rtf file; it contains all my formatting etc. Otherwise here it is below, with the formatting decimated by archaic HTML.

Download Patch[8Kb]





//Boot Server--------------------------------------------------------------->
s = Server.local;
s.boot;

//Define current file path------------------------------------------------->
~filePath = PathName.new(Document.current.path).pathOnly;

//Reverb SynthDef-------------------------------------------------------->
(
SynthDef("reverb", {
//args none

var in,
inBusA = 2,
outBusA = 0,
rev;

//Bus input

in = In.ar(
bus: inBusA,
numChannels: 2
);

//Reverb Generator
5.do({
in = AllpassN.ar(
in: in,
maxdelaytime: 0.3,
delaytime: [0.09.rand, 0.08.rand],
decaytime: 1.0
);
});

rev = in;

//Output

Out.ar(
bus: outBusA,
channelsArray: rev
);
}).send(s);
)

//Controller/Modulator SynthDef------------------------------------------>
(
SynthDef("controller", {

arg freq = 5,
amp = 5;

var sig,
outBusC = 5;
//Signal

sig = SinOsc.kr(
freq: freq,
mul: amp
);

//Output

Out.kr(
bus: outBusC,
channelsArray: sig
);
}).send(s)
)
//------------------------------->
// More Stuff entered here?

// Modulator and Envelope Infrastructure------------------------------>
//- Clearly this section needs work! The syntax below at least parses now
//  (In.kr was never closed; semicolons sat where commas should be),
//  but it is still not wired into the rest of the patch.
(
SynthDef("modEnv", {

arg inBusB = 5, mFreq = 5, mAmp = 0.5, cFreq = 440, cAmp = 0.5;

var inC, mod, car, env;

inC = In.kr(
bus: inBusB
);

mod = SinOsc.kr(
freq: mFreq,
mul: mAmp
);

car = SinOsc.ar(
freq: cFreq,
mul: cAmp * inC
);

env = Env.new(
levels: [0, 1, 0.3, 0.8, 0],
times: [2, 3, 1, 4],
curve: 'linear'
);

Out.ar(0, car * EnvGen.kr(env, doneAction: 2));
}).send(s);
)
//Initialise Instance

r = Synth("reverb");
g = Synth("controller");


//My Sample Playback SynthDef--------------------------------------------->

(
SynthDef("DethSynth", {

arg bufnum = 0,
pitch = 1.0; // pitch is now a real argument (the Synth call below passes it)

var signal;

signal = PlayBuf.ar(
numChannels: 1,
bufnum: bufnum,
rate: BufRateScale.kr(bufnum) * pitch // pitch scales the playback rate
);
Out.ar(
bus: 2,
channelsArray: signal
);
}).send(s);
)
//Loop load several samples-------------------------------------------------->
(
for(0,8, {

arg i;

Buffer.read(
server: s,
path: ~filePath++"drumSamp"++i++".wav",
bufnum: i
);
//Feedback
["bufID",i,"path",~filePathSamples++"drumSamp"++i++".wav"].postln
}
);
)

(
//Initialise MIDI------------------------------------------------------------------->
MIDIClient.init;
MIDIIn.connect;
MIDIIn.noteOn = {

arg uid,
chan,
noteNum,
noteVel
;

[noteNum.midicps, (noteVel.midicps / 100)].postln;

Synth("DethSynth", [
\bufnum, noteNum%8,
\pitch, 1.0
]);
}
)

// Setup Recording Settings--------------------------------------------------->

// Manual Recording :: Server
(
~recPath = (PathName.new(Document.current.path)).pathOnly;
s.recSampleFormat = "int16"; // sample format
s.recHeaderFormat = "WAVE"; // header format
s.recChannels = 1; // channels
s.prepareForRecord(~recPath++"test.wav"); // location
)
(
// Start Recording
s.record;
)
(
// Stop Recording
s.stopRecording;
)
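
Looking at this again, I suspect the reverb that vanished after a reboot is a node ordering problem (an assumption on my part): a synth reading a bus only hears synths sitting earlier in the server's execution order. Target groups make the order explicit; a minimal sketch:

(
// sources first, effects after them, so the reverb always reads live signal
~srcGroup = Group.new(s);
~fxGroup = Group.after(~srcGroup);

r = Synth.head(~fxGroup, "reverb");

// sample players would then be created in the source group,
// e.g. inside the MIDIIn.noteOn function:
// Synth.head(~srcGroup, "DethSynth", [\bufnum, noteNum % 8]);
)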




14.5.07

AA Week 9 - Auto-Tune

This week our class was supposed to be an opportunity to study Antares Auto-Tune with David Grice. David was there, the studio was there, I was there, Vinnie was also there, and most importantly, my enthusiasm was in full swing. I was looking forward to exploring the functionality of Auto-Tune.

"Antares Auto-Tune is a powerful pitch-correction tool which is already an industry standard for tightening up vocal performances. As Paul White explains, however, it has the potential to do much more..."

This is a quote from the readings, an article from Sound on Sound magazine which serves as a tutorial for Antares Auto-Tune. I have read it, and it was very informative. It is also available free on the Internet.

OK, so we don't have Antares, what now? The class ended up with David trying to explain how to use Auto-Tune using some generic "pitch shifter". I know how to use a pitch shifter, and I know that it is not really the same thing. I have used Auto-Tune myself at Sound House Studios, so luckily I am not completely in the dark (I cannot speak for Vinnie though).

David asked us to finish off a mix of last week's jazz recording. I enjoyed doing this; here it is.

Jazz Ensemble - Full Mix 2'46, 320kbps, Download[6.5Mb]

8.5.07

Forum Week 8 - Gender in Music Technology

__________
Stephen Whittington. “Forum: Semester 1, Week 8. Gender in Music Technology”. Forum workshop presented at EMU, University of Adelaide, South Australia. 3rd May 2007.

CC Week 8 - Buffers

Create a SC patch that provides eight-sample playback. The playback will be controlled from a MIDI keyboard. Consider MIDI playback control information including pitch, amplitude, filtering, modulation, panning, and the use of unit generators explored previously.

Here is my SuperCollider patch as a zipped file. The zip also includes the drum samples needed for this patch to function. If the folder is unzipped into /Users/student/Desktop/ the patch "should" function correctly.
Download[1.3Mb]


//Define current file path
~filePath = PathName.new(Document.current.path);

//My SynthDef------------------------>
(
SynthDef("DethSynth", {

arg bufnum = 0,
pitch = 1.0; // pitch is now a real argument (the Synth call below passes it)

var signal;

signal = PlayBuf.ar(
numChannels: 1,
bufnum: bufnum,
rate: BufRateScale.kr(bufnum) * pitch // pitch scales the playback rate
);
Out.ar(
bus: 0,
channelsArray: signal
);
}).send(s);
)
//Loop load several samples------------------>
(
for(0,8, {

arg i;

Buffer.read(
server: s,
path: "/Users/student/Desktop/Sound2/drumSamp"++i++".wav",
bufnum: i
//I could not get SC to automatically read samples in folder
//so I have specified the address/path above.
);

//Feedback
["bufID",i,"path",~filePathSamples++"drumSamp"++i++".wav"].postln
}
);
)

(
//Initialise MIDI------------------------------>
MIDIClient.init;
MIDIIn.connect;
MIDIIn.noteOn = {

arg uid,
chan,
noteNum,
noteVel
;

[noteNum.midicps, (noteVel.midicps / 100)].postln;

Synth("DethSynth", [
\bufnum, noteNum%8,
\pitch, 1.0
]);
}
)
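
On the folder problem noted in the comments above: PathName can list a directory's contents, so something like this should load every wav it finds without hard-coding each file name. The folder location is an assumption.

(
// assumed: the samples live in a "Sound2" folder next to this document
~sampleFolder = PathName.new(Document.current.path).pathOnly ++ "Sound2/";

PathName.new(~sampleFolder).entries.do({ arg file, i;
if(file.extension == "wav", {
Buffer.read(s, file.fullPath, bufnum: i);
["bufID", i, "path", file.fullPath].postln; // feedback
});
});
)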

7.5.07

AA Week 8 - Mixing

During week 7 we had the opportunity to record a jazz ensemble in the EMU live space. David had already positioned the mics roughly before I arrived. We spent the class fine-tuning and checking various mic placements. This week our task was to perform a mix on the drums from this recording. This next clip is a mix without plugins. What?! No plugins? Yes, I have mixed this clip using only panning and faders. Note the cardboard-sounding kick drum, and the untidiness of the kit in general (lack of subs, presence, "tightness" etc.).

Drum Excerpt - Panning & Faders 0'29, 320kbps, Download


This next clip is a full mix of the drumkit with all other instruments on mute. I use the term "mute" loosely, as the instruments in question do bleed through the drum mics, and particularly through the room mics. I used EQ and compression to correct elements of the kick, snare, and toms. I used a gate on the tom mic to reduce bleed. I also EQed the hats and overheads to add presence, and used multiband compression on the master fader to boost certain frequency bands and raise general volume.

Full Drum Mix 0'29, 320kbps, Download


Here is a mix of a different drumkit, recorded in the EMU live room with a jazz trio. The bass and piano are both muted; however, similar to the previous recording, both instruments bleed into the room mics. I have included the room mics in the mix as they provide much-needed ambience to the drums. Most of the techniques used here are similar to the last mix in terms of EQ, gates and comps. I feel that there could be too much low end on the kick, creating a booming effect.

Jazz Trio 0'32, 320kbps, Download


Here is a short drum beat I programmed for my band Entanglement using the Drumkit From Hell Superior 1.0 sample library. I have taken out guitars, bass, and sfx, to leave you with the mixed drum sound on its own. I took a different approach here, as these drums are for a heavy metal band. The most interesting effect I used is one I learned from David last year: using compression on the room mics to create a pumping, almost distorting effect, then adding it back into the mix. In this instance it adds a subtle warmness to the overall sound.

DFH Drum Mix 0'57, 320kbps, Download


Here is the same clip with all the plugins bypassed. From this you can really tell the difference I have made to my drum sound with plugins. If you're interested to see what I'm up to musically, check out my band Entanglement. Our MySpace is accessible by clicking on the big eye to the right of this blog. All songs are mixed by me.

DFH Raw Drum Sound 0'57, 320kbps, Download


__________
[1] David Grice. "Audio Arts: Semester 1, Week 8. Mixing." Lecture presented at EMU, University of Adelaide, South Australia, 1st May 2007.

3.5.07

Create a SC patch that contains a synthDef that utilises the following core elements and
related unit generators as demonstrated in class - carrier, modulator, envelope, filter,
delay and panning. This patch will be playable from a MIDI controller and will utilise
memory management principles.

(
//Initialise MIDI------------------------------>
MIDIClient.init;
MIDIIn.connect;
MIDIIn.noteOn = {

arg uid,
chan,
noteNum,
noteVel
;

[noteNum.midicps, (noteVel.midicps / 100)].postln;

Synth("simpleSine", [
\cAmp, 0.5,
\cFreq, noteNum.midicps,
\mFreq,(noteVel.midicps / 100),
\mAmp, 0.5 // modulator level kept below 1 to avoid clipping
])
};
)

//Initialise Synth------------------------------------------------->
(
SynthDef("simpleSine", {

arg cFreq = 1000,
cAmp = 1;

var car,
mod,
out;

//Modulator
mod = SinOsc.ar(
freq: 300,
mul: 0.5
);

//Carrier
car = SinOsc.ar(
freq: cFreq,
mul: mod * cAmp
);

// Output
out = Out.ar(
bus: 0,
channelsArray: car
);
}).send(s);
)
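
The spec also asks for an envelope, filter, delay and panning, which my patch above never got to. Here is only a sketch of how they might bolt onto the same idea; all parameter values are assumptions, not my assessed patch.

(
SynthDef("fullerSine", {

arg cFreq = 440, cAmp = 0.5, mFreq = 5, mAmp = 10, pan = 0;

var mod, car, env, sig;

mod = SinOsc.ar(mFreq, mul: mAmp); // modulator
car = SinOsc.ar(cFreq + mod, mul: cAmp); // carrier - simple FM
env = EnvGen.kr(Env.perc(0.01, 1.5), doneAction: 2); // envelope frees the synth
sig = LPF.ar(car * env, 2000); // low pass filter
sig = sig + DelayN.ar(sig, 0.3, 0.3, 0.4); // 300 ms delay mixed back in
Out.ar(0, Pan2.ar(sig, pan)); // panning
}).send(s);
)

// e.g. Synth("fullerSine", [\cFreq, 69.midicps, \pan, 0.5.rand2]);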

2.5.07

Forum Wk 7

This week’s forum topic was “Gender in Music Technology”. A controversial topic indeed, with Ben Probert being the opener for the festivities. Ben posited that one of the biggest gender issues regarding music technology is the lack of female interest. Ben argued that this could be due to social conditioning and conventions. As it stands, the domain of high-tech gadgetry does seem to be a primarily male-dominated arena. Ben also believes that this could be due to male-oriented education systems, in which women are not actively encouraged to participate, leading to a lack of interest.

1st year tech student Doug had a seemingly ill-prepared speech, veering off into unrelated tangents and making gross stereotypes about men and women without any real hard evidence to substantiate his claims. His misunderstanding of the forum topic, and outrageous ideas about the “genetic” differences between men and women, led to a heated argument amongst students, particularly from the girls in the class.

1st year student Amy was next, and she had some interesting comments to make. She argued that people should be given opportunity based on skill and ability rather than gender; the idea she refers to is a merit-based system. Personally I found that Amy’s talk rang true with my view: whether you are male or female should not be of consequence; whether you have the ability and the drive to do the job should be the deciding factor.

Jake wrapped up the day with a video of Bjork. Whilst the film itself was interesting to watch, Jake’s talk did not have a huge impact on me. As most of the points were covered by the previous talks, and by Stephen at the beginning of the class, there was not all that much for Jake to discuss.

Next week’s forum should be interesting as more people discuss the same thing as the people last week. Sweet.

4.4.07

Forum Week 5 - Collaboration II

This week marked the second installment in Stephen Whittington's investigation into collaboration. Luke entered the fray with an insightful discussion about a collaboration between Radiohead


__________
Stephen Whittington. “Collaborations II”. Forum workshop presented at EMU, University of Adelaide, South Australia. 29th March 2007.

Alfred Essemyr. “Collaborations II”. Forum workshop presented at EMU, University of Adelaide, South Australia. 29th March 2007.

Darren Slynn. “Collaborations II”. Forum workshop presented at EMU, University of Adelaide, South Australia. 29th March 2007.

Luke Digance. “Collaborations II”. Forum workshop presented at EMU, University of Adelaide, South Australia. 29th March 2007.

CC Week 5 - Unit Generators

Create a synthesiser code patch that demonstrates your understanding of unit generators. The code patches will need to select a form of synthesis (such as FM, AM, Additive etc) and integrate the base components of a synthesiser including pitch, amplitude and modulators.


// Triangle Oscillator at Audio Rate with Amplitude Modulation
{ LFTri.ar(
SinOsc.kr(
SinOsc.kr(
freq: 0.1, //Mod frequency
mul: 5,
add: 10
),
phase: 0,
mul: 800, //Mod Multiplier
add: 800 //Mod Addition
),
mul: 0.5, //Multiply Amplitude
add: 0.1 //Add Amplitude
)
}.play;

__________
[1] Christian Haines. "Creative Computing: Semester 1, Week 5. Unit Generators." Lecture presented at EMU, University of Adelaide, South Australia, 29th March 2007.

2.4.07

AA Week 5 - Cello

I was particularly excited about this week's session, as I had found out I would have the opportunity to record a friend of mine, Louise, who is a cello student at the Con, and a fantastic player. The piece Louise chose to play for us is the Suite for Solo Cello, 1st Movement: Preludio-Fantasia (Andante Maestoso), by composer Gaspar Cassadó. Dragos and I worked on this project cooperatively and began by experimenting with various positions around the EMU live space, deciding on an area which really complemented the cello's character. We also had a variety of mics to play with, all of which yielded different results.



Neumann U87 0'46, 320kbps, Download
We angled this mic, set to a cardioid pattern, facing the bottom section of the f-hole on the cello. This was to capture the resonance emanating from that part of the instrument. The 87 was used as it has a large diaphragm to capture low end, but also because of its smoothness in the mid to high frequency range.

Neumann KMi 0'46, 320kbps, Download
This microphone was angled directly at the top of the f-hole. We were hoping that this mic would provide some nice highs, and it did. I believe this mic gave the most accurate representation of the sound in the EMU live room. Top notch!



Neumann U89 0'46, 320kbps, Download
This mic was used as an ambient source. Dragos and I walked around the live space searching for the sweet spot. We both agreed on a position roughly 2 metres away, at around 8 o'clock in front of Louise. This gave an excellent effect, with resonance intact, no harshness in the mids, and nice reverberance.

Beta 52 0'46, 320kbps Download
This was intended to capture the extreme bottom end of the cello, due to its nature as a kick drum mic; however, results differed from expectations. Sounding very scooped around the low mids, the 52 just sounds a bit "quacky".




Beta 57 0'46 320kbps, Download
Sounding similar to the 52, this mic lacked the frequency response of the Neumanns.

Final Mix 0'44, 320kbps, Download
Overall the Neumanns sounded the best. In the final mix, the two Shure mics were dropped in volume, with a small drop in the ambient mic. I added some subtle reverb to add space to the mix, as well as some multiband compression to add crispness to the highs. I believe this recording captured the sound of this fantastic cello player with accuracy and warmth.

Suite for Solo Cello, 1st Movt. 6'52, 192kbps, Download[9.45Mb]
Here is a copy of the full version of the piece (I had to compromise on quality due to file size restrictions).


__________
[1] David Grice. "Audio Arts: Semester 1, Week 5. String Recording." Lecture presented at EMU, University of Adelaide, South Australia, 27th March 2007.

28.3.07

Forum Week 4 - Collaboration

CC Week 4 - Wireless MIDI

Create a function that includes mechanisms by which to setup real-time MIDI input from a controller and output to a simple VI device. For example, mapping the ReMote SL to SimpleSynth. This will include MIDI note on/off, pitch and control data mapping, post window feedback, default settings etc and 'must' be setup to work with systems in the Audio Lab.
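
No patch made it into this post, so here is only a minimal sketch of the shape such a function takes, assuming the VI shows up as the first MIDI destination: it echoes notes and controller data through, with post-window feedback.

(
MIDIClient.init; // posts the available sources and destinations
MIDIIn.connect; // connect the first source (the ReMote SL, in theory)
m = MIDIOut(0); // assumption: destination 0 is SimpleSynth

MIDIIn.noteOn = { arg uid, chan, num, vel;
["noteOn", chan, num, vel].postln; // feedback
m.noteOn(chan, num, vel);
};

MIDIIn.noteOff = { arg uid, chan, num, vel;
["noteOff", chan, num, vel].postln;
m.noteOff(chan, num, vel);
};

MIDIIn.control = { arg uid, chan, ccNum, val;
["control", chan, ccNum, val].postln;
m.control(chan, ccNum, val);
};
)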


__________
[1] Christian Haines. "Creative Computing: Semester 1, Week 4. Wireless MIDI." Lecture presented at EMU, University of Adelaide, South Australia, 22nd March 2007.

26.3.07

AA Week 4 - Brass


Olle Schnipper plays a bizarre saxophone-like synthesizer from the 80s[1]

This week we had the opportunity to work with a brass player from the Elder Conservatorium, Kim Gluyas. Dragos and I worked as a team to set up a session for saxophone in the EMU space and began by testing the room's ambient response in various positions. We asked Kim to walk around the room while we listened for resonance. Once a position was determined, we set up a variety of mics to see which was best suited to the timbre of the sax Kim was using. Following are some examples of improvisation by Kim using different mics and placements.

Beta 52 0'36, 320kbps, Download
This microphone was intended to catch the resonance of the sax; we pointed the 52 into the lower region, beneath the bell of the instrument. Unfortunately the sound returned is a bit woofy.

Neumann KMi 0'36, 320 kbps, Download
This mic sounded the best in my own opinion. My reasoning? The frequency response of the Neumann KMi is comparatively the most accurate of the three. That's my reason.

Neumann U89 0'36, 320kbps, Download
Don't let my previous comments mislead you, this mic is quality. The sound produced is warm, but it may have been in an inferior position to the KMi. More experimentation is needed.

Mixed Excerpt 0'36, 320kbps Download
This is the final mix. I used multi-band compression to boost the volume, whilst paying attention to the dynamics. I also added reverb, which in hindsight could have been used more subtly.

__________
[1] Music and Sound at the Banff Centre. "Audio News and Views." March 2006. 20th March 2007.

[2] David Grice. "Audio Arts: Semester 1, Week 4. Brass Recording." Lecture presented at EMU, University of Adelaide, South Australia, 20th March 2007.

21.3.07

Forum Week 3 - Compossible

This week's forum was wacky. In fact I'm not really sure where to begin. To be honest, I'm not quite sure I should say anything at all.

__________
[1] David Harris. "Music Technology Forum." Lecture presented at EMU, University of Adelaide, South Australia, 15th March 2007.

CC Week 3 - OSC Responder

Consider the structure of a MIDI message and build functions that can transmit and receive all note on / off and modulation information between two computers using the OSC protocol. Consider the types of arguments that the function should take to make the function transportable and easily useable on other computers.


// ------------------------------------------------------------------------------------
// RECEIVER SETUP - OSC RESPONDER
// ------------------------------------------------------------------------------------

(
// Open Sound Control Responders - one per command, since a single
// OSCresponder matches only one cmdName
~oscCmds = [
"Note Off", "Note On", "Key Pressure", "Control Change",
"Program Change", "Channel Pressure", "Pitch Wheel Change"
];

~oscCmds.do({ arg name;
OSCresponder(
addr: nil, // nil - respond to messages from anywhere
cmdName: name, // an OSC command (same name as the message sent)
action: { // action to perform

// Arguments - default
arg time, // OSC time
resp, // responder name
msg, // message
addr // address
;

// Feedback
[time, resp, msg, addr].postln;
}
).add; // add method - adds OSC responder to the server
});
)


// ------------------------------------------------------------------------------------
// SENDER SETUP - OSC SEND USING NETADDR + SENDMSG
// ------------------------------------------------------------------------------------

(
// Setup the Network for the target computer (or the computer that is having data sent to it)
n = NetAddr(
hostname: "129.127.86.100", // IP Address of target computer
port: 57120 // Port Number that SC uses
);
)

(
// Send a message to that computer
n.sendMsg(
"Note Off", // Command Name (Same as OSCResponder Command Name)
"1000nnnn, 0kkkkkkk, 0vvvvvvv" //(kkkkkkk) is the key (note) number //(vvvvvvv) is the velocity
);
// Send a message to that computer
n.sendMsg(
"Note On",
"1001nnnn, 0kkkkkkk, 0vvvvvvv"
);
// Send a message to that computer
n.sendMsg(
"Key Pressure",
"1010nnnn, 0kkkkkkk, 0vvvvvvv" //(vvvvvvv) is the pressure value
);
// Send a message to that computer
n.sendMsg(
"Control Change",
"1011nnnn, 0ccccccc, 0vvvvvvv" // (ccccccc) is the controller number (0-119)
//(vvvvvvv) is the controller value (0-127)
);
// Send a message to that computer
n.sendMsg(
"Program Change",
"1100nnnn, 0ppppppp" // (ppppppp) is the new program number
);
// Send a message to that computer
n.sendMsg(
"Channel Pressure",
"1101nnnn, 0vvvvvvv" //(vvvvvvv) is the pressure value
);
// Send a message to that computer
n.sendMsg(
"Pitch Wheel Change",
"1110nnnn, 01111111, 0mmmmmmm" //(llllll) are the least significant 7 bits //(mmmmmm) are the most significant 7 bits
);
)


[1] Christian Haines. "Creative Computing: Semester 1, Week 3. Architecture & OSC." Lecture presented at EMU, University of Adelaide, South Australia, 15th March 2007.

19.3.07

AA Week 3 - Drums

Here are 3 recordings done in the EMU live space with Dragos Nastasie, Vinnie Bhagat and myself. Vinnie Bhagat is playing drums. The sound is natural with no effects, save for a limiter on the master fader to boost volume. The kick was miced with a Beta 52A. Unfortunately there is no hole in the front kick drum skin, so the 52 had to be placed in front. This resulted in a boomy and somewhat undesirable kick sound. The snare's top and bottom were miced with Beta 57As, which yielded a nice snappy sound with a pleasant ring. We miced the toms with Beta 56s; however, the floor tom lacked resonance, so we swapped mics. This was better, though more experimentation with mic placement is needed to achieve a superior sound. Overheads were miced with Rode NT5s, which always seem to sound good, although I do believe the one 16" crash we had to use sounds crappy to say the least. A U87 was used as a room mic, which was used subtly. I gained much from this exercise; however, I would like to spend more time on a kick drum with a front hole, as well as pay more attention to mic placement.

Drums - 01[mp3]
Drums - 02[mp3]
Drums - 03[mp3]


__________
[1] David Grice. "Audio Arts: Semester 1, Week 3. Drum Recording." Lecture presented at EMU, University of Adelaide, South Australia, 13th March 2007.

14.3.07

Forum Week 2 - Originality

CC Week 2 - MIDI n to freq

With the assistance of your function from the previous week create a control structure that generates the entire MIDI note range of frequencies. The code will build and print an array that contains note name, octave number, MIDI note number and frequency.


(
// build a table of note names

var table = ();

value
{
var semitones = [0, 2, 4, 5, 7, 9, 11];
var naturalNoteNames = ["c", "d", "e", "f", "g", "a", "b"];


(0..9).do
{
arg o;

naturalNoteNames.do
{

arg c, i;

var n = (o + 1) * 12 + semitones[i];

table[(c ++ o).asSymbol] = n; table[(c ++ "s" ++ o).asSymbol] = n + 1; //determine sharp
table[(c ++ "ss" ++ o).asSymbol] = n + 2; //determine double sharp
table[(c ++ "b" ++ o).asSymbol] = n - 1; //determine flat
table[(c ++ "bb" ++ o).asSymbol] = n - 2; //determine double flat

};
};
};

"Pitch class and Octave Number, MIDI Note Number, Frequency Value" .postln;
a = table.atAll (#[a4].postln).postln;// Creates MIDI Note number --Enter Pitch class and Octave number here.


a = 2**((a-69)/12) *440; //Coverts MIDI note number to frequency value


)
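
The spec asks for the entire MIDI note range, not just a single lookup. A self-contained sketch of walking the whole range and posting name, MIDI note number and frequency (sharp spellings only, to keep it short):

(
var names = ["c", "cs", "d", "ds", "e", "f", "fs", "g", "gs", "a", "as", "b"];

"Pitch class and Octave Number, MIDI Note Number, Frequency Value".postln;

(0..127).do({ arg n;
[
names[n % 12] ++ (n.div(12) - 1), // note name with octave, matching the table above
n, // MIDI note number
2 ** ((n - 69) / 12) * 440 // frequency in Hz
].postln;
});
)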


__________
[1] Christian Haines. "Creative Computing: Semester 1, Week 2. Introduction (2)" Lecture presented at EMU, University of Adelaide, South Australia, 08/03/2007.

12.3.07

AA Week 2 - 5.1 Surround

This week in Audio Arts we examined 5.1 technology. 5.1 simply refers to the speaker setup employed to create the spatial effects we enjoy in movie cinemas. The 5 refers to the 5 satellite speakers set up around the listener: specifically Left, Right, Centre, Rear Left and Rear Right. The .1 refers to the subwoofer which, due to the non-directional nature of low-end frequencies, can be placed anywhere. In spite of this, cinema setups generally place the subwoofer at the front left of the room.
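
As an aside, the channel layout is easy to audition from SuperCollider, assuming the interface presents the common L, R, C, LFE, Ls, Rs order on outputs 0-5; this sketch plays a short burst of pink noise through each speaker in turn.

(
SynthDef("speakerTest", {
arg chan = 0;
Out.ar(chan, PinkNoise.ar(0.1) * Line.kr(1, 1, 1, doneAction: 2));
}).send(s);
)

(
Routine({
["L", "R", "C", "LFE", "Ls", "Rs"].do({ arg name, i;
name.postln; // which speaker should be sounding
Synth("speakerTest", [\chan, i]);
1.5.wait;
});
}).play;
)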

7.3.07

Forum Week 1 - Semester 1

CC Week 1 - SuperCollider

[1] Create a function that takes two arguments - pitch class and octave number - and produces the corresponding MIDI note number. [2] Create a function that can convert the pitch class and octave number into its corresponding MIDI note number and then into its corresponding frequency. It will include extensive commenting, descriptive arguments and variable names, and default values including a set tuning frequency.

Here is a SuperCollider patch I created to convert pitch class and octave number to a MIDI note number. The resultant MIDI note number is then converted to a frequency value. Unfortunately my HTML coding skills are about as good as my SuperColliding skills, so the formatting leaves a lot to be desired.

(
// build a table of note names

var table = ();

value
{
var semitones = [0, 2, 4, 5, 7, 9, 11];
var naturalNoteNames = ["c", "d", "e", "f", "g", "a", "b"];



(0..9).do
{
arg o;

naturalNoteNames.do
{

arg c, i;

var n = (o + 1) * 12 + semitones[i];

table[(c ++ o).asSymbol] = n; table[(c ++ "s" ++ o).asSymbol] = n + 1; //determine sharp
table[(c ++ "ss" ++ o).asSymbol] = n + 2; //determine double sharp
table[(c ++ "b" ++ o).asSymbol] = n - 1; //determine flat
table[(c ++ "bb" ++ o).asSymbol] = n - 2; //determine double flat

};
};
};


a = table.atAll(#[a4]).postln; // Creates MIDI Note number

a = 2**((a-69)/12) *440; //Converts MIDI note number to frequency value

)


__________
[1] Christian Haines. "Creative Computing: Semester 1, Week 1. Introduction." Lecture presented at EMU, University of Adelaide, South Australia, 1st March 2007.

5.3.07

AA Week 1 - Stereo Micing

This week we examined various stereo micing techniques. For this exercise I decided to implement the techniques for drum overheads. It seemed to me the most practical application aside from micing an orchestral performance; however, I was promptly informed by Peter that taking mics walkabout from Level 5 was a definite no-no. So instead of soothing the savage beast with the gentle ambience of Elder Hall, you are going to have to deal with my less than satisfactory drumming skills. I have tried a variety of mic placements, each yielding a significantly different result:

Coincident Pair [mp3]


This example was recorded using a matched pair of NT5s. The stereo spread here seems to be quite narrow, with the kick and snare drums sounding quite resonant in the centre.

Spaced Coincident Pair [mp3]


In this example I adjusted the coincident pair so that the diaphragms were no longer aligned at 90 degrees; the pair is now spaced, with their diaphragms aligned on a horizontal plane. The stereo spread here appears to be a little wider compared with the previous example. This is most noticeable when the toms move from left to right as my drum fill progresses.

Spaced Pair [mp3]
This example demonstrated a much more "spacey" effect. The kick is not so centred, the stereo spread is a lot wider, and the reverberation and ambience from EMU's live room become much more apparent.

NT4 Stereo Mic [mp3]
The NT4 is a specialised microphone which actually has two diaphragms set on top of each other in a permanent X/Y position, otherwise known as a coincident pair. This example appears to me to have quite a narrow stereo spread. The kick drum sounds less prominent than in the first example, which uses the same technique with two microphones. The snare drum in this example has more presence than in the others; this could be due to placement rather than differing mic models.

This was an interesting exercise, one which I hope to explore further in the coming weeks' lectures on drum mic placement.

__________
David Grice. "Audio Arts: Semester 1, Week 1. Multi Micing." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 27/02/2007.

12.6.06

Show's Over Folks!

This will be my final post for semester 1. That is not to say that this is my final post ever; in fact, whether or not this blog continues to be up for assessment, I will endeavour to keep it updated. I believe having a blog, especially one which is shared amongst my fellow music tech students, is a fantastic idea. Through Blogger I have been able to see what other students have been up to and what their opinions are on many topics, as well as post and receive comments. I believe it makes the course more transparent, which is a great thing. I was particularly relieved to find that some opinions I have held to myself are shared by some of my colleagues.

David Grice’s class this semester has been a valuable source of information for me. It has been great having someone from the industry with practical, hands-on knowledge of how it works out there. After all, it is all well and good to sit in class and have all the technicalities written up on the whiteboard, but this doesn’t prepare you for the practicalities of actually finding a job in the field. If I wanted to build and run my own studio, where do I start? How much does it cost to set up? What essential equipment do I need? How much can I expect to earn? Are there opportunities in SA or do I need to relocate? These are the questions which are not discussed in my degree, but are perhaps the most pertinent when it comes to my long-term livelihood in the industry. David, being someone from the industry, not, to borrow a phrase from Mark Carroll, living in an ivory tower (aka the Schulz building!), has been most helpful in addressing some of these questions. I have not even mentioned how helpful his tuition on recording techniques has been. I have heard a rumour that he won’t be lecturing next semester, which is a damn shame, as I found his class most engaging and relevant to my interests.


Creative Computing has been great this year. Max is an extremely powerful program, and I have only been scratching the surface of it. It can be a brain drain at times; however, it is most satisfying when a patch finally does what you want it to! I am looking forward to implementing more MSP-based patches next semester.

Forum has also been engaging (sometimes not!) this semester. Obviously it is a new thing for EMU and I believe there has been a learning curve for both students and lecturers. Having artist talks has been excellent, including those from the postgrad students. Highlights for me were the Milkcrate project by Seb Tomczak and Gordon Monro’s presentation of his brainwave recording; most interesting indeed. David Harris’s part of the lecture has had its ups and downs for me. While I find some of the pieces he plays interesting, there are many which I find extremely irritating. The reason I find it irritating is not because the sound is unpleasant, albeit quite often it is, but because quite often I fail to see the relevance to the rest of my studies. It seems to me that rather than being a forum where works and ideas about music technology are presented, it is more of a listening session of David’s private collection. I believe a forum should be just that: a forum where everyone has an opportunity to present a work or idea, whether it is their own or someone else’s, to the class and have an open discussion about it. I know that I personally have come across some interesting music in the past six months that has far more relevance to music technology than many of the compositions that David has played for us this year. I believe that if we can pool our knowledge and experiences, surely this would be more beneficial than only hearing from one person week in, week out. Whilst David Harris is most knowledgeable in a variety of genres, I do think that the music he plays is subjective to his tastes and does not reflect the interests of the class. Week 12’s forum was much better. Finally some input from other students! Stephen Whittington commented on his disappointment that more people didn’t volunteer; if I had known there were invitations I would most definitely have volunteered.

2.6.06

Shock! Horror!

Tick tock, it’s the end of semester and the beginning of winter. Too cold to be sitting exams, I say. Maybe I’ll speak to the Dean about postponing assessment for another 6 months; I feel like that’s how long it’s going to take me to finish all this work I have. Unfortunately I only have about 3 weeks... now where did I put that time dilation device? To make matters more complicated, I recently discovered I have made a grievous error of judgment (shock, horror!). But that’s impossible, I hear you say. Well believe it kids, the rumours are true. This blunder on my part was recording a song a few BPM short of a six pack. In English: I have recorded it too slow. Record it again? No chance. So far bass and guitars have been completed on this particular project. The drums are sequenced in Steinberg’s Nuendo using the sound bank Drumkit From Hell.

It is easy enough to change the tempo of the drums; merely a click of the mouse and I can have blast beats faster than Beethoven’s fastest chops. Seeing as vocals have yet to make their debut on the recording, it all boils down to the guitars and bass. First things first: hit the Time Compression/Expansion plug-in in the Pro Tools arsenal. Second thing: run into more complications. OK, let me explain; I have 2 guitar tracks panned left and right respectively, with bass in the centre. Each instrument has been recorded from 3 sources, i.e. each guitar take pulls audio from three mics placed in front of a single cab, and the bass from 2 mics and a DI. This leaves me with nine simultaneous tracks for three layers of audio. Still with me? OK, select all nine tracks, process the audio. Now this is where things go awry. It seems the effect will work for an individual track; however, when it processes more than one track, i.e. the 3 guitar tracks from the one take, the end result is pure degeneration of your work. It sounds like the mics have become completely out of phase, with wave cancellation running rife through the project.

Now what? After consulting with my lecturers and colleagues in the industry I have received mixed responses from everyone, without any definite course of action to take. I know that I can bounce the project in its entirety down to a single stereo track and then run the plug-in with total success; however, this requires me to do a full mixdown prior to time compression. That wouldn’t be so bad, except I am quite concerned that if I record the vocals at the slower speed, I could be in for a “chipmunk” effect after acceleration. The best-case scenario would be to find an algorithm that is not going to damage my guitar and bass tracks, speed up the drums in Nuendo, and then record the vocals at the new speed.

Don’t flip out people, I will restore order and right this terrible wrong!

24.5.06

Times two or times four?


Another late session in the studio last night. It seems that no matter how much time I anticipate a project taking, it is nearly always double my original estimate. What is up with that? Sometimes I try to trick myself by doubling the amount of time I would normally think something will take and saying that it is my original guess. You would think that it would work, but it ends up taking twice as long anyway, giving me a project that takes 4 times as long as what I thought in the first place! Sometimes it is my fault, sometimes not. When it’s not my fault I have a tendency to gripe, and I have a feeling that you are in for a whole lot of griping.

Gripe one: the studio setup at my uni. Now don’t get me wrong, it is a great studio, with quality microphones, and the Control 24 desk is fantastic to work with. We have 2 control rooms and two recording spaces. One is a dead room, and the other is a live room. The dead room, which is the room in which I would prefer to do vocal recordings, can only be booked with Studio 2. The problem with this setup is that there is no intercom. How am I supposed to communicate with a vocalist in a room that sound cannot penetrate? OK, so I can’t blame the underfunded uni; I guess that leaves the government. Damn bureaucrats and their red tape. Anyway, with some time lost, I found a solution by setting up a condenser mic (for sensitivity) in the control room and patching it into the mixer. No problem, gripe over.

Gripe two: you thought it was over? Guess again. This gripe is over the headphone amp. Now maybe this gripe is unwarranted; perhaps the only person at fault is myself, due to my lack of knowledge in the fine art of patch bay technicalities. Either way, I lost much time trying to achieve a relatively simple outcome: a clean signal from the dead room into Pro Tools, accompanied by a signal with reverb from the DP4 unit going back into the dead room. This signal must be accompanied by the music, which must remain unaffected. The point of all this is to make the singer more comfortable by adding some sweetness to his voice in the cans, but not altering the recorded version. After much stuffing around I found it impossible to set up and gave up. The headphone amp would only output the vox or the music separately, never simultaneously. It appears that I’m not the only person that has had issues with this infernal device, as every time I work in Studio 2 the headphone line is plugged not into the headphone amp, but into the mixing desk.

Gripe three: just kidding! I have nothing else to report, only a dark void where my patience used to reside. See you next week!

7.5.06

Back to Bassics

It seems my ego knows no bounds. I am referring to a previous post in which I commented "With this setup, I am quietly confident in the bass sound being large." Well, I was right about one thing; it just so happens that it wasn't about the setup. It turns out that the only way I could record using an Ampeg Classic bass head was if I shelled out 3500 bucks to the kind people at John Reynolds. Whilst they are kind, they bungled it up, with one person telling me it was for hire, and another trying to convince me it was the deal of the century; meanwhile, I don't even play bass! I ended up visiting my friends at Rock Music on Pulteney St and trying out a few amps myself. It came down to 2 amps in the end, and it was a case of transistor vs tube. The amps in question were the Ampeg SVT-4 Pro and the Fender Studio Bass. Now when I say transistor, I am referring to the Ampeg, as it uses MOSFET power amps. MOSFET is an acronym for metal oxide semiconductor field-effect transistor. Without giving you a complete technical rundown, I will note that MOSFETs are commonly found in computer chips and are actually not metal but silicon. The advantage of using MOSFETs for power applications is that they deliver transistor-equivalent power levels whilst behaving similarly to vacuum tubes. The idea is to create an amplifier which has high power capabilities (the SVT-4 delivers a staggering 1200W into 4 ohms!) with the extra warmth synonymous with tube-driven amps.

I tried out the SVT-4 Pro through a Hartke 4x10 cab using a Fender Jazz bass. The sound was quite harsh, with loads of attack in the top end, which I attributed to the Hartke cab due to its aluminium cones. Revelation time: paper cones! Out with the Hartke, in with the Ampeg 4x10 cab. Much better. The SVT-4 comes with a variable 0-7:1 compressor, quite handy for the style of bass I was planning to record; a 4 string detuned to a low A can produce a large dynamic range, particularly with the top-end twang.

The Fender Studio Bass amp is on the other side of the spectrum to the ultra-modern SVT-4. With all-valve pre and power amps, this vintage baby was made in 1978. The 200 watt Fender was originally designed as a combo with a 15" speaker; however, the old warhorse I was using had had the speaker kicked in. Looks can be deceiving though: the Fender had been recently reconditioned, with a brand new set of six 6L6s installed. That amp sounds amazing. Clean and crisp at low volume, warm and overdriven when cranked, breaking up nicely with an aggressive playing style. Whilst the Ampeg is a quality amp with many modern features, I could not look past the Fender with its rich vintage tone. The Fender is the choice for this recording.

30.4.06

It's all about maximising your time!!!

This week’s technology forum was a bit of a mixed bag, but then again, when is it not? David Harris presented us with some widely contrasting pieces, the first of which I found to be very interesting. Composed in 1989 by Iannis Xenakis, the piece is titled Voyage absolu des Unari vers Andromède, or Voyage to Andromeda. This composition was constructed through a series of graphs which were then interpreted electronically using a computer. The result is a smorgasbord of sweeping soundscapes, a sensory explosion if you will. I enjoyed it thoroughly; the piece certainly lived up to its name, for I found it somewhat reminiscent of 2001: A Space Odyssey.

The second piece, entitled "In Flagranti", was composed by Gabriele Manca and performed by Geoffrey Morris. David Harris described it as an experimental bottleneck guitar composition. I, on the other hand, would describe it as a steaming pile of unmentionables. That’s right, I pluralised unmentionable, it was that bad. Apparently the piece was supposed to demonstrate the performer's ability to play varying degrees of “micro rhythms”. I’m pretty sure that it was called micro rhythms; you see, I’m not certain what happened after it was played, must be some kind of psychological repression. It sounded like a two year old mashing his face into the strings. My opinion was shared amongst my peers, although when David was asked his opinion on the piece, he claimed he found it enjoyable.

I can definitely see the point of giving us a broad background in music technology, exposing us to a variety of pieces and genres; however, there has been some questionable material that we have been subjected to. In this case, there was no technology involved at all, let alone music. Maybe David was trying to demonstrate the amount of patience required to be a sound engineer involved in recording self-indulgent artists palming off complete rubbish as intellectual art. Your guess is as good as mine; probably better if you never heard this piece.

Obviously this is only my opinion, and is not reflective of all the materials presented. Many of the pieces we are exposed to I find engaging and thought-provoking. Some of them I do not. As a general rule I try to remain open-minded to new ideas and different ways of thinking. However, it really grinds my gears when I am continually subjected to long, repetitive compositions that do not have anything to do with music technology. Surely a piece such as In Flagranti would be better suited to a composition forum? Seeing as there is no use of technology, save the recording process (which wasn’t examined at all), the piece has no place in a music technology forum. I believe we should be examining pieces that are relevant to our field of study. I am studying creative computing and audio arts; I am not studying slide guitar. I may sound narrow-minded, but there is a limited amount of time in a class, and when we spend much of it looking only at music that is dated, and quite often with no relevance to music technology, I wonder what the point is. So far we haven’t looked at anything that has been written in the last ten years, and yet so many advances have been made in the field of music technology in that time. I would be very interested in examining more music from successful modern artists. Trent Reznor, for example, is accomplished in the genre of music technology, and has produced a plethora of compositions employing all kinds of unorthodox methods to achieve his sounds. Examining an artist like that would be relevant and interesting, and would allow us more time for discussion, as his music doesn’t go for 20 minutes apiece. I mean, come on, if you're going to demonstrate only one particular technique, is it really necessary to repeat the same thing over and over again? I believe I am not alone in my views; maybe someone out there will listen. Don't Do Drugs.

Here is a Max patch I casually whipped up. It converts MIDI note values to their corresponding frequency using the mtof object.

max v2;
#N vpatcher 811 186 1411 586;
#P window setfont "Sans Serif" 9.;
#P flonum 35 205 35 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P newex 35 176 41 196617 mtof;
#P number 35 150 35 9 0 0 4096 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P user kslider 35 83 35 1 0 120 19 7 0 128 128 128 128 128 128 255 255 255 0 0 0 0 0 0;
#P connect 0 0 1 0;
#P connect 1 0 2 0;
#P connect 2 0 3 0;
#P pop;

10.4.06

Large or XLarge?

Wow. I can't believe it is the end of term already. Time sure does fly when you're having bucketloads of fun. David Grice, my lecturer for Audio Arts, has been instructing me in the dark art of recording sound. Maybe we need brighter lights or something. Either way, David has shone a spotlight on the bass side of things. Let me muse about bass for a moment: it's a big instrument, unwieldy to say the least, sounds pretty stupid on its own, and as a guitarist I cannot see why anyone would want to touch one. Whew! Them sound like fightin’ words, I know. But I digress; bass is an essential ingredient in a rock group configuration. It is the driving force underneath the guitars, indeed an entire band. A solid bass sound is a solid foundation to build a mix on. I would even go as far as to say that bass is more important than the guitars in achieving a phat, resonant mix; the guitars being only the colouring on top of the bass.

David showed us a couple of recording techniques that I am hoping will be quite useful. He explained the workings of a DI box, lamenting the inferior quality of the uni's DIs. We DIed his Ernie Ball Music Man 5 string using the box, as well as the DI on the Control 24, which yielded a much stronger signal. Unfortunately there was no working bass quad to mic up, although I did have a conversation with Dave after the lecture regarding which mic would be appropriate. He suggested using the same mic as used on a kick drum, and once again, there are no great kick mics at the uni.

I will be trying out these techniques over the term break, hopefully with some good results. I am currently scouring Adelaide’s hire companies for an Ampeg Classic valve head, but with no luck so far. The best I have been able to do is an Ampeg SVT-4 Pro; however, although this is a quality head, it does not have tube power amps. The plan is to acquire the Classic, use it in conjunction with a Fender Jazz 4 string, DI the head and the bass separately, and mic up the cab, which is an Ampeg 8x10 fridge. Looks like I’ll be using one of the AKG mics for that task. With this setup, I am quietly confident in the bass sound being large, even x-large… and that’s pretty big.

2.4.06

25 Hours a Day


It has been quite a busy last few weeks for me. Things are progressing on many fronts and sometimes it is difficult to keep up. People say to me there are only so many hours in the day. I think I am pushing the boundaries on what the definition of a "day" is, and how many of these so-called "hours" you can fit into one. With that in mind, I have been able to achieve remarkable success in a recording I am currently working on. I spent all last weekend in the studio laying down guitar tracks with Rowan, the guitarist from my band. I have been very fortunate to enlist the help of an engineer from a local recording studio, Soundhouse. Ian, the engineer, was able to provide me with assistance in fine-tuning mic placement on the guitars. My setup is as follows:

ESP Ltd 6 String with EMG 81 Pickups
Peavey 5150
Engl 4x12 Cab with Celestion Greenbacks
Boss EQ Stompbox

My rig has a rich tone which is complemented by the tuning of my guitar: dropped A, E, A, D, G, G#. For the recording I also used an EQ rackmount unit through the FX send on the Peavey.

For the mic setup I used an SM57, a Neumann U89, and an AKG mic whose model number eludes me right now. The SM57 and AKG mics were close to the cab; phasing allowed me to cancel out some nasty high mids. I was able to get a very resonant tone from the AKG mic, while the 57 captured the crunch from the 5150. The U89 I used as an ambient mic; I'm not sure whether I will use this sound yet.

I will be uploading some pics of the session soon, and when I have finished editing, some samples of the guitars.

18.3.06

Welcome to Ride-Town

Well, this has been a ripsnorter of a week for me. I just upgraded the RAM on my laptop to 1.25Gb, and my HD to 80Gb, all in preparation for the installation of the ultimate in drum sequencing, Toontrack’s Drumkit From Hell Superior software package. That’s quite a mouthful if you say it fast enough. Go on, try it. Feels good, yes? Well, it felt good to me. Loaded up in Nuendo, there is nothing this VSTi can’t do in the realm of drum programming. So without further ado (and hopefully without sounding like an advertisement) I will explain my enthusiasm for the program.

After I loaded up all 35Gb of the sweetest wav files to grace my system, I was surprised by a new interface to the old DFH window. This window, known as the construct window, is intrinsic to the function of DFH Superior. The construct window shows what is known as the microphone leakage control matrix. Once the drumkit has been assembled, i.e. the user has chosen the brand of kick drum, snare, overheads etc., the amount of bleed can be controlled through the matrix. Sounds special, right? It is. Too much snare through the overheads? No problem; 2 seconds later, snare no more. More ride through the room mics? Easily done. Welcome to Ride-Town.

I read many reviews of this software, as well as listening to various groups that have used it to compose their music. Nowhere have I found anyone that disliked it, myself included. After programming a few rudimentary beats, I decided to test the program's usability in producing a “live feel”. Luckily the drummer from my band has an electronic kit, so it was simply a matter of hooking up the MIDI ports and off we went. We recorded 7 songs using his Roland kit, and I replaced the awful GM sounds with the DFH VSTi, and wow, I cannot tell the difference between DFH and real live drum sounds. The kick drum is deep and resonant, blending well with the toms, whilst the overheads are crisp and clear, without that fake electronic sheen that seems to be prevalent in many drum libraries out there (LM7, Reason, etc.). I am confident that with some tweaking I will have a drum sound that can match that of a kit recorded in a professional studio. I have not yet been able to explore all the functions of this remarkable software, which also includes a 108-piece percussion library. I'll get back to you in a couple o' hundred years.

6.3.06

Year of the Blog

Welcome to my page. In this illustrious year of the dog, I shall be making my blog debut, making 2006, for me, the year of the blog! Seriously though, this page will chronicle my travels through time and space, with special attention to my studies in second year Music Technology at the University of Adelaide. Through this blog I will endeavour to keep the cyberspace hordes at bay whilst keeping an informative journal of my experiences within my course. I cannot guarantee that it will change your life, but if it does, then you probably spend too much time on the net. Read on...