Production techniques

  • AstroAndMusic replied:
    Originally posted by Sampleconstruct:
    That sounds intriguing; how can we help you to get over that hurdle and start programming NOW?
    Nothing can be done except sitting down and doing it with the unusually average brain that I have. No one else can learn it for me. Boo hoo hoo :cry: It's nice to know that others may be interested in this, though -- that gives me a bit more motivation. I'm anxious to start working on some projects I've been developing "on paper" for the past several months to see how they develop in actuality. Stay tuned and ..... peace.


  • Sampleconstruct replied:
    Originally posted by manducator:
    Originally posted by Sampleconstruct:
    Another convolution experiment: processing a physically modelled bell sound (playing a slow tonal impro) with a multiband convolution plugin (Melda). Each of the 3 bands carries a different female vocal sample (recorded last week for a sound design project); the band crossover frequencies are modulated by a slow LFO. Some VVVerb was added in the mix.

    https://soundcloud.com/sampleconstruct/convoluted-vox
    Beautiful!
    Thank you


  • Sampleconstruct replied:
    Originally posted by AstroAndMusic:
    Originally posted by Sampleconstruct:
    Another convolution experiment: processing a physically modelled bell sound (playing a slow tonal impro) with a multiband convolution plugin (Melda). Each of the 3 bands carries a different female vocal sample (recorded last week for a sound design project); the band crossover frequencies are modulated by a slow LFO. Some VVVerb was added in the mix.
    Wow, that's really nice. I'm soooooooo excited about writing my own convolution software in SuperCollider - now that your example is proof (to me) that audio convolution can be so interesting. As an astronomer and programmer, my main line of work was high-resolution astronomical imaging, so all of the optical "tricks" that I've learned over the past 20+ years in astronomy can pretty much be directly applied to audio. Waves is waves as far as the fundamental algorithms go. The "bible" of optics and a much-used book in my library is Max Born and Emil Wolf's 'Principles of Optics'. Most of what's contained in this book can be directly applied to audio. Is there an equivalent bible for audio?

    I just gotta get over this stupid hurdle of learning yet another f*cking programming language! Argh!!!!!

    Peace to All
    That sounds intriguing; how can we help you to get over that hurdle and start programming NOW?


  • AstroAndMusic replied:
    Originally posted by Sampleconstruct:
    Another convolution experiment: processing a physically modelled bell sound (playing a slow tonal impro) with a multiband convolution plugin (Melda). Each of the 3 bands carries a different female vocal sample (recorded last week for a sound design project); the band crossover frequencies are modulated by a slow LFO. Some VVVerb was added in the mix.
    Wow, that's really nice. I'm soooooooo excited about writing my own convolution software in SuperCollider - now that your example is proof (to me) that audio convolution can be so interesting. As an astronomer and programmer, my main line of work was high-resolution astronomical imaging, so all of the optical "tricks" that I've learned over the past 20+ years in astronomy can pretty much be directly applied to audio. Waves is waves as far as the fundamental algorithms go. The "bible" of optics and a much-used book in my library is Max Born and Emil Wolf's 'Principles of Optics'. Most of what's contained in this book can be directly applied to audio. Is there an equivalent bible for audio?

    I just gotta get over this stupid hurdle of learning yet another f*cking programming language! Argh!!!!!

    Peace to All
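
    The "waves is waves" point is essentially the convolution theorem, the same identity behind deconvolution in imaging and convolution reverb in audio. Before wrestling with SuperCollider (which does ship real-time convolution UGens), a quick numpy sanity check of that identity - just a sketch, nothing more:

    Code:
        # Convolution in the time domain equals multiplication in the
        # frequency domain - the identity shared by optics and audio DSP.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal(1024)      # "signal"
        h = rng.standard_normal(256)       # "impulse response" (PSF analogue)

        n = len(x) + len(h) - 1            # full linear-convolution length
        direct = np.convolve(x, h)
        via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

        print(np.allclose(direct, via_fft))   # True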


  • manducator replied:
    Originally posted by Sampleconstruct:
    Another convolution experiment: processing a physically modelled bell sound (playing a slow tonal impro) with a multiband convolution plugin (Melda). Each of the 3 bands carries a different female vocal sample (recorded last week for a sound design project); the band crossover frequencies are modulated by a slow LFO. Some VVVerb was added in the mix.

    https://soundcloud.com/sampleconstruct/convoluted-vox
    Beautiful!


  • Sampleconstruct replied:
    Another convolution experiment: processing a physically modelled bell sound (playing a slow tonal impro) with a multiband convolution plugin (Melda). Each of the 3 bands carries a different female vocal sample (recorded last week for a sound design project); the band crossover frequencies are modulated by a slow LFO. Some VVVerb was added in the mix.

    https://soundcloud.com/sampleconstruct/convoluted-vox
    Last edited by Sampleconstruct; 07-28-2013, 01:52 AM.
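
    Melda's plugin does all of this in real time with modulated crossovers; just to illustrate the basic idea offline, here is a rough Python sketch (scipy/soundfile, mono files, made-up file names, and static crossover frequencies instead of the LFO):

    Code:
        # 3-band split, a different vocal sample as IR per band, then sum.
        import numpy as np
        import soundfile as sf
        from scipy.signal import butter, sosfilt, fftconvolve

        bell, sr = sf.read("bell_impro.wav")          # dry source (mono)
        irs = [sf.read(f)[0] for f in
               ("vox_low.wav", "vox_mid.wav", "vox_high.wav")]

        lo, hi = 300.0, 2500.0                        # fixed crossovers in Hz
        bands = [
            sosfilt(butter(4, lo, "lowpass", fs=sr, output="sos"), bell),
            sosfilt(butter(4, [lo, hi], "bandpass", fs=sr, output="sos"), bell),
            sosfilt(butter(4, hi, "highpass", fs=sr, output="sos"), bell),
        ]

        # Convolve each band with its own (peak-normalised) vocal sample.
        outs = [fftconvolve(b, ir / np.max(np.abs(ir)))
                for b, ir in zip(bands, irs)]

        mix = np.zeros(max(len(o) for o in outs))
        for o in outs:
            mix[:len(o)] += o
        mix /= np.max(np.abs(mix))                    # normalise the sum
        sf.write("convoluted_vox_sketch.wav", mix, sr)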


  • Sampleconstruct replied:
    A piano impro played simultaneously by several students of musicology who participated in a seminar about electronic sound generation that I conducted at the university in Münster, Germany.
    This impro was processed with 2 convolution reverbs, each one using, as an impulse response, a segment from another impro we did on the same day. Some algorithmic reverb (B2) and EQ was added in the mix, and a modulated stereo spreader is active on one of the convolution reverbs. No dry signal is audible throughout the track; it's all the convoluted signals.

    http://soundcloud.com/sampleconstruct/convoluted-mystery-seminar
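
    The core trick here - any recorded segment can serve as an impulse response - is the same fftconvolve idea as in the sketch further up, just with two parallel "reverbs" and no dry signal at all (rough sketch, placeholder file names, none of the B2/EQ/spreader stages reproduced):

    Code:
        # Two convolution "reverbs", each using a segment of another
        # recording as its impulse response; output is 100% wet.
        import numpy as np
        import soundfile as sf
        from scipy.signal import fftconvolve

        dry, sr = sf.read("piano_impro.wav")          # mono source
        wets = []
        for f in ("impro_segment_1.wav", "impro_segment_2.wav"):
            ir, _ = sf.read(f)
            wets.append(fftconvolve(dry, ir / np.max(np.abs(ir))))

        out = np.zeros(max(len(w) for w in wets))
        for w in wets:
            out[:len(w)] += 0.5 * w                   # equal mix of both reverbs
        out /= np.max(np.abs(out))
        sf.write("convoluted_piano_sketch.wav", out, sr)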


  • Sampleconstruct replied:
    Originally posted by MetaDronos:
    Originally posted by Sampleconstruct:
    RX2 is really a fantastic all-rounder tool, not only for repairing/cleaning stuff but also for sample editing and sound design if you abuse it in a sensible way.
    Could you elaborate on this? I'm always interested in the subject of using and misusing tools to produce interesting sounds.
    Sure, here are a few examples:

    One can use the denoiser in many ways. Usually you would take a footprint of the noise you want to remove and then adjust the amount of denoising to taste. But you can also filter a signal (e.g. a cello note) with a very steep bandpass, so that only a small frequency band is left, then use that as the "noise" footprint and apply it to the full signal. A partial of the signal will be removed; with very high reduction settings this can lead to very interesting new sounds.

    Or take a noise footprint from, say, a recording made in a factory and apply it to a totally different signal which has nothing to do with the original footprint source.

    Or take a recording of a bird flock, take a footprint of the background noise and totally overdo the reduction (100% only on the noise parameter, not on the tonality parameter) so that you're left with only the bird sounds and nothing else, sounding like under water or behind thick glass. Then import that into a sampler or granulator and start building textures, or import it into Alchemy via additive resynthesis (resynth algos hate noisy backgrounds), remove the pitch modulation, stretch it to almost a standstill and you have the most beautiful spectral drone texture, as all the pitches sung by the birds are now harmonics of the root note (sorry if I'm repeating myself here).

    Or use the spectral repair function to create new sounds. Take an audio file, select a portion in the middle of it - full frequency band, e.g. 3 seconds long - then use e.g. the "Replace" algo with enough surrounding length, so that RX2 will interpolate the removed portion with frequencies from before and after the gap. This can yield very interesting results. You can then isolate only the interpolated portion of the resulting file and stretch those 3 seconds to 5 minutes with apps like paulstretch. Great for some interesting drone textures you would hardly create otherwise. And, or, or and, and, or or or or, and
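
    RX2's denoiser is proprietary, but the "borrowed footprint" idea above boils down to plain spectral subtraction, which is easy to play with in a few lines (rough sketch only: scipy STFT, made-up file names, and a deliberately overdone subtraction factor):

    Code:
        # Build a magnitude "footprint" from one recording and over-subtract
        # it from the spectrum of another - a crude stand-in for abusing a
        # denoiser with a borrowed noise profile.
        import numpy as np
        import soundfile as sf
        from scipy.signal import stft, istft

        sig, sr = sf.read("bird_flock.wav")       # signal to process (mono)
        fp, _ = sf.read("footprint_source.wav")   # any other recording as "noise"

        nper = 2048
        _, _, S = stft(sig, fs=sr, nperseg=nper)
        _, _, F = stft(fp, fs=sr, nperseg=nper)

        profile = np.mean(np.abs(F), axis=1, keepdims=True)  # average footprint
        reduction = 4.0                                      # deliberately overdone
        mag = np.maximum(np.abs(S) - reduction * profile, 0.0)
        S_clean = mag * np.exp(1j * np.angle(S))             # keep original phase

        _, out = istft(S_clean, fs=sr, nperseg=nper)
        out /= np.max(np.abs(out))
        sf.write("footprint_abused.wav", out, sr)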


  • MetaDronos replied:
    Originally posted by Sampleconstruct:
    RX2 is really a fantastic all-rounder tool, not only for repairing/cleaning stuff but also for sample editing and sound design if you abuse it in a sensible way.
    Could you elaborate on this? I'm always interested in the subject of using and misusing tools to produce interesting sounds.


  • falls a star replied:
    I tried the demo last night. The regular version isn't that special to me (aside from a nice interface), but RX2 Advanced is really pretty amazing. Deconstruct is a wonderful thing. I can't really justify the cost right now though, sadly.


  • Sampleconstruct replied:
    Originally posted by falls a star:
    First you get me interested in Iris, then in RX2. I hope Izotope is giving you a commission. :D

    Cool experiment.
    Thanks - nah, no commission, just an invitation for beta-testing, which I declined as I'm already on too many beta teams and it's too time consuming - I need apps that work.
    RX2 is really a fantastic all-rounder tool, not only for repairing/cleaning stuff but also for sample editing and sound design if you abuse it in a sensible way.


  • falls a star replied:
    First you get me interested in Iris, then in RX2. I hope Izotope is giving you a commission. :D

    Cool experiment.


  • Sampleconstruct replied:
    Here is an experiment I made today as a chillout session after too many hours of programming. I recorded some windchimes - a one-minute texture playing the chimes with my fingers, captured with 3 mics (L-C-R) - retuned it in Melodyne to an Indian raga scale and transposed it downwards quite a bit, then removed the resulting artifacts and balanced the frequencies with RX2.

    I then imported the sample into Padshop Pro, using several grain streams laying out a big chord over 5 octaves in D minor (plus something) and adjusting the playback speed on the fly. A tuned highpass filter (key follow) is active in PSP, and the (high) filter resonance is also tweaked on the fly so the tonality increases and decreases over time. This signal is sent into ÜberMod inserted on the PSP track - a huge space with subtle pitch modulation, about 50% wet. Then the signal goes into a 3-band instance of Saturn, with different saturation modes and feedback amounts active in the 3 bands and some LFOs controlling the crossover and feedback frequencies. This is finally sent to a bus where a huge dual-engine B2 space is happily welcoming the signal to do its duty.

    http://soundcloud.com/sampleconstruct/indian-chimes
    Last edited by Sampleconstruct; 05-01-2013, 11:32 AM.
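
    Padshop's grain engine obviously can't be reproduced in a few lines, but the granular part of the chain - short windowed grains read from a slowly drifting position in a sample and overlap-added into a long texture - looks roughly like this (sketch only, arbitrary parameters, made-up file names):

    Code:
        # Bare-bones granular texture: Hann-windowed grains from a slowly
        # creeping (and slightly jittered) read position in the source.
        import numpy as np
        import soundfile as sf

        src, sr = sf.read("chimes_retuned.wav")   # mono source sample
        out_len = 60 * sr                         # one minute of texture
        grain = int(0.120 * sr)                   # 120 ms grains
        hop = grain // 4                          # dense overlap
        win = np.hanning(grain)
        rng = np.random.default_rng(1)

        out = np.zeros(out_len + grain)
        n_grains = out_len // hop
        for i, pos in enumerate(range(0, out_len, hop)):
            read = int(i / n_grains * (len(src) - grain))   # slow creep
            read += int(rng.integers(-grain, grain))        # position jitter
            read = int(np.clip(read, 0, len(src) - grain))
            out[pos:pos + grain] += win * src[read:read + grain]

        out /= np.max(np.abs(out))
        sf.write("chime_texture_sketch.wav", out, sr)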


  • Sampleconstruct replied:
    Originally posted by S1gnsOfL1fe:
    And you, Sampleconstruct, are the PERFECT person to start a thread like this!!!!!!! :cool: I can't wait to try this out. As a huge fan of Alchemy (it's the main tool in my arsenal) I look forward to gaining more knowledge about these tools from users like yourself.

    Cheers and keep 'em coming!!!! :tu:
    Thanks - let's see what we can come up with in this thread then...


  • S1gnsOfL1fe replied:
    And you, Sampleconstruct, are the PERFECT person to start a thread like this. :cool: I can't wait to try this out. As a huge fan of Alchemy (it's the main tool in my arsenal) I look forward to gaining more knowledge about these tools from users like yourself.

    Cheers and keep 'em coming. :tu:
    Last edited by S1gnsOfL1fe; 05-01-2013, 11:00 PM.
