To turn this sound into a usable modulated reverb, I think you could take said sample and sweep a shallow low-pass filter's cutoff down over time while fading it out (shrinking it amplitude-wise).
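As a rough sketch of that idea (the function name and parameters here are my own, assuming a mono numpy array `sample` at rate `fs`), a time-varying one-pole lowpass whose cutoff sweeps down, plus an exponential fade, could look like:

```python
import numpy as np

def sample_to_ir(sample, fs, fc_start=8000.0, fc_end=500.0, tau=1.5):
    """Darken and fade a sample so it decays like a reverb tail."""
    n = len(sample)
    t = np.arange(n) / fs
    # low-pass cutoff sweeps down exponentially over the sample's duration
    fc = fc_start * (fc_end / fc_start) ** (t / t[-1])
    a = 1.0 - np.exp(-2.0 * np.pi * fc / fs)    # per-sample one-pole coefficient
    out = np.empty(n)
    state = 0.0
    for i in range(n):
        state += a[i] * (sample[i] - state)     # time-varying one-pole lowpass
        out[i] = state
    out *= np.exp(-t / tau)                     # amplitude fade-out
    return out / (np.abs(out).max() + 1e-12)    # normalize for use as an IR
```

The result could then be loaded into any convolution reverb as an impulse file.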
This thread inspired me to experiment a little, in particular with the convolution reverb in Reaper. I was immediately struck by how much it is capable of in terms of impulse creation: it has an algorithm to create a response, can load a file, and can also make a test tone and deconvolve the result. You can then apply various effects to a response (trim start and length, filter, reverse, gain, etc.), meaning you need little preprocessing.

I wanted to check my intuition for how the spectral content of the response translates into the reverb. Assuming the impulse has equal energy at all frequency bands: in a purely dissipative reverb, each frequency band will monotonically decrease in amplitude over time. For a dissipative reverb with resonance, certain bands will decrease much more slowly than their peers. For a modulated reverb, certain bands will increase and decrease (with a downward trend). For a "feature reverb", all bands vary arbitrarily. For a rhythmic reverb, the bands drop significantly and then recur in repetitive pulses.

This understanding leads me to some thoughts. To use an arbitrary sound as a believable modulated reverb, you need the sound to have broad spectral content (e.g. city ambience).
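One way to check that intuition on any candidate IR (a minimal sketch; `band_decay_slopes` is my own name, assuming numpy and scipy are available) is to fit a decay slope to each band of a spectrogram:

```python
import numpy as np
from scipy.signal import stft

def band_decay_slopes(ir, fs, nperseg=1024):
    """Fit a dB-per-second decay slope to each frequency band of an IR."""
    f, t, Z = stft(ir, fs=fs, nperseg=nperseg)
    mag_db = 20 * np.log10(np.abs(Z) + 1e-12)
    # one least-squares line per band; columns of mag_db.T are time envelopes
    slopes = np.polyfit(t, mag_db.T, 1)[0]
    return f, slopes
```

In a purely dissipative IR every slope should be clearly negative; slopes near zero would flag resonant bands, and you'd have to inspect the envelopes themselves (non-monotonic, or pulsing) to spot modulated or rhythmic behaviour.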
I think I might've worded the OP poorly, because the first few posts are about gear - which is cool, and a nuance worth considering - whereas what I think is the beauty of convolution is that it's pretty agnostic as to which plugin you use: they all load audio files and apply the same hundreds-of-years-old operation to them and the live audio to produce the same kinds of results, meaning we can share these files once we've created them and inspire each other. (If anyone has suggestions on how to clarify the OP, I'm open to them. I don't want to fully discourage discussion about various plugins, modules and Max patches, as they do add to the conversation.)

I'm not tremendously experienced with convolution reverbs, but some years ago a composer friend of mine expressed surprise that they hadn't taken over the world.
I'm probably using "sample" here wrong - I'm using it to mean a slice of the frequency spectrum; I had trouble finding what the actual term should be.

Actually you're using it right - we are talking about convolution in the time domain. By a mathematical trick, convolving two signals turns out to be the same as taking the DFT of each signal and multiplying them, and the FFT algorithm makes this much faster than brute-force sample-by-sample convolution, especially when you can pre-calculate the DFT of the impulse response; the power spectrum per se never really comes into it. The more complicated tricks using "non-uniform partitioning" are just about refining this method for reduced latency (added latency being the tradeoff of using the FFT instead of a brute-force windowed convolution). (And I'm sorry for derailing a second thread with this discussion, but at least it is more directly relevant here…)

On-topic: if you happen to own iZotope Trash 2, it's also worth digging into as a nice convolution engine. It comes with some funny IRs, like animal sounds, and allows importing your own.

Thanks, that explains a lot that I had an inkling of but didn't fully understand. I've tried to read about convolution online, but the descriptions of the math (which I don't really understand in the first place) and how it's applied in audio always seemed disparate to me.

I don't think you're off-topic at all; understanding how convolution works, and how devices using it work, is very useful in allowing us to use it to create interesting sounds beyond the trial-and-error approach of trying your entire sample library on a sound and seeing what doesn't sound terrible.
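To make the DFT-multiplication point above concrete, here's a minimal sketch (`fft_convolve` is a made-up name, assuming numpy) that matches brute-force time-domain convolution, with the IR's spectrum computable once up front:

```python
import numpy as np

def fft_convolve(x, h):
    """Linear convolution via the DFT: pad, transform, multiply, invert."""
    n = len(x) + len(h) - 1
    nfft = 1 << (n - 1).bit_length()      # next power of two >= n
    H = np.fft.rfft(h, nfft)              # for a fixed IR, compute this once
    X = np.fft.rfft(x, nfft)
    return np.fft.irfft(X * H, nfft)[:n]  # trim the zero-padding

# sanity check against brute-force time-domain convolution
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)   # stand-in for the "live" signal
h = rng.standard_normal(500)    # stand-in for the impulse response
assert np.allclose(fft_convolve(x, h), np.convolve(x, h))
```

Real-time convolvers do the same thing on successive blocks of input (overlap-add), which is why the FFT block size sets the added latency and why partitioning the IR is how that latency gets reduced.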