It does work and this is why...
I bought the VOICEPRINT DI because it was developed to measure the impulse response of my guitar, leveraging the processing power of my iPhone to capture its one-of-a-kind voice and give my pickup a more authentic sound. A search on impulse response makes clear that the idea is to use acoustic modeling to overcome the inherent limitations of my transducers with an algorithm that adds the complex acoustical depth I hear when the guitar is played. So the process starts with the mic - in an acoustic space - along with everything that implies.
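For anyone unfamiliar with the concept, the core idea can be sketched in a few lines: a measured impulse response is applied to the pickup signal by convolution. The numbers here are made up for illustration (a hypothetical 3-tap IR and a single "pluck" impulse), not anything the Voiceprint actually measures.

```python
import numpy as np

# Hypothetical measured impulse response (3 taps, made up for illustration)
ir = np.array([1.0, 0.5, 0.25])

# A pickup signal containing a single impulse (an idealized "pluck")
pickup = np.array([0.0, 1.0, 0.0, 0.0])

# The pickup signal re-voiced by the IR: the IR's shape appears
# wherever the input signal has energy.
voiced = np.convolve(pickup, ir)
print(voiced)  # [0.   1.   0.5  0.25 0.   0.  ]
```

The point of the sketch: whatever the mic captured (including any room cancellations) becomes part of the IR, and from then on it colors every note played through it.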
All of my voiceprints (including the most recent one described yesterday) have been made in the same relatively small room, and in each case the mic and the guitar were about halfway between the ceiling and the floor. More recent attempts were also about halfway between the closest pair of walls. A search on standing waves, room modes, and nulls shows that destructive interference occurs at the midpoint between any pair of parallel walls at the fundamental axial-mode frequency (and at odd multiples of it), and since the majority of a guitar's acoustical energy is in the lowest frequencies, that cancellation eliminates a lot of the low end of the guitar from the impulse response that's measured.
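The arithmetic is easy to check. Assuming a hypothetical 4 m wall spacing (my actual room dimensions are a stand-in here), the axial-mode frequencies between a pair of rigid parallel walls are f_n = n·c/(2L), and a mic at the midpoint sits in a pressure null for every odd-order mode:

```python
C = 343.0  # speed of sound in air, m/s (room temperature)

def axial_mode_freqs(spacing_m, n_modes=4):
    """Axial-mode frequencies f_n = n * c / (2L) between parallel walls."""
    return [n * C / (2 * spacing_m) for n in range(1, n_modes + 1)]

for n, f in enumerate(axial_mode_freqs(4.0), start=1):
    # Pressure along the axis varies as cos(n*pi*x/L), so a mic at the
    # midpoint (x = L/2) is in a pressure null for every odd-order mode.
    null_at_center = (n % 2 == 1)
    print(f"mode {n}: {f:.1f} Hz, null at midpoint: {null_at_center}")
```

For a 4 m spacing that puts nulls at roughly 43 Hz and 129 Hz, squarely in the guitar's low end, which is consistent with what I was hearing.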
I haven't found much that explains the details of this processing, but it stands to reason that the boomy, tubby, loose low end many people are getting comes from the algorithm trying to optimize low-frequency energy that isn't there to be optimized. The end result is an increase in low-frequency noise, which is what I was getting before.
The inexpensive anechoic chamber isolates the mic from the room acoustics, so it hears the way the guitar really sounds, and that allows the algorithm to work as intended. To be clear, that chamber is open-cell foam. Something stiffer might create the same problems at higher frequencies.
The output of my DI is connected to the input of my mixer so I can adjust the preamp level to avoid overload and optimize its signal-to-noise ratio. The first thing I noticed, even before I listened to the new voiceprint, was that the input was now overdriven because there was so much more REAL low-frequency energy. When I did listen, that was obviously the case. I haven't tried to document the increased signal level, but I would guess it's easily twice as loud as it was before, if not more.
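"Twice as loud" is my ear's guess, but even the conservative reading of doubling the signal voltage works out to about a 6 dB jump, which is plenty to overdrive a preamp that was gain-staged for the quieter, bass-shy voiceprints:

```python
import math

def db_gain(voltage_ratio):
    """Convert a voltage (amplitude) ratio to decibels: 20 * log10(ratio)."""
    return 20 * math.log10(voltage_ratio)

print(round(db_gain(2.0), 1))  # 6.0 -> doubling the voltage adds ~6 dB
```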
I think different acoustic spaces probably explain most of the variation people are getting in their voiceprints. This starts with an acoustical process, and a significant flaw in that process cannot be corrected by the algorithm or any EQ later in the processing.
I'm definitely glad to find that my evaluation of the Voiceprint DI before I bought it was correct, and that it can work as described when used in a way that keeps it from working against itself.
I hope this helps.