- Traden4Alpha
**Posts:**23951**Joined:**

As outrun said, there's a one-to-one correspondence between the original signal and the DWT coefficients. Setting any non-zero DWT coefficient to zero will lose some information and change the reconstructed time-domain signal. Have you considered using a high-pass filter to remove the low-frequency components?

- dimitriosg
**Posts:**4**Joined:**

The DWT is a time-scale representation. In contrast to the FFT, where the time-frequency resolution is fixed throughout the time-frequency plane, in the DWT higher frequencies have better time resolution, while lower frequencies have worse time resolution but better frequency resolution: the frequency ranges (intervals) at lower frequencies get narrower and narrower, since the signal passed through the DWT filters is halved in size at every level of the transform.

Data reduction, as with every transform, can happen because we go from the time domain, where the signal is usually not sparse (it has few if any zero values), to a representation where the signal might be sparser (some of the coefficients will be zero), and those coefficients can be discarded without loss of information. Another option for dimensionality reduction is principal component analysis (PCA).

You can easily find good DWT tutorials on the web to better grasp what the transform does.

Best,
Dimitrios
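To see the halving and the sparsity concretely, here is a minimal pure-Python sketch of the Haar case (function names are illustrative, not a library API):

```python
import math

def haar_step(x):
    """One level of the Haar DWT: pairwise sums and differences, scaled by 1/sqrt(2)."""
    s = math.sqrt(2)
    n = len(x) // 2
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(n)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(n)]
    return approx, detail

# A piecewise-constant 8-sample signal: its detail coefficients come out mostly zero.
x = [1.0, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 2.0]
details = []
approx = x
while len(approx) > 1:
    approx, d = haar_step(approx)   # each level halves the signal length
    details.append(d)

nonzero = sum(1 for d in details for v in d if abs(v) > 1e-12)
# For this signal only 1 of the 7 detail coefficients is non-zero,
# so most of the transform can be discarded without losing information.
```

The detail bands have lengths 4, 2, 1 — exactly the halving described above — and the sparsity is what makes the data reduction possible.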

blackscholes - I think I understand your confusion. If I understand you correctly, you didn't know how to remove low-frequency components from the time series, so you didn't really need wavelets at all: you want to remove low-frequency components from your signal.

There are essentially two ways of doing this: either do an FFT, zero out the low frequencies, and then inverse-FFT; or design/use a filter that you apply directly to your signal (i.e., in the time domain). This could be as simple as subtracting the running mean, say. In other words, you (a) identify the frequencies you want to remove (by doing an FFT of your data), then (b) design a filter that removes those frequencies.
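As a sketch of the time-domain route, subtracting a centered running mean acts as a crude high-pass filter (pure Python, illustrative only; a real design would use a proper FIR/IIR filter):

```python
def remove_running_mean(x, window):
    """Crude high-pass filter: subtract a centered running mean (the low-frequency trend)."""
    half = window // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(x[i] - sum(x[lo:hi]) / (hi - lo))  # deviation from the local mean
    return out

# A slow trend plus a fast wiggle: the filter keeps the wiggle, removes the trend.
signal = [0.1 * t + (1 if t % 2 else -1) for t in range(50)]
highpassed = remove_running_mean(signal, 9)
```

A constant (zero-frequency) input maps to all zeros, which is exactly the low-frequency rejection being described.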

- blackscholes
**Posts:**87**Joined:**

I'm back. I did some more reading. I found a Wavelet Tutorial by Robi Polikar. I understand everything until this graph, and I think this is where I need to make the connection: http://users.rowan.edu/~polikar/WAVELET ... 004.jpg

The top graph is the original signal and the bottom graph is the DWT coefficients after the transform was applied. Here is the verbiage from the website:

"The frequency bands that are not very prominent in the original signal will have very low amplitudes, and that part of the DWT signal can be discarded without any major loss of information, allowing data reduction. Figure 4.2 illustrates an example of how DWT signals look like and how data reduction is provided. Figure 4.2a shows a typical 512-sample signal that is normalized to unit amplitude. The horizontal axis is the number of samples, whereas the vertical axis is the normalized amplitude. Figure 4.2b shows the 8 level DWT of the signal in Figure 4.2a. The last 256 samples in this signal correspond to the highest frequency band in the signal, the previous 128 samples correspond to the second highest frequency band and so on. It should be noted that only the first 64 samples, which correspond to lower frequencies of the analysis, carry relevant information and the rest of this signal has virtually no information. Therefore, all but the first 64 samples can be discarded without any loss of information. This is how DWT provides a very effective data reduction scheme."

I don't quite get how all but the first 64 samples can be discarded without any loss of information. I understand that the first 64 DWT coefficients are prominent, but how does that correspond with the first 64 samples of the original signal in the first graph? Most of the information is in the middle part of the signal. Any clue?

- Traden4Alpha
**Posts:**23951**Joined:**

Quote, originally posted by blackscholes: "I don't quite get how all but the first 64 samples can be discarded without any loss of information. I understand that the first 64 DWT coefficients are prominent but how does that correspond with the first 64 samples of the original signal from the first graph. Most of the information is in the middle part of the signal. Any clue?"

The "first 64 samples" of the DWT are the lower-frequency components of the ENTIRE time domain (not just of the first samples of the original signal).
The original signal contains a relatively low-frequency component (of about 0.09 cycles/sample). Thus, there's little information in the highest-frequency coefficients 257-512 (band up to 0.5 cycles/sample) or the second-highest-frequency coefficients 129-256 (up to 0.25 cycles/sample). Most of the information is in coefficients 33-64 (up to 0.125 cycles/sample) and coefficients 17-32 (up to 0.0625 cycles/sample).

Although each DWT coefficient might have some correspondence to a subarea of the time-domain signal, it's not a simple relationship like "the first 64 DWT coefficients correspond to the first 64 samples of the original signal."

To really understand what's going on, I strongly recommend you generate some signals, run the DWT, and look at how the DWT coefficients change when you change the shape of your signal, scale it wider and narrower, or move it earlier or later in time.
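To illustrate that experiment, here is a small pure-Python Haar sketch (illustrative names, not Matlab's toolbox), with the coefficient vector laid out like the tutorial's Figure 4.2b — coarse bands first, finest details last:

```python
import math

def haar_step(x):
    """One Haar level: pairwise sums and differences, scaled by 1/sqrt(2)."""
    s = math.sqrt(2)
    n = len(x) // 2
    return ([(x[2 * i] + x[2 * i + 1]) / s for i in range(n)],
            [(x[2 * i] - x[2 * i + 1]) / s for i in range(n)])

def dwt(x):
    """Full Haar decomposition: [approximation, coarse details, ..., finest details]."""
    coeffs = []
    approx = list(x)
    while len(approx) > 1:
        approx, d = haar_step(approx)
        coeffs = d + coeffs   # finer detail bands go toward the end of the vector
    return approx + coeffs

# The same short pulse, early vs. late in a 16-sample signal.
early = [1.0, 1.0] + [0.0] * 14
late = [0.0] * 14 + [1.0, 1.0]

# The leading (lowest-frequency) coefficient is identical for both signals:
# it summarizes the ENTIRE time domain, not the first samples of the signal.
# Only the positions of the non-zero detail coefficients move with the pulse.
```

Moving the pulse shifts which coefficients *within* each band are non-zero, which is exactly why "first 64 DWT coefficients" does not mean "first 64 samples".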

- blackscholes
**Posts:**87**Joined:**

Thanks all!! I think I finally got it. I was stumped on the meaning of 'data reduction' in this context and 'dropping insignificant data'.

I thought dropping insignificant data meant actually removing it from the data set, but based on the discussion here and what I've read, it really means discarding the information by setting it to zero, so that when you reconstruct the signal the non-essential parts don't show up anymore. I had thought you could remove 15 coefficients and go back to the original signal with 15 fewer samples.

I played around with this in Matlab, taking a 256-sample signal, performing a wavelet transform, and thresholding some of the coefficients to zero, leaving me with 128 significant coefficients, then using those 128 coefficients to go back to the original signal. When I did that, the original signal was not the same as the reconstructed signal. However, when I just zeroed out the wavelet coefficients instead of actually removing them from the vector and then reconstructed the signal, it matched up with the original one with some noise removed. Whew!!!
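In the same spirit as that Matlab experiment, here is a one-level pure-Python Haar sketch (illustrative, not the Wavelet Toolbox) showing why zeroing works while deleting does not: the inverse transform needs the coefficient vector at its full length.

```python
import math

def haar_step(x):
    """Forward: pairwise sums and differences scaled by 1/sqrt(2)."""
    s = math.sqrt(2)
    n = len(x) // 2
    return ([(x[2 * i] + x[2 * i + 1]) / s for i in range(n)],
            [(x[2 * i] - x[2 * i + 1]) / s for i in range(n)])

def haar_inverse_step(approx, detail):
    """Inverse: rebuild each sample pair from one approximation and one detail coefficient."""
    s = math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

def denoise(x, threshold):
    """Zero out (don't delete!) small detail coefficients, then invert."""
    approx, detail = haar_step(x)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]  # same length as before
    return haar_inverse_step(approx, detail)

noisy = [1.0, 1.1, 0.9, 1.0, 3.0, 3.1, 2.9, 3.0]
clean = denoise(noisy, 0.2)
# clean has the same length as noisy; each pair is replaced by its local mean,
# i.e. the small high-frequency wiggles are smoothed away.
```

If you instead deleted the thresholded coefficients, the inverse step would pair up the wrong approximation and detail values (or run out of them entirely), which matches the mismatch seen in the Matlab experiment.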

Last edited by blackscholes on March 15th, 2013, 11:00 pm, edited 1 time in total.

- blackscholes
**Posts:**87**Joined:**

Quote, originally posted by outrun: "You can also try the wavelet packet transform. That one tries various forms of wavelet transforms and allows you to pick the most compact representation of your signal. Also: the Haar wavelets (and also other wavelets like the Daubechies family) are orthogonal. In that sense this resembles a PCA (although PCA is more optimal, the resemblance is in the approximation of a signal as a limited sum of orthogonal vectors)."

Are you referring to the function 'wpdec' in Matlab?
