SERVING THE QUANTITATIVE FINANCE COMMUNITY

 
User avatar
blackscholes
Topic Author
Posts: 87
Joined: February 16th, 2012, 12:58 pm

Data Reduction using Discrete Wavelet Transform

March 11th, 2013, 1:27 am

Is anyone familiar with discrete wavelet transforms? I understand it's possible to reduce data without loss of information by using discrete wavelet transforms. I don't quite get what's going on and wanted to see if there are any physics/electrical engineers out there. I just need to understand the basic concept, nothing theoretical or mathematical. How does it reduce data?
 
User avatar
Stale
Posts: 209
Joined: November 7th, 2006, 3:20 pm

Data Reduction using Discrete Wavelet Transform

March 11th, 2013, 6:32 am

Wavelets are a time-frequency decomposition of your time series: the transform splits the series into different components. As such it won't reduce your dataset unless you drop some of these components, so reduction of data requires you to lose something.

S
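[Editorial illustration, not part of the original post.] A minimal numpy sketch of one level of the orthonormal Haar DWT makes Stale's point concrete: the decomposition splits the series into an approximation half and a detail half, but the two halves together are exactly as large as the input, and no energy is lost. The transform by itself reduces nothing.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar DWT: pairwise averages
    (low-frequency 'approximation') and pairwise differences
    (high-frequency 'detail'), each half the input length."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_step(x)

print(len(a) + len(d))  # 8 -- same total size as the input
# Orthonormality means energy is preserved (Parseval):
print(np.allclose(np.sum(a**2) + np.sum(d**2), np.sum(x**2)))  # True
```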
 
User avatar
blackscholes
Topic Author
Posts: 87
Joined: February 16th, 2012, 12:58 pm

Data Reduction using Discrete Wavelet Transform

March 11th, 2013, 12:12 pm

Thanks for the responses, Stale and outrun.

So I pass an original signal through a filter (e.g. Haar) and it performs a transformation decomposing the signal into its time-frequency constituents (coefficients). I assume that data reduction would require dropping some of these transformed coefficients.

Let's say I have a time signal of 256 points and I apply a Haar wavelet transformation, which generates 256 coefficients. I decide to drop 128 of them.

1) There is no loss in accuracy, right, since those 128 coefficients I dropped are nonessential?
2) Would this speed up computation, since I only have 128 points remaining?
 
User avatar
blackscholes
Topic Author
Posts: 87
Joined: February 16th, 2012, 12:58 pm

Data Reduction using Discrete Wavelet Transform

March 12th, 2013, 12:41 pm

Quote, originally posted by outrun:

    Quote, originally posted by blackscholes:

        Thanks for the responses, Stale and outrun. So I pass an original signal through a filter (e.g. Haar) and it performs a transformation decomposing the signal into its time-frequency constituents (coefficients). I assume that data reduction would require dropping some of these transformed coefficients. Let's say I have a time signal of 256 points and I apply a Haar wavelet transformation, which generates 256 coefficients. I decide to drop 128 of them. 1) There is no loss in accuracy, right, since those 128 coefficients I dropped are nonessential? 2) Would this speed up computation, since I only have 128 points remaining?

    No to both.

    1) That's why I mentioned that it's a linear transform y = A x. To reconstruct, you essentially need (from an information perspective) to do x = A^-1 y, and if some bits of y are missing then you won't be able to reconstruct x.

    2) The transform and its inverse take something like 2N operations (4N in total; the N*N matrix form is inefficient, since it has lots of predictable zeros), so that's not going to give a speedup.

In the wavelet domain, isn't the signal made sparse, so you can theoretically speed up your computation by dropping the zeros?
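[Editorial illustration, not part of the original post.] outrun's point 1 can be checked directly in numpy: the single-level Haar step has an exact inverse, so keeping all 256 coefficients is lossless, but zeroing the 128 detail coefficients of a generic (random) signal makes exact reconstruction impossible.

```python
import numpy as np

def haar_step(x):
    """Forward orthonormal Haar step: pairwise averages and differences."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def inverse_haar_step(approx, detail):
    """Exact inverse of haar_step: interleave sums and differences."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

rng = np.random.default_rng(0)
x = rng.normal(size=256)
a, d = haar_step(x)

# All 256 coefficients kept: reconstruction is exact (lossless).
x_full = inverse_haar_step(a, d)
print(np.allclose(x_full, x))  # True

# 128 detail coefficients dropped: information is gone, x cannot be recovered.
x_lossy = inverse_haar_step(a, np.zeros_like(d))
print(np.allclose(x_lossy, x))  # False
```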
 
User avatar
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Data Reduction using Discrete Wavelet Transform

March 12th, 2013, 1:42 pm

Quote, originally posted by blackscholes:

    In the wavelet domain, isn't the signal made sparse, so you can theoretically speed up your computation by dropping the zeros?

Unless the original signal was created by a noise-free wavelet-based process (with a small number of modes and the same kernel as your chosen transform), there will be no "zeros" and no lossless data reduction. In general, all components will have non-zero energy, and removing any of them means losing some information. Whether that lost information is "essential" is another issue.

First, you need to model the signal process and any noise processes. Then you can pick the wavelet kernel that creates the greatest discrimination between signal and noise. Then you can develop a proper threshold for deleting wavelet components that have a very low (but always non-zero) probability of containing useful information.

You might also think in transmitter-receiver terms: a given low-frequency rate of information encoded on a high-frequency time-varying signal (e.g., a person striking X piano keys per second producing a Y kHz audio signal) that's imperfectly measured by a microphone and run through a wavelet-based process to decide which key they hit and when they hit it.
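[Editorial illustration, not part of the original post.] The thresholding step Traden4Alpha describes can be sketched in numpy. The signal model here (a slow sine plus white noise of known sigma) and the "universal" threshold sigma*sqrt(2 ln N) are illustrative assumptions, not anything from the thread: for a smooth signal, detail coefficients are almost entirely noise, so zeroing the ones below the threshold deletes mostly noise.

```python
import numpy as np

def haar_step(x):
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def inverse_haar_step(approx, detail):
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

rng = np.random.default_rng(1)
n, sigma = 256, 0.3
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 64)          # smooth low-frequency "signal process"
noisy = signal + sigma * rng.normal(size=n)  # white "noise process" on top

a, d = haar_step(noisy)
# Universal threshold: detail coefficients below it are far more likely
# to be noise than signal when the underlying process is smooth.
tau = sigma * np.sqrt(2 * np.log(n // 2))
d_thresh = np.where(np.abs(d) > tau, d, 0.0)  # hard thresholding
denoised = inverse_haar_step(a, d_thresh)

print(np.mean(d_thresh == 0))                   # nearly all details deleted
print(np.mean((denoised - signal) ** 2)
      < np.mean((noisy - signal) ** 2))         # thresholding reduced the MSE
```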
 
User avatar
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Data Reduction using Discrete Wavelet Transform

March 12th, 2013, 1:57 pm

Quote, originally posted by outrun:

    You think he needs to denoise, but he needs to speed up calculation (and can perhaps tolerate some approximate solution).

If so, he still needs to model the signal and the noise to determine which computation processes (and approximations) provide the highest probability of not losing information.
 
User avatar
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Data Reduction using Discrete Wavelet Transform

March 12th, 2013, 2:44 pm

Quote, originally posted by outrun:

    Quote, originally posted by Traden4Alpha:

        Quote, originally posted by outrun:

            You think he needs to denoise, but he needs to speed up calculation (and can perhaps tolerate some approximate solution).

        If so, he still needs to model the signal and the noise to determine which computation processes (and approximations) provide the highest probability of not losing information.

    Indeed, that's exactly what I'm saying now. And that has nothing to do with wavelets. Start out by analysing the computation, then think about whether you need a speedup via approximation (at which point you need to model the signal space). Actually, first think about the real problem with computation speed: maybe it's a response-time issue that can be solved with caching?

Exactly! It might even be as simple as downsampling the signal. And why wait until one has 256 data points, as he mentioned? What does data point #1 indicate about P(signal) vs. P(noise)?
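[Editorial illustration, not part of the original post.] The downsampling suggestion and the wavelet discussion are not unrelated: downsampling by 2 with a simple pairwise anti-aliasing average is, up to a constant factor, exactly the Haar approximation branch, i.e. "keep the low-frequency half, discard the detail half".

```python
import numpy as np

def haar_step(x):
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

rng = np.random.default_rng(2)
x = rng.normal(size=256)

# Downsample by 2 with a pairwise average as a crude anti-aliasing filter.
downsampled = (x[0::2] + x[1::2]) / 2

# Up to the factor 1/sqrt(2), this is the Haar approximation branch.
a, _ = haar_step(x)
print(np.allclose(downsampled, a / np.sqrt(2)))  # True
```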
 
User avatar
blackscholes
Topic Author
Posts: 87
Joined: February 16th, 2012, 12:58 pm

Data Reduction using Discrete Wavelet Transform

March 12th, 2013, 5:56 pm

I guess my confusion lies in whether there is a one-to-one correspondence between the original signal and the DWT coefficients.

For 256 data points, the discrete wavelet transform generates 256 coefficients. If 128 of them are insignificant, can I drop the corresponding 128 data points from the original signal and go forward with my processing? I am just using this as an example. In my application, I have a large number of data points, some of which are low-frequency data that I don't really need. I just want to extract the high-frequency components.

An FFT would only tell you that a frequency exists, not where it occurs. A DWT would tell you at which time/location these frequency components exist, so I figured it could help me identify which parts of the original signal are insignificant and then drop those bad boys =)
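[Editorial illustration, not part of the original post.] The time-localization claim in this post is easy to verify in numpy with a single transient spike: the FFT spreads its energy evenly over every frequency bin (no time information), while the Haar detail coefficients pinpoint exactly where it happened.

```python
import numpy as np

def haar_step(x):
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

n = 256
x = np.zeros(n)
x[100] = 1.0  # a single high-frequency transient at t = 100

# FFT: the spike's energy is smeared over every frequency bin.
spectrum = np.abs(np.fft.fft(x))
print(np.allclose(spectrum, 1.0))  # True: a flat spectrum, no time information

# Haar DWT: the transient lands in exactly one detail coefficient,
# whose index (50) points straight back at samples 100..101.
a, d = haar_step(x)
print(np.count_nonzero(d), int(np.argmax(np.abs(d))))  # 1 50
```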