How should I calculate the power spectral density of a signal with a much higher sampling rate than needed?

Dear All,
I have a signal with a high sampling rate (10000 Hz) and I would like to calculate its power spectral density in the frequency range 0-50 Hz. A sampling rate of 100 Hz would therefore be sufficient.
My question is whether the signal should be resampled by averaging blocks of 100 samples before the spectrum is calculated, or whether the spectrum should be calculated on the original signal. Should the results be the same? If not, which one is more correct?
Best, Urban

Accepted Answer

Honglei Chen on 10 Oct 2014
Edited: Honglei Chen on 10 Oct 2014
I would recommend first designing a lowpass filter, filtering the signal, then downsampling/resampling to 100 Hz, and then doing the PSD calculation. Actually, I believe resample applies that anti-aliasing filter for you already.
In my opinion, doing the computation on the original signal is unnecessary and expensive. You may also be able to get better resolution within 0-50 Hz via resample, since it is computationally cheaper to zero-pad a signal with a low sampling rate.
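A minimal sketch of this workflow, assuming x is your signal vector, the Signal Processing Toolbox is available, and the Welch segment length of 1024 is just an example value:

Fs    = 10000;   % original sampling rate, Hz
FsNew = 100;     % target sampling rate, Hz

% resample applies an anti-aliasing FIR lowpass filter internally,
% then changes the rate by the factor FsNew/Fs (here 1/100).
xLow = resample(x, FsNew, Fs);

% Welch PSD estimate on the downsampled signal; f runs from 0 to 50 Hz.
[pxx, f] = pwelch(xLow, hamming(1024), 512, 1024, FsNew);
plot(f, 10*log10(pxx)), grid on
xlabel('Frequency (Hz)'), ylabel('PSD (dB/Hz)')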

More Answers (2)

Image Analyst on 10 Oct 2014
Yeah, but it would reduce your resolution. I agree with Star: just compute the PSD of the whole thing and crop out, or look at, just the range you want, especially since you have such a tiny amount of data (I'm assuming your data doesn't amount to hundreds of millions or billions of samples). You might use pwelch() from the Signal Processing Toolbox, which is what MathWorks recommended in the latest spectral filtering webinar, rather than fft.
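A rough sketch of that approach, assuming x is the 10 kHz signal and the segment length of 2^16 samples is just an example value:

Fs = 10000;
% Welch PSD estimate on the full-rate signal.
[pxx, f] = pwelch(x, hamming(2^16), 2^15, 2^16, Fs);

idx = f <= 50;                          % keep only the 0-50 Hz portion
plot(f(idx), 10*log10(pxx(idx))), grid on
xlabel('Frequency (Hz)'), ylabel('PSD (dB/Hz)')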

Urban on 10 Oct 2014
Thanks for the answers so far. Just to clarify the problem: the data length is manageable at the original resolution, as I have 1.8M samples, so speed is not an issue. The problem is that I don't get the same results when running pwelch() on the original data as when I first average every 100 consecutive samples and then run pwelch() on the reduced data (obviously with a 100-times-shorter segment length). The results are considerably different: in one case I get a peak where none exists in the other. Therefore I don't know which one to believe, or whether I should believe either.
Do I introduce some additional frequency content by averaging 100 consecutive samples, or do I just reveal something that is not visible when running pwelch() on the original data?
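For reference, the block-averaging step described above can be written roughly as follows (a hypothetical reconstruction; the variable names and segment length are assumptions):

Fs    = 10000;
block = 100;
n     = floor(numel(x)/block)*block;               % trim to a multiple of 100
xAvg  = mean(reshape(x(1:n), block, []), 1).';     % one value per 100-sample block

% Welch PSD of the averaged data; the frequency axis now spans 0-50 Hz.
[pxxAvg, fAvg] = pwelch(xAvg, hamming(1024), 512, 1024, Fs/block);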
  2 Comments
Image Analyst on 10 Oct 2014
Well, if you have a periodic component that gets wiped out when you blur/average with a window size of 100, then that spike will go away. Of course, you know that your frequency axis has now changed, so you can't take the same number of elements from pwelch and assume they correspond to the same frequency range as in the original full signal.
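To make the frequency-axis point concrete (a small sketch using xAvg from the snippet above; the segment length is arbitrary):

[~, fOrig] = pwelch(x,    hamming(1024), 512, 1024, 10000);  % bins span 0-5000 Hz
[~, fAvg ] = pwelch(xAvg, hamming(1024), 512, 1024, 100);    % bins span 0-50 Hz
% Element k of the two PSD vectors refers to different frequencies,
% so compare against the returned f vectors, not against bin indices.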
Urban on 10 Oct 2014
The signal I have is a noise signal that is not completely random but contains some characteristic vibrations. And the strange thing is that averaging 100 samples does introduce new spikes in the 0-50 Hz spectrum, so I suspect that higher-frequency spikes somehow fold down into that frequency range.
I tried resample() and decimate(), and I get results similar to using pwelch() on the original data. So averaging consecutive measurements is probably not a good idea, but I do not understand why :-)
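That suspicion is consistent with aliasing: a 100-point block average acts as a boxcar (moving-average) filter with weak stopband attenuation, so content above 50 Hz is not fully removed before the implicit downsampling and folds back into 0-50 Hz. A small synthetic check (all signal parameters here are made up purely for illustration):

Fs = 10000;  t = (0:Fs*20-1).'/Fs;                % 20 s of synthetic data
xs = sin(2*pi*130*t) + 0.1*randn(size(t));        % 130 Hz tone + noise

% (a) block averaging, then pwelch at 100 Hz: the 130 Hz tone survives
%     the boxcar filter and aliases to |130 - 100| = 30 Hz.
xsAvg = mean(reshape(xs, 100, []), 1).';
[pA, fA] = pwelch(xsAvg, hamming(512), 256, 512, 100);

% (b) resample, which lowpass-filters before downsampling: no 30 Hz peak.
xsRes = resample(xs, 1, 100);
[pR, fR] = pwelch(xsRes, hamming(512), 256, 512, 100);

plot(fA, 10*log10(pA), fR, 10*log10(pR)), grid on
legend('block average (aliased peak near 30 Hz)', 'resample')
xlabel('Frequency (Hz)'), ylabel('PSD (dB/Hz)')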

