Our DSP engineering team has worked hard over the last few months to prepare two new features: wavelets and DSP autotuning. Today we are happy to announce their official release!
Wavelet feature release

What is a wavelet?
"Wavelet" is short for "wave-like oscillation that is localized in time." It is a mathematical function used to analyze signals and data in both the time and frequency domains. A wavelet can be seen as a short, oscillating waveform of finite duration.
You can plot several approximations of wavelets using this Google Colab notebook.
One way to look at wavelet analysis is as an octave filter bank with a specially designed filter kernel, where all the filters are derived from a single prototype waveform (the wavelet). There are many types of wavelets; each has unique characteristics that make it useful for certain applications. For example, the Haar wavelet is great at detecting edges.
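To illustrate the Haar wavelet's edge-detecting behavior, here is a minimal numpy sketch (an illustration, not Edge Impulse's implementation): single-level Haar detail coefficients are zero wherever the signal is flat and spike exactly where it jumps.

```python
import numpy as np

# A step signal: the only "event" is the jump from 0 to 1 at index 7.
x = np.array([0.0] * 7 + [1.0] * 9)

# Single-level Haar detail coefficients: scaled differences of sample pairs.
detail = (x[0::2] - x[1::2]) / np.sqrt(2)

print(detail)                     # only the pair straddling the edge is nonzero
print(np.count_nonzero(detail))   # -> 1
```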
Wavelet decomposition block
One of the key advantages of wavelet decomposition is that it provides a multi-resolution representation of a signal: the signal can be represented at several levels of frequency resolution. This is particularly useful when a signal contains detail at different scales, such as in image compression and denoising, where high-frequency noise can be removed while the important low-frequency details are preserved.
The Wavelet block implements the discrete wavelet decomposition plus feature extraction and dimensionality reduction. After decomposition, a number of features (14 features in this version) are calculated at each level, including entropy, statistics, skewness, kurtosis, zero-crossing rate, and mean-crossing rate. For example, a four-level decomposition with 14 features per component will generate 70 features in total.
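To see where that component count comes from, here is a simplified numpy sketch (assumed Haar filters and a hypothetical six-feature subset, not the block's actual filter design or 14-feature set): a four-level decomposition yields five components — one approximation plus four details — so 14 features each would give 70.

```python
import numpy as np

def haar_decompose(x, level):
    """Multi-level Haar decomposition: returns [a_L, d_L, ..., d_1]."""
    components = []
    approx = np.asarray(x, dtype=float)
    for _ in range(level):
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)   # high-pass (detail)
        approx = (approx[0::2] + approx[1::2]) / np.sqrt(2)  # low-pass (approximation)
        components.insert(0, d)
    components.insert(0, approx)
    return components

def component_features(c):
    """A hypothetical subset of per-component features (the block computes 14)."""
    mu, sigma = c.mean(), c.std()
    z = (c - mu) / sigma if sigma > 0 else np.zeros_like(c)
    p = c**2 / np.sum(c**2)                       # energy distribution
    entropy = -np.sum(p * np.log2(p + 1e-12))     # energy entropy
    skewness = np.mean(z**3)
    kurtosis = np.mean(z**4)
    zero_cross = np.mean(np.diff(np.sign(c)) != 0)
    return [mu, sigma, entropy, skewness, kurtosis, zero_cross]

rng = np.random.default_rng(0)
signal = rng.standard_normal(256)

components = haar_decompose(signal, level=4)      # a4, d4, d3, d2, d1
features = [f for c in components for f in component_features(c)]
print(len(components), len(features))             # 5 components, 5 * 6 = 30 features
```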
Two parameters can be adjusted for the discrete wavelet decomposition: the level of decomposition and the wavelet function (also called the mother wavelet).
- Level of decomposition: This determines the number of times the signal is decomposed into different frequency components. A higher level of decomposition results in more detail in the signal representation, but also requires more computational resources.
- Wavelet function: This is the basic waveform used as the building block for generating the wavelets used in the transform. The choice of the mother wavelet can significantly impact the performance of the wavelet transform and the interpretation of the resulting coefficients.
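To make the level parameter concrete, here is a minimal Haar cascade in numpy (assumed filters, not the block's implementation): each additional level splits off a coarser component with half as many samples, which is the multi-resolution trade-off described above.

```python
import numpy as np

def haar_component_lengths(x, level):
    """Return the lengths of [a_L, d_L, ..., d_1] for a Haar cascade."""
    lengths = []
    approx = np.asarray(x, dtype=float)
    for _ in range(level):
        detail = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        approx = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        lengths.insert(0, len(detail))
    lengths.insert(0, len(approx))
    return lengths

x = np.zeros(64)
print(haar_component_lengths(x, 1))  # [32, 32]
print(haar_component_lengths(x, 3))  # [8, 8, 16, 32] -- coarser levels, fewer samples
```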
Choosing the right combination of parameters is always tricky (even for our DSP experts). That’s why we released the DSP autotuner along with the wavelet support to help you choose the best trade-off. See below to learn more about the autotuner.
Fast Fourier transform vs. wavelet decomposition
FFT-based spectral analysis decomposes the signal using sinusoidal basis functions, which typically works best for stationary signals with well-defined repetition. However, for non-stationary signals, signals where you want to extract temporal information, or signals that other basis functions fit better, FFT-based spectral analysis fails to provide helpful information. For example, in the FFT of a digital clock signal you only see a series of harmonics, whereas wavelet decomposition with the Haar basis easily detects the edges.
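A small numpy experiment along those lines (an assumed illustration, not taken from the projects below): the FFT of a square "clock" signal concentrates energy in the fundamental and its harmonics with no temporal information, while single-level Haar detail coefficients are nonzero exactly where the edges occur.

```python
import numpy as np

# A 64-sample square "clock" signal with period 8, shifted by one sample
# so that every edge falls inside a sample pair.
x = np.roll(np.tile([1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0], 8), 1)

# FFT view: energy sits at the fundamental (bin 8) and odd harmonics.
mags = np.abs(np.fft.rfft(x))
print(np.argmax(mags[1:]) + 1)        # -> 8 (the fundamental)

# Haar view: detail coefficients light up only at the edges.
detail = (x[0::2] - x[1::2]) / np.sqrt(2)
print(np.count_nonzero(detail))       # -> 16 (two edges per period, 8 periods)
```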
We also prepared two Edge Impulse public projects, PhysioNet ECG Dataset - Wavelets and PhysioNet ECG Dataset - FFT. The dataset contains a set of ECG measurements of healthy persons (indicated as Normal sinus rhythm, NSR) and persons with either an arrhythmia (ARR) or congestive heart failure (CHF).
DSP resources (estimated for a Cortex-M7): processing time 1 ms; peak RAM 60 kB and 41 kB for the two projects.
Autotuning feature release
Selecting the appropriate parameters for a DSP block such as wavelet decomposition can be a daunting and time-consuming task, even for experienced digital signal processing engineers. To simplify this process, we have developed the autotuning feature.
Currently, the autotune feature is limited to the Spectral Features, MFE (audio), MFCC (audio), and Spectrogram pre-processing blocks. We are working on expanding its capabilities to other blocks in the future.
DSP autotuning vs. EON Tuner
While DSP autotuning is primarily focused on extracting relevant features that will enhance the performance of your neural network, the EON Tuner takes a broader approach by finding the optimal combination of parameters in both pre-processing and learning blocks to meet your device's constraints.
DSP autotuning can provide quick insights about the signal and features, making strong suggestions that can help you select the appropriate DSP parameters. In contrast, the EON Tuner is more device-aware and has a more extensive scope, but it requires more time to run before providing initial results because it fully trains many models.