Sensor Signal Processing
Cognitive electronics can be described as hardware/software electronic systems that embed some form of intelligence. In this context, the cognitive cycle includes sensing, collecting, processing, and analyzing data, as well as learning patterns and making decisions.
Such cognitive functions can be implemented on a wide range of technological platforms that include micro- and nano-sensors followed by signal processing elements such as microprocessors (GPPs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs), and customized circuits (ASICs). Very often, such platforms also include wireless connectivity, as for example in Wireless Body Area Networks.
In practice, such platforms are severely constrained in terms of e.g. execution time, area, and energy consumption. At the same time, the complexity of the applications that are expected to run on such platforms is increasing. This calls for new architectures, methods and tools that enable energy-efficient sensor signal processing.
Examples of state-of-the-art approaches that allow dealing with the above tension at the algorithmic level include:
- Compressed sensing;
- Sparse digital signal processing;
- Approximate computing;
- Transient computing;
- Embedded deep learning.
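As a minimal illustration of the approximate computing idea, the sketch below (an illustration only, not tied to any specific platform or method from this work) quantizes the operands of an inner product, such as one FIR filter output sample, to half precision, mimicking a reduced-precision data-path, and measures the resulting error:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)   # e.g. a frame of sensor samples
w = rng.standard_normal(1000)   # e.g. FIR filter coefficients

# Exact inner product (one filter output sample) in float64.
exact = float(np.dot(x, w))

# Approximate version: operands quantized to float16, mimicking a
# reduced-precision data-path that saves silicon area and energy.
approx = float(np.dot(x.astype(np.float16).astype(np.float64),
                      w.astype(np.float16).astype(np.float64)))

# Error normalized by the operand norms stays tiny for this workload.
err = abs(exact - approx) / (np.linalg.norm(x) * np.linalg.norm(w))
print(f"normalized error: {err:.1e}")
```

On actual hardware, the saving comes from narrower multipliers and reduced memory traffic; the sketch only shows that the accuracy loss can be negligible for such workloads.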
This, in turn, calls for matching hardware, for example:
- Customized and optimized data-paths, control units, …
- Embedded multi-processor architectures;
- Reconfigurable architectures (e.g. FPGAs and nano-FPGAs);
- Non-volatile architectures (e.g. FRAM- and Flash-based).
Finally, it is also necessary to consider the design methods and supporting tools, e.g. HW/SW co-design and the efficient mapping of algorithms onto HW/SW platforms, together with their joint optimization.
This topic also supports implementation aspects of our NB-IoT activities.
Examples of defended or ongoing PhD theses related to this research area:
- "Algorithms and methods for wireless indoor tracking, positioning and object locating systems" by Taavi Laadung (Industrial PhD with Eliko). Expected 2023.
- "Algorithms for Learning and Adaptation over Networks – Distributed Leader Selection" by Sander Ulp. Defended in 2019.
- "Parameter Estimation by Sparse Reconstruction with Wideband Dictionaries" by Maksim Butsenko. Defended in 2018.
- "Distributed Signal Processing in Cognitive Radio Networks" by Ahti Ainomäe. Defended in 2018.
- "Classification and Denoising of Objects in TEM and CT Images Using Deep Neural Networks" by Anindya Gupta. Defended in 2018.
- "Modeling and Implementation of Linear Energy Prediction for Energy Harvesting in Intermittently Powered Wireless Sensor Nodes" by Faisal Ahmed. Defended in 2018.
Algorithms and methods for wireless indoor tracking, positioning and object locating systems
Energy-Efficient Techniques and Architectures for the Green Internet of Things (IoT)
PhD student: Sikandar Muhammad Zulqarnain Khan
Supervisors: Yannick Le Moullec, Muhammad Mahtab Alam
PhD thesis expected to be defended in 2022.
With the tremendous growth of the Internet of Things (IoT), millions of connected electronic devices will be scattered in our living and working environments. Powering such a vast number of devices brings many challenges in terms of sustainability. This has spurred a new wave of research to address them; this trend is also highlighted by recent efforts such as the IEEE Initiative on Green Information and Communication Technologies (Green ICT).
This PhD work seeks to explore, propose and test new methods, algorithms and architectures for enabling the so-called “battery-less IoT”. Activities include researching, developing and publishing on various aspects related to the selection, design, and optimization of:
- Efficient energy harvesting techniques (e.g. solar, back-scattering) to power IoT nodes with no or little energy storage;
- Software and hardware techniques for approximate computing to reduce the computational requirements, and hence the energy consumption, of the IoT nodes;
- Transient computing algorithms and their implementation on non-volatile devices (e.g. microcontrollers based on FRAM or other emerging memristive technologies) to deal with the intermittent and unbalanced nature of energy harvesting;
- Methods for the joint design space exploration and optimization of the above together with communication protocols such as NB-IoT and/or LTE Cat M1.
NB-IoT infrastructure at TalTech
SNR and RSSI at different elevations
Approximation and predictions on top of transient computing
Algorithms for Learning and Adaptation Over Networks – Distributed Leader Selection
PhD student: Sander Ulp
Supervisors: Muhammad Mahtab Alam; Yannick Le Moullec; Tõnu Trump
Learning and adaptation over networks is a rapidly evolving topic with many possible applications and fields. Several aspects of these algorithms and their applications remain unanswered and unresearched. In particular, analysing the performance of, and developing algorithms for, distributed estimation is essential for future smart and self-organizing networks. Indeed, enabling complex and sophisticated behaviour through the cooperation of simpler units in the network makes it possible to accomplish demanding tasks that are unattainable for single units and current solutions. Estimating from different sensors and cooperating to achieve better performance is a challenging task when taking into account the limitations and constraints presented by applications.
Classification of different communication strategies used in estimation
The algorithms should enable the networks to learn and adapt to different situations under constraints such as limited bandwidth, limited battery capacity, physical and communication restrictions, security, etc. In this work, the motivation for and background to learning and adaptation are presented, and an overview of different methods for distributed estimation, as well as the author’s contributions to the existing work, is given.
This PhD work proposes 1) an improvement to the weight calculation of the existing diffusion algorithm as well as 2) a novel algorithm for distributed leader selection (DLS). Using the MDL (minimum description length) subspace algorithm to estimate the SNR (signal-to-noise ratio) of the estimated signal allows the weight calculation to incorporate these values and improves the performance of the diffusion algorithm in comparison to the equally weighted diffusion algorithm.
The proposed DLS algorithm is able to select the best-performing node as the leader node in a fully distributed manner and to propagate its performance across the network. The algorithm outperforms both the equally weighted diffusion algorithm and the non-cooperating network. The theoretical and simulation results show that, under certain conditions, the DLS algorithm attains performance similar to that of the diffusion algorithm with optimal weights, while overall being less complex and more robust than the diffusion algorithm.
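The diffusion machinery underlying these algorithms can be illustrated with a generic adapt-then-combine (ATC) diffusion LMS scheme with equal combination weights; this is a textbook-style sketch over an assumed ring topology, not the thesis's MDL-weighted or DLS algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, T = 10, 4, 2000          # nodes, filter taps, iterations
w_true = rng.standard_normal(M)
mu = 0.01                      # LMS step size

# Ring topology: each node combines with its two neighbours (equal weights).
neighbours = {k: [(k - 1) % N, k, (k + 1) % N] for k in range(N)}

w = np.zeros((N, M))           # cooperative (ATC diffusion) estimates
w_nc = np.zeros((N, M))        # non-cooperative LMS estimates

for _ in range(T):
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)                    # regressor at node k
        d = u @ w_true + 0.1 * rng.standard_normal()  # noisy measurement
        # Adapt step: local LMS update ...
        psi[k] = w[k] + mu * u * (d - u @ w[k])
        w_nc[k] = w_nc[k] + mu * u * (d - u @ w_nc[k])
    # ... then Combine step: average the neighbours' intermediate estimates.
    for k in range(N):
        w[k] = psi[neighbours[k]].mean(axis=0)

msd_coop = np.mean(np.sum((w - w_true) ** 2, axis=1))
msd_nc = np.mean(np.sum((w_nc - w_true) ** 2, axis=1))
print(msd_coop, msd_nc)
```

Each node first performs a local LMS update (adapt) and then averages its neighbours' intermediate estimates (combine); the averaging suppresses gradient noise, which is why the cooperative steady-state MSD ends up below the non-cooperative one.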
The work also analyses the energy consumption and computational complexity of the diffusion algorithm and the DLS algorithm. The DLS algorithm is modified to improve its energy efficiency. The proposed energy-efficient distributed leader selection algorithm is able to reduce the energy consumption of the network by 32–53% and extend its lifetime by 14–46%.
Clockwise: Network convergence MSE with Rayleigh fading; Network MSD performances in both scenarios for the non-cooperating, the relative variance rule, the relative degree variance and the leader selection; (a) Radio communication energy consumption at different nodes for the diffusion algorithm, DLS algorithm and EEDLS algorithm, (b) Network radio communication energy consumption for the diffusion algorithm, DLS algorithm and EEDLS algorithm. Source: Sander Ulp, Algorithms for Learning and Adaptation Over Networks – Distributed Leader Selection, PhD thesis, Tallinn University of Technology, 2019
Parameter Estimation by Sparse Reconstruction with Wideband Dictionaries
PhD student: Maksim Butsenko
Supervisors: Olev Märtens; Yannick Le Moullec; Tõnu Trump
Parameter estimation in general and spectral analysis in particular have been fruitful research areas for a long time and rightfully remain so. Many algorithms and methods for parameter estimation have been proposed over decades of research, but as technology evolves, so do our requirements for estimators. Non-parametric estimators were for a long time the most popular methods of spectral analysis; however, their major drawbacks are their limited resolution and high variance. Parametric estimators can provide high-resolution estimates, but this requires the considered signal to correspond well to the underlying signal model, and they perform much worse than non-parametric estimators when this is not the case. Semi-parametric estimators can often provide high-resolution estimates without a strong dependency on the signal model, as their only assumption is that the signal should be sparse. In fact, a wide range of common applications considers signals that are well approximated by the sparse reconstruction framework, and this area has attracted noteworthy interest in the recent literature. A considerable number of these works focus on formulating convex optimization algorithms that make use of different sparsity-inducing penalties, thereby encouraging solutions that are well represented using only a few elements from some dictionary matrix.
It can be shown that when the dictionary is chosen properly, even a limited number of measurements allows for accurate signal reconstruction. In this work, a novel procedure for constructing dictionaries for parameter estimation by sparse reconstruction methods is considered. Instead of forming the dictionary as a finite set of discrete narrowband components for the evaluation of a continuous parameter space, this work considers wideband dictionary elements, such that the continuous parameter space is divided into B subsets. During the estimation procedure, the activated subsets are selected for further refinement and the non-activated subsets are discarded from further optimization. Afterwards, a new dictionary is formed for each of the activated subsets, resulting in a zoomed dictionary for that particular region of the considered parameter space. This iterative procedure may then be repeated until the required resolution is reached. The initial problem statement and the plausibility of the approach are validated by showing that the method is suitable for one-dimensional data.
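The select-then-zoom idea can be sketched on a one-dimensional frequency-estimation example. For brevity, this illustration replaces the integrated wideband atoms and the sparse (LASSO-type) solver with plain narrowband atoms at band centres and matched-filter selection; the parameter names B and Q, the two-stage structure, and the signal are assumptions for the example only:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256
f_true = 0.3137                      # true normalized frequency, off any coarse grid
t = np.arange(N)
y = np.exp(2j * np.pi * f_true * t) + 0.1 * rng.standard_normal(N)

def correlate(y, freqs):
    """|<y, e^{j2*pi*f*t}>| for each candidate frequency (narrowband atoms)."""
    A = np.exp(2j * np.pi * np.outer(np.arange(len(y)), freqs))
    return np.abs(A.conj().T @ y)

# Stage 1: divide [0, 1) into B bands; pick the activated band via its centre atom.
B = 40
centres = (np.arange(B) + 0.5) / B
band = int(np.argmax(correlate(y, centres)))

# Stage 2: zoomed dictionary with Q fine atoms inside the activated band only.
Q = 50
fine = band / B + (np.arange(Q) + 0.5) / (B * Q)
f_hat = fine[int(np.argmax(correlate(y, fine)))]

print(f_hat, abs(f_hat - f_true))
```

Only B + Q atoms are evaluated instead of the B * Q atoms a single-stage fine grid would require, which is the source of the complexity reduction, while the zoom stage keeps the off-grid component from being missed.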
The formulation of the wideband framework for multi-dimensional data and its validation on different estimators (LASSO, SPICE, IAA), sampling scenarios, and data from different sources are considered. An efficient implementation of the algorithm by the alternating direction method of multipliers (ADMM) is formulated, and the corresponding complexity reduction is quantified. A comprehensive overview of the best approaches to selecting the parameters of the framework is provided. The proposed approach is tested mainly by using the corresponding signal model and conducting a series of numerical experiments with multiple Monte Carlo simulations. However, the proposed method is also tested on real-life signals, by considering NMR data and by investigating the possibility of employing a similar sparse reconstruction framework for separating the cardiac and respiratory signal components from electrical bioimpedance measurements. The wideband framework allows for a considerable reduction in computational complexity and decreases the probability of missing off-grid components.
For situations where the number of samples is considerably smaller than the number of dictionary elements, the percentage of correct model order estimation for the proposed wideband dictionary is 40–50 percentage points higher than for the conventional method (90–100% vs 50–60%). Most of the errors in this situation come from missing off-grid components. Similar methods for grid-selection issues exist; however, they often increase the complexity of the problem formulation and therefore limit the size of the problems that can be considered, as in the case of atomic-norm minimization, or lose the convexity of the solution, as in the case of the adaptive grid approach. By employing the iterative zooming procedure and decreasing the risk of missing off-grid components, it is shown that the problem can be formulated with a smaller initial dictionary, thereby reducing the amount of computation needed for the required resolution.
For the same resolution, the computational complexity of the proposed method can be 20–30 times lower, which results in a considerable reduction in the time required to compute an estimate. The proposed wideband dictionary method has the additional benefits of adaptability and intuitive implementation. This may encourage those working in the area of signal estimation to apply the proposed method to their problems, as the wideband dictionary successfully replaces the classical narrowband dictionary for the considered problems, providing at least similar performance and often outperforming the classical framework.
Clockwise: Mean-square error curves for different SNR levels for the single-stage narrowband dictionary, using L = 1000, as compared to the two-stage dictionary, using B integrated wide-band elements in the first stage, followed by Q narrowband elements in the second stage; The resulting estimates using a dictionary with 2000 narrowband elements (top), and a two-stage zooming approach using wideband elements, using B1 = 40 in the first stage and B2 = 50 for each activated band in the second stage. The signal is a measured NMR signal of length N = 256; Cardiac component frequency estimate compared to the frequency estimate as measured from the ECG signal; The peak resolving ability of the estimators; the proposed speed-up does not reduce the resolution of the resulting estimates. As expected, the use of a wideband dictionary even yields somewhat improved performance. Source: Maksim Butsenko, Parameter Estimation by Sparse Reconstruction with Wideband Dictionaries, PhD thesis, Tallinn University of Technology, 2018
Distributed Signal Processing in Cognitive Radio Networks
PhD student: Ahti Ainomäe
Supervisors: Tõnu Trump; Mats Bengtsson (KTH, Sweden); Yannick Le Moullec
The lack of available radio frequencies is an increasing problem for the implementation of modern radio communication solutions. Recent studies have shown that, while the available licensed radio spectrum becomes more occupied, the assigned spectrum is significantly underutilized.
To alleviate the situation, cognitive radio (CR) technology has been proposed to provide opportunistic access to licensed spectrum areas. Secondary CR systems need to cyclically detect the presence of a primary user (PU) by continuously sensing the spectrum area of interest. Radio-wave propagation effects such as fading and shadowing often complicate the sensing of spectrum holes. When spectrum sensing is performed in a cooperative manner, the resulting sensing performance can be improved and stabilized.
Basic layout of CR network
In this work, three fully distributed and adaptive cooperative PU detection solutions for CR networks are studied.
First, we study a distributed energy detection scheme that does not use any fusion center. Due to the reduced communication, such a topology is more energy-efficient. We propose the use of distributed diffusion least mean square (LMS)-type power estimation algorithms with different network topologies. We analyze the resulting energy detection performance using a common framework and verify the theoretical findings through simulations.
Second, we propose a fully distributed detection scheme based on the largest eigenvalue of adaptively estimated correlation matrices, assuming that the primary user signal is temporally correlated. Different forms of diffusion LMS algorithms are used for estimating and averaging the correlation matrices over the CR network. The resulting detection performance is analyzed using a common framework. In order to obtain analytic results on the detection performance, the adaptive correlation matrix estimates are approximated by a Wishart distribution. The theoretical findings are verified through simulations.
Third, we propose a fully distributed largest eigenvalue detection scheme, where the observations of the elements of correlation matrices are weighted by independently estimated local SNR values. The resulting detection performance is analysed by using a common framework. The theoretical findings are verified through MATLAB simulations.
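The largest-eigenvalue test at the heart of the second and third schemes can be illustrated with a simple centralized sketch; the thesis versions estimate the correlation matrices adaptively and distribute them over the network via diffusion LMS, and the AR(1) primary-user model and threshold below are assumptions for this example:

```python
import numpy as np

rng = np.random.default_rng(3)
L, T = 4, 5000        # smoothing factor (window length), number of samples
sigma2 = 1.0          # noise variance, assumed known here

def largest_eig(x, L):
    """Largest eigenvalue of the sample correlation matrix of length-L windows."""
    X = np.lib.stride_tricks.sliding_window_view(x, L)
    R = X.T @ X / X.shape[0]
    return np.linalg.eigvalsh(R)[-1]      # eigvalsh returns ascending order

# H0: noise only. H1: temporally correlated PU signal (AR(1)) plus noise.
noise = rng.standard_normal(T)
ar = np.zeros(T)
for n in range(1, T):
    ar[n] = 0.9 * ar[n - 1] + rng.standard_normal()
signal = ar / np.std(ar) + rng.standard_normal(T)   # PU at 0 dB SNR

thr = 1.3 * sigma2    # illustrative threshold; in practice set from a target P_fa
print(largest_eig(noise, L) < thr, largest_eig(signal, L) > thr)
```

Under H0 the sample correlation matrix is close to the identity scaled by the noise variance, so its largest eigenvalue stays near sigma2; temporal correlation under H1 concentrates energy in one eigenvector and pushes the largest eigenvalue well above the threshold.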
Clockwise: Local power estimation, fixed step size, for the recursive ring-round topology; Probability of detection, CTA topology; Proposed diffusion method; ATC, DoF |H1 values with perturbations 0 dB, -1 dB and 2 dB. Source: Ahti Ainomäe, Distributed Signal Processing in Cognitive Radio Networks, PhD thesis, Tallinn University of Technology, 2018
Classification and Denoising of Objects in TEM and CT Images Using Deep Neural Networks
PhD student: Anindya Gupta
Supervisors: Olev Märtens; Yannick Le Moullec; Ida-Maria Sintorn (Uppsala University, Sweden); Tõnis Saar
The digitization of biomedical and medical images has helped clinicians comprehend (or detect) obscure abnormalities. However, manual analysis is labor-intensive and time-consuming. Over the last few decades, computer-aided detection (CAD) systems employing learning-based methods and conventional image-analysis methods have successfully paved the way for the detection (and/or classification) of deadly abnormalities. Lately, the advent of deep neural networks (DNNs), often referred to as deep learning, as a powerful recognition module has shifted the research interest from problem-specific solutions to increasingly problem-agnostic methods that rely on learning from data.
In particular, convolutional neural networks (CNNs) have rapidly become a primary choice for many CAD systems due to their impressive results. This momentum has been sparked by increased computational power (graphics processing units) and the evolution of learning-based methods.
This work comprises a total of five solutions: four DNN-based solutions for the classification of structures in biomedical and medical images, and one solution for the denoising of biomedical images to improve image quality. The work focuses on the applications of two variants of DNN: the CNN and the multi-layer perceptron (MLP).
Infographical overview of the PhD work
From a biomedical image analysis perspective, the first solution is associated with improving the performance of an automated workflow for primary ciliary dyskinesia (PCD) analysis. To classify cilia and non-cilia structures in low-magnification (LM) transmission electron microscopy (TEM) images, a CNN-based classifier is developed as a false-positive (FP) reduction module. Although computing discriminative features of cilia structures at very low magnification is challenging, the developed CNN classifier substantially improves the F-score from 0.47 to 0.59.
The second solution takes a side step from classification and focuses on denoising. Denoising is often used as a preprocessing step to improve image quality for automated analysis. Accordingly, the second solution is associated with enhancing the structural information in short-exposure high-magnification (HM) TEM images. A novel multi-stream CNN-based model is developed to denoise 100 short-exposure HM images acquired at the same spatial location in the cell section. Three different strategies for combining denoising and image merging are investigated to determine the optimal structure-enhancing strategy. The CNN denoising model is trained for only one strategy and used as-is for the other two, thus presenting the transfer-learning perspective of DNNs as a potential add-on to automated analysis. The presented model achieves an improved PSNR of 40.84 dB.
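The PSNR figure of merit quoted above follows the standard peak signal-to-noise ratio definition; a small sketch on synthetic data (the thesis evaluates real TEM images) shows how it is computed:

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(4)
clean = rng.uniform(0, 255, size=(64, 64))              # stand-in "ground truth" image
noisy = clean + rng.normal(0, 5.0, size=clean.shape)    # additive noise, sigma = 5
print(f"PSNR: {psnr(clean, noisy):.1f} dB")
```

With sigma = 5 the MSE is about 25, giving roughly 10 * log10(255^2 / 25) ≈ 34 dB; a denoiser improves PSNR by reducing the MSE toward zero.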
From a medical image analysis perspective, the third solution is associated with improving the performance of a CAD system for the early detection of multiple sizes of nodules (3–30 mm) in computed tomography (CT) scans. To classify nodules and non-nodules, an MLP-based classifier is developed as an FP reduction module. The CAD system is extensively tested on four publicly available CT datasets; this makes it the only system to be successfully validated on such a large scale. The developed CAD system achieves a high sensitivity of 85.6% with only 8 FPs/scan.
Until recently, conventional CAD systems employing learning-based methods depended on handcrafted representations (features). Designing features by hand is challenging and often results in limited discriminative power, which is insufficient to classify micronodules (≤ 4 mm) and cross-sectional vessels. The fourth solution is associated with developing a CAD system for the detection of micronodules in CT scans. To classify micronodules and small cross-sectional vessels, a novel 3D CNN classifier is developed as an FP reduction module. Using the largest publicly available CT dataset, the developed CAD system achieves a high sensitivity of 86.7% with only 8 FPs/scan.
The fifth solution is associated with improving the performance and efficiency of automated workflow for detecting multiple sizes of vascular nodes in CT angiography (CTA) scans. To classify cross-sections of different sizes of vessel and non-vessel nodes, a patch-based CNN classifier is developed as an FP reduction module. On the given 25 CTA volumes from the clinical routine, the presented classifier substantially improves the F-score from 0.43 to 0.82.
Clockwise: overview of the overall workflow consisting of the CNN model; noisy and denoised close-ups of a cilium instance obtained with the methods; overview of the developed CAD pipeline; examples of (a) the lung segmentation refinement method; (b) different types of nodules detected by the CAD system in the SPIE-AAPM dataset; (c) nodules not marked in the ground-truth list of the PCF subset but detected by the proposed CAD. Source: Anindya Gupta, Classification and Denoising of Objects in TEM and CT Images Using Deep Neural Networks, PhD thesis, Tallinn University of Technology, 2018.
Modeling and Implementation of Linear Energy Prediction for Energy Harvesting in Intermittently Powered Wireless Sensor Nodes
PhD student: Faisal Ahmed
Supervisors: Yannick Le Moullec; Paul Annus; Gert Tamberg
The advent and growth of the IoT has opened new directions and challenges for the scientific community. In particular, IoT-enabling devices such as wireless sensor nodes are powered by energy-limited batteries, which affects their lifetime and reliability under intensive utilization, and eventually leads to increased maintenance requirements and related costs. Thus, researchers have investigated and proposed various solutions under the so-called energy harvesting concept.
Such solutions help overcome the limited capacity of batteries by providing a supplementary or alternative source of energy to operate e.g. smart devices, wireless sensor nodes, home appliances, industrial machines, etc. The positive impact of energy harvesting on IoT enables innovative applications that are no longer hindered by battery limits. However, energy harvesting poses several challenges at both the hardware and software levels when designing energy-autonomous wireless sensor nodes.
Indeed, energy harvesting from environmental sources such as solar, wind, thermal, and RF typically exhibits intermittent characteristics. This means that the wireless sensor nodes may be left without power, which in turn impacts the application’s performance in terms of e.g. connectivity and reliability.
Firstly, this work proposed a system-level framework that uses coarse-grain models of various single and hybrid energy harvesting technologies for wireless sensor nodes. Experimental results illustrate how the framework can be used to evaluate various energy harvesting sources for powering wireless sensor network (WSN) nodes.
Then, the work assessed the practical feasibility of powering a wireless sensor node from an energy harvesting source without energy storage. A salient feature of the work is the implementation of a transient computing mechanism on a non-volatile (FRAM-based) node. The experimental results illustrate that energy harvesting, combined with transient computing, is indeed feasible.
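The essence of such a transient computing mechanism is checkpointing program state to non-volatile memory so that, after a power failure, work resumes rather than restarts. The following sketch emulates the principle in Python, with a file standing in for FRAM; it is an illustration only, not the thesis implementation:

```python
import json
import os
import tempfile

# The "non-volatile memory" (FRAM on the real node) is emulated with a file.
NVM = os.path.join(tempfile.gettempdir(), "nvm_checkpoint.json")

def run_task(n_items, fail_at=None):
    """Process n_items, checkpointing progress to NVM after every item.
    A simulated power failure at item `fail_at` raises RuntimeError."""
    state = {"next": 0, "acc": 0}
    if os.path.exists(NVM):                      # power-up: restore checkpoint
        with open(NVM) as f:
            state = json.load(f)
    for i in range(state["next"], n_items):
        if i == fail_at:
            raise RuntimeError("power failure")  # energy buffer exhausted
        state["acc"] += i                        # the actual work
        state["next"] = i + 1
        with open(NVM, "w") as f:                # persist progress to NVM
            json.dump(state, f)
    os.remove(NVM)                               # task complete: clear checkpoint
    return state["acc"]

if os.path.exists(NVM):                          # start from a clean state
    os.remove(NVM)
try:
    run_task(10, fail_at=6)                      # first activation dies at item 6
except RuntimeError:
    pass
print(run_task(10))                              # second activation resumes; prints 45
```

On a real FRAM-based microcontroller the checkpoint write is a cheap in-place memory operation, which is what makes checkpointing after every small work unit affordable.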
Next, the work proposed an energy prediction model named LINE-P (Linear Energy Prediction), which builds upon sampling and approximation theory. LINE-P is more suitable for dual energy harvesting sources and various data time intervals than state-of-the-art models. The simulation results show that LINE-P’s prediction accuracy is up to ca. 98% for solar-based and up to ca. 96% for wind-based prediction.
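For illustration, a generic weighted linear energy predictor in the spirit of LINE-P can be sketched as follows; the weighting scheme, the fixed parameter `alpha`, and the synthetic solar-like profile are assumptions for this example, not the thesis model:

```python
import numpy as np

def predict_energy(history, alpha=0.7):
    """One-step-ahead harvested-energy prediction per time slot.
    history: (days, slots) array of past harvested energy, last row = today.
    Prediction for slot s today = alpha * mean of slot s on past days
                                + (1 - alpha) * energy observed in slot s-1 today.
    `alpha` is a fixed weighting parameter (an adaptive variant tunes it online)."""
    days, slots = history.shape
    seasonal = history[:-1].mean(axis=0)          # per-slot average of past days
    today = history[-1]
    pred = np.empty(slots)
    for s in range(slots):
        recent = today[s - 1] if s > 0 else seasonal[0]
        pred[s] = alpha * seasonal[s] + (1 - alpha) * recent
    return pred

# Synthetic solar-like profile: a daily half-sine plus small day-to-day noise.
rng = np.random.default_rng(5)
slots = 48                                        # e.g. 30-minute slots over a day
base = np.sin(np.linspace(0, np.pi, slots))
history = base + 0.05 * rng.standard_normal((5, slots))
pred = predict_energy(history)
err = np.mean(np.abs(pred - base)) / base.mean()
print(f"mean relative error: {err:.1%}")
```

A node can use such per-slot predictions to schedule communication tasks only for slots where sufficient harvested energy is expected.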
Thereafter, the work deployed a transient computing mechanism for bidirectional communication in which energy harvesting is used in combination with transient computing and the LINE-P energy prediction model. This allows communication tasks to be fired only if sufficient and stable energy is predicted. The results for a peer-to-peer wireless setup illustrate that the two combined modalities require only 15% of the node’s memory and yield an average receiving rate of up to 94.6%.
Finally, the work designed the Adaptive LINE-P model, which addresses the fixed weighting parameter issue by calculating adaptive weighting parameters based on the stored energy profiles. In addition, a profile compression method is proposed to reduce the memory requirements. The results illustrate that Adaptive LINE-P’s accuracy is up to 90–94% and that the compression method can reduce memory overheads by 50%.
Diagram and photograph of the experimental transient-computing peer-to-peer wireless setup. Source: Ahmed, F.; Kervadec, C.; Le Moullec, Y.; Tamberg, G.; Annus, P. (2018). Autonomous Wireless Sensor Networks: Implementation of Transient Computing and Energy Prediction for Improved Node Performance and Link Quality. The Computer Journal, 1–18