Statistical signal processing exam learning notes (Aalto)

Important concepts

Sufficient statistic

A statistic is a function of the data. A sufficient statistic is a statistic that captures all the information about the unknown parameter contained in the sample.

Consistency

An estimator is called consistent if it converges in probability to the true value of the parameter being estimated as the sample size grows to infinity.

Bias

In statistics, bias is the difference between an estimator's expected value and the true value of the parameter being estimated.

Fisher information

Fisher Information measures the amount of information that an observable data variable carries about an unknown parameter upon which the likelihood depends.

Score function

The score function is the gradient of the log-likelihood function with respect to the parameter vector.
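In symbols, for a log-likelihood $\ln f(y;\theta)$, the score and the Fisher information are linked as follows (a standard identity, stated here for the scalar case):

```latex
s(\theta) = \frac{\partial}{\partial \theta} \ln f(y;\theta), \qquad
E[s(\theta)] = 0, \qquad
I(\theta) = E\!\left[s(\theta)^2\right]
          = -E\!\left[\frac{\partial^2}{\partial \theta^2} \ln f(y;\theta)\right]
```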

Influence function

Influence function measures the sensitivity of an estimator to small changes or contamination in the data set.

Breakdown point

The breakdown point (breakpoint) of an estimator is a measure of the estimator's robustness. Specifically, it is the largest proportion of incorrect observations (outliers or gross errors) an estimator can handle before it may produce an arbitrarily large estimation error. It is often expressed as a percentage of the total data set.
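A quick illustration of the difference in breakdown points (the data are made up): one gross outlier ruins the mean but leaves the median untouched, reflecting breakdown points of 0% and 50% respectively.

```python
import statistics

clean = [1.0, 2.0, 3.0, 4.0, 5.0]
corrupted = [1.0, 2.0, 3.0, 4.0, 1e9]  # one observation replaced by an outlier

# The mean is dragged arbitrarily far by a single bad value (breakdown point 0%).
print(statistics.mean(clean), statistics.mean(corrupted))      # 3.0 vs about 2e8
# The median is unchanged until about half the data are corrupted (breakdown 50%).
print(statistics.median(clean), statistics.median(corrupted))  # 3.0 vs 3.0
```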

Modes of convergence and their relationships

  • Almost Sure Convergence
  • Convergence in r-th mean
  • Convergence in Probability
  • Convergence in Distribution (Weak Convergence)
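The standard implication chain between these modes (the converses fail in general; convergence in distribution implies convergence in probability only when the limit is a constant):

```latex
X_n \xrightarrow{\text{a.s.}} X \;\Longrightarrow\; X_n \xrightarrow{P} X
\;\Longrightarrow\; X_n \xrightarrow{d} X,
\qquad
X_n \xrightarrow{L^r} X \;\Longrightarrow\; X_n \xrightarrow{P} X \quad (r \geq 1)
```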

Efficiency of the estimator

The degree to which the estimator makes full use of the information in the sample data.

  • Relative Efficiency: of two unbiased estimators, the one with the smaller variance is said to be more efficient.
  • Asymptotic Efficiency: an estimator is said to be asymptotically efficient if its variance attains the Cramer-Rao Lower Bound as the sample size approaches infinity. An asymptotically efficient estimator is also consistent and asymptotically unbiased.

Asymptotic unbiasedness

An estimator is said to be asymptotically unbiased if the bias of the estimator approaches zero as the sample size increases indefinitely.

Bayes Risk

The Bayes risk of an estimator is its expected loss, where the expectation is taken over all possible values of the unknown parameter and all possible observations, weighted by their joint probability (which combines the prior and the likelihood of the observations). The Bayes estimator is the one that minimizes this risk.

Innovation

The measurement residual, or prediction error, is called the innovation. The innovation is the new information the latest measurement brings to the system.

Divergence of Kalman Filter

Divergence occurs when the actual estimation error variance significantly exceeds the value theoretically predicted by the filter. The error may grow unbounded even while the error covariance computed by the Kalman algorithm remains vanishingly small.

Array aperture and resolution

Array aperture: the size of the array measured in wavelengths.

Array resolution: the array's ability to distinguish between two closely spaced sources; a larger aperture yields finer angular resolution.

Array steering vector

It is a mathematical representation of how the phase of a signal varies across the elements of an array for a given direction; it is used to steer the beam towards a particular direction in space.

Signal and noise subspaces

Signal subspace: the part of the observation space spanned by the steering vectors of the signals of interest.

Noise subspace: This is the orthogonal complement to the signal subspace and contains everything not accounted for by the signal subspace, which is typically noise.

Beamformer

A beamformer is a system that processes signals from an array of sensors to selectively receive energy from a particular direction. It combines the signals from multiple elements in such a way as to enhance the signal from a specific location while suppressing signals from other directions.

M-estimation

M-estimation is a robust statistical technique used for parameter estimation in the presence of outliers, where the goal is to minimize a sum of a specified function of the residuals (differences between observed and fitted values) rather than the sum of squared residuals as in ordinary least squares. This approach reduces the influence of outliers by using a function that increases less rapidly than the square of the residuals, resulting in parameter estimates that are less sensitive to atypical data. The process typically involves an iterative method, such as iteratively reweighted least squares, to find the parameter values that minimize the objective function, making M-estimation a flexible and robust alternative to traditional estimation methods.
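A minimal sketch of M-estimation of a location parameter with the Huber loss, solved by iteratively reweighted least squares (the threshold c = 1.345 and the data are illustrative choices, not from the notes):

```python
def huber_location(data, c=1.345, tol=1e-8, max_iter=100):
    """Estimate a location parameter by IRLS with Huber weights."""
    mu = sorted(data)[len(data) // 2]  # start from the median (robust initial guess)
    for _ in range(max_iter):
        # Huber weight: 1 for small residuals, c/|r| for large ones,
        # so outliers are down-weighted instead of dominating the fit.
        weights = [1.0 if abs(x - mu) <= c else c / abs(x - mu) for x in data]
        mu_new = sum(w * x for w, x in zip(weights, data)) / sum(weights)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

data = [0.1, -0.3, 0.2, 0.0, -0.1, 8.0]  # one gross outlier
print(huber_location(data))  # about 0.25, far less affected than the mean (about 1.32)
```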

Expectation-Maximization

The Expectation-Maximization (EM) algorithm is a statistical technique for finding maximum likelihood estimates of parameters in probabilistic models, especially when the data is incomplete or has missing values. The EM algorithm alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These two steps are repeated until convergence.
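The E and M steps can be sketched for a two-component 1-D Gaussian mixture (the data, initialisation, and iteration count are illustrative assumptions):

```python
import math
import random

def em_gmm_1d(data, n_iter=200):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    # Crude initialisation: split the sorted data in half.
    xs = sorted(data)
    half = len(xs) // 2
    mu = [sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means and variances from responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk + 1e-6
    return pi, mu, var

random.seed(0)
data = ([random.gauss(-2, 0.5) for _ in range(200)]
        + [random.gauss(3, 0.8) for _ in range(200)])
pi, mu, var = em_gmm_1d(data)
print(sorted(mu))  # means recovered near -2 and 3
```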

Describe briefly the idea of the Minimum Variance Distortionless Response (MVDR) beamformer. What properties does this method have? How is its computation commonly done?

The MVDR beamformer aims to minimize the variance of the beamformer's output signal, subject to a constraint that the gain in the direction of the desired signal is equal to one. This means it allows the signal from the target direction to pass through without attenuation (distortionless response), while signals from other directions are attenuated. The beamformer weights can adapt to changes in signal and noise characteristics.

Computation:

  1. Covariance Matrix: Compute the covariance matrix of the received signal vector from the array sensors.
  2. Inversion: Invert the covariance matrix to find the beamforming weights.
  3. Constraint Application: Apply the distortionless constraint to maintain the gain in the target direction while minimizing output variance.
  4. Weight Calculation: Calculate the beamforming weights that minimize the output power while keeping the constraint in place.
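The steps above can be sketched in a few lines (a minimal illustration; the 8-element half-wavelength array and the single source at 20 degrees are assumed for the example):

```python
import numpy as np

def steering_vector(theta_deg, n_elem, d=0.5):
    """Uniform-linear-array steering vector; element spacing d in wavelengths."""
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(n_elem) * np.sin(theta))

def mvdr_weights(R, a):
    """MVDR weights w = R^{-1} a / (a^H R^{-1} a):
    minimise output power subject to unit gain in direction a."""
    Ri_a = np.linalg.solve(R, a)      # step 2: (implicit) inversion of R
    return Ri_a / (a.conj() @ Ri_a)   # steps 3-4: apply constraint, compute weights

# Step 1: covariance of one unit-power source at 20 degrees plus white noise.
n = 8
a_src = steering_vector(20.0, n)
R = np.outer(a_src, a_src.conj()) + 0.1 * np.eye(n)

w = mvdr_weights(R, a_src)
print(abs(w.conj() @ a_src))  # distortionless response: gain 1.0 toward the source
```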

Describe briefly the principles of FIR Wiener Filtering and give an example of how it could be used.

FIR Wiener filtering is a signal processing technique aimed at reducing noise in signals. It works by constructing a filter that minimizes the mean square error between the desired signal and the filter output. An FIR Wiener filter uses only the current and past input values over a finite window (hence "finite impulse response"), making it inherently stable and free from feedback loops.

For example, an FIR Wiener filter could be used in digital image processing to reduce noise in digital photos without blurring important edges and details.
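A minimal sketch in Python (the AR(1) signal model, the filter length, and the use of the clean signal to estimate the cross-correlation, as in a training scenario, are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a slowly varying AR(1) signal d observed in white noise.
N = 5000
d = np.zeros(N)
for n in range(1, N):
    d[n] = 0.95 * d[n - 1] + rng.standard_normal()
x = d + rng.standard_normal(N)   # noisy observation

# FIR Wiener filter of length L: solve the Wiener-Hopf normal equations R w = p,
# where R is the autocorrelation matrix of x and p is the cross-correlation
# between the desired signal d and the observation x.
L = 10
r_x = np.array([np.mean(x[: N - k] * x[k:]) for k in range(L)])
R = np.array([[r_x[abs(i - j)] for j in range(L)] for i in range(L)])
p = np.array([np.mean(d[k:] * x[: N - k]) for k in range(L)])
w = np.linalg.solve(R, p)

d_hat = np.convolve(x, w)[:N]    # causal FIR filtering with the Wiener weights
print(np.mean((x - d) ** 2), np.mean((d_hat - d) ** 2))  # the filter lowers the MSE
```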

What is the difference between common Kalman filter, Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF) and Particle Filter.

Kalman filter: optimal for linear systems with additive Gaussian noise.

EKF: handles non-linear systems by linearizing around the current estimate using a first-order Taylor expansion.

UKF: based on the idea that it is easier to approximate a probability distribution than an arbitrary nonlinear function or transformation; it propagates a deterministic set of sigma points through the true nonlinear model.

Particle filter: represents the posterior distribution by a set of weighted random samples (particles) propagated through the full nonlinear, possibly non-Gaussian model, so no Gaussian assumption is needed.

Difference between classical beamformer, MVDR beamformer, MUSIC

  1. Classical Beamformer:
  • Calculate the steering vector $a(\theta)$ for each look direction $\theta$.
  • The output power for each direction is $P(\theta) = a(\theta)^H x x^H a(\theta)$, where $x$ is the received signal vector (averaged over snapshots this becomes $a(\theta)^H R_{xx} a(\theta)$).
  • Plot $P(\theta)$ to find the peaks, which correspond to the estimated DoAs.
  2. MVDR (Capon) Beamformer:
  • Estimate the signal covariance matrix from the $N$ received snapshots $x_i$: $R_{xx} = \frac{1}{N} \sum_{i=1}^{N} x_i x_i^H$.
  • For each look direction $\theta$, calculate the steering vector $a(\theta)$.
  • The output power for each direction is $P(\theta) = \frac{1}{a(\theta)^H R_{xx}^{-1} a(\theta)}$.
  • Plot $P(\theta)$ to find the peaks, which correspond to the estimated DoAs.
  3. MUSIC Algorithm:
  • Estimate the covariance matrix $R_{xx}$ from the snapshots, just like in the MVDR method.
  • Perform an eigenvalue decomposition of $R_{xx}$ to separate it into signal and noise subspaces.
  • For each look direction $\theta$, calculate the steering vector $a(\theta)$.
  • The MUSIC pseudo-spectrum for each direction is $P_{MUSIC}(\theta) = \frac{1}{a(\theta)^H E_n E_n^H a(\theta)}$, where $E_n$ holds the eigenvectors spanning the noise subspace.
  • Plot $P_{MUSIC}(\theta)$ to find the peaks, which correspond to the estimated DoAs.
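The three spectra can be compared on a simulated uniform linear array (a minimal sketch; the array size, source directions, and noise level are illustrative assumptions):

```python
import numpy as np

n_elem, n_snap, d = 8, 500, 0.5   # elements, snapshots, spacing in wavelengths
doas = [-10.0, 25.0]              # true directions of arrival in degrees

def a_vec(theta_deg):
    """Steering vector of a uniform linear array for one look direction."""
    t = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(n_elem) * np.sin(t))

# Simulated snapshots: two uncorrelated unit-power sources plus white noise.
rng = np.random.default_rng(1)
A = np.column_stack([a_vec(t) for t in doas])
S = (rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))) / np.sqrt(2)
V = 0.1 * (rng.standard_normal((n_elem, n_snap)) + 1j * rng.standard_normal((n_elem, n_snap)))
X = A @ S + V
R = X @ X.conj().T / n_snap       # sample covariance matrix

# MUSIC: eigh sorts eigenvalues ascending, so the noise subspace is spanned
# by all but the eigenvectors of the 2 largest eigenvalues.
_, eigvec = np.linalg.eigh(R)
En = eigvec[:, :-2]

grid = np.arange(-90.0, 90.5, 0.5)
p_cl, p_mvdr, p_music = [], [], []
for theta in grid:
    a = a_vec(theta)
    p_cl.append(np.real(a.conj() @ R @ a))                        # classical
    p_mvdr.append(1 / np.real(a.conj() @ np.linalg.solve(R, a)))  # MVDR
    p_music.append(1 / np.real(a.conj() @ En @ En.conj().T @ a))  # MUSIC

def top_two_peaks(p):
    """Return the grid angles of the two largest local maxima of a spectrum."""
    p = np.asarray(p)
    peaks = [i for i in range(1, len(p) - 1) if p[i - 1] < p[i] > p[i + 1]]
    return sorted(grid[i] for i in sorted(peaks, key=lambda i: p[i])[-2:])

for name, p in [("classical", p_cl), ("MVDR", p_mvdr), ("MUSIC", p_music)]:
    print(name, top_two_peaks(p))  # each pair lies near -10 and 25 degrees
```

MUSIC typically gives the sharpest peaks, MVDR the next sharpest, and the classical beamformer the broadest, which is the resolution ordering the comparison is meant to show.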

Maximum likelihood estimate problem

  1. Construct the likelihood function by taking the product of the pdfs for all observations.
  2. Take the natural logarithm of the likelihood function.
  3. Differentiate the log-likelihood function with respect to the parameter being estimated.
  4. Find the value of the estimated parameter that sets the derivative to zero.
  5. To check whether the estimator is unbiased, verify that its expected value equals the true parameter value.
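As a worked example of the steps above (the exponential model is an illustrative choice): for i.i.d. $y_1, \dots, y_N \sim \text{Exp}(\lambda)$,

```latex
L(\lambda) = \prod_{i=1}^{N} \lambda e^{-\lambda y_i} = \lambda^N e^{-\lambda \sum_i y_i},
\qquad
\frac{\partial \ln L}{\partial \lambda} = \frac{N}{\lambda} - \sum_{i=1}^{N} y_i = 0
\;\Longrightarrow\;
\hat{\lambda}_{ML} = \frac{N}{\sum_i y_i} = \frac{1}{\bar{y}}
```

Here $E[\hat{\lambda}_{ML}] = \frac{N}{N-1}\lambda$, so the estimator is biased for finite $N$ but asymptotically unbiased.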

Cramer-Rao lower bound problem

  1. Compute the Fisher information. One way is the expected value of the square of the score function (the derivative of the log-likelihood); the other is the negative expectation of the second derivative of the log-likelihood.
  2. Compute the CRLB as the reciprocal of the Fisher information (in the scalar case).
  3. Check for unbiasedness: this form of the CRLB applies only to unbiased estimators.
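A worked example (Gaussian mean with known variance, an illustrative choice): for i.i.d. $y_i \sim \mathcal{N}(\mu, \sigma^2)$,

```latex
s(\mu) = \frac{1}{\sigma^2} \sum_{i=1}^{N} (y_i - \mu), \qquad
I(\mu) = -E\!\left[\frac{\partial s}{\partial \mu}\right] = \frac{N}{\sigma^2}, \qquad
\text{CRLB} = \frac{\sigma^2}{N}
```

The sample mean is unbiased with variance $\sigma^2/N$, so it attains the bound and is efficient.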

MAP and Mean square estimator problem

MAP:

  1. Specify the Likelihood Function
  2. The posterior distribution is proportional to the likelihood function times prior distribution.
  3. MAP estimate is found by maximizing the posterior distribution
  4. Differentiate the log of the posterior and solve the parameter.

Mean square estimator:

If the prior probability distributions of the noise and the parameter are known, the MMSE estimator can be explicitly calculated as the conditional expectation of the parameter given the observed data: $\hat{\theta}_{MS} = E[\theta \mid \mathbf{y}] = \int \theta \, f(\theta \mid \mathbf{y}) \, d\theta$.
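A worked Gaussian example (illustrative): observe $y = \theta + v$ with prior $\theta \sim \mathcal{N}(0, \sigma_\theta^2)$ and noise $v \sim \mathcal{N}(0, \sigma_v^2)$. The posterior is Gaussian, so its mode and mean coincide and the MAP and MMSE estimates agree:

```latex
\hat{\theta}_{MAP} = \hat{\theta}_{MS} = \frac{\sigma_\theta^2}{\sigma_\theta^2 + \sigma_v^2}\, y
```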

Method of Moments

Usually two moments are used: the mean and the variance. Equate the theoretical moments with the corresponding sample moments, then solve the resulting system of equations.
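A sketch beyond the trivial Gaussian case (the Gamma model and its parameters are illustrative assumptions): equate the first two theoretical moments of a Gamma distribution to the sample moments and solve for the shape and scale.

```python
import random
import statistics

random.seed(2)
# Illustrative data: Gamma with shape k=3, scale 2 (so mean 6, variance 12).
data = [random.gammavariate(3, 2) for _ in range(20000)]

m = statistics.mean(data)
v = statistics.pvariance(data)

# Theoretical moments of Gamma(k, scale): mean = k*scale, var = k*scale^2.
# Equating them to the sample moments and solving the two equations gives:
k_hat = m * m / v
scale_hat = v / m
print(k_hat, scale_hat)  # close to the true values 3 and 2
```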

Determine if the statistic is sufficient

According to the Fisher-Neyman factorization theorem, a statistic $T(Y)$ is sufficient for $\theta$ if and only if the joint probability density function (pdf) or probability mass function (pmf) of the sample can be factored into a product of two functions:

  1. One function $h(y)$ that depends only on the sample data $y$ and not on $\theta$,
  2. Another function $g(T(y), \theta)$ that depends on the sample data only through the statistic $T(y)$ and on $\theta$.
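For example, for i.i.d. Bernoulli($\theta$) observations the factorization is immediate, showing that $T(y) = \sum_i y_i$ is sufficient:

```latex
f(y;\theta) = \prod_{i=1}^{N} \theta^{y_i}(1-\theta)^{1-y_i}
            = \underbrace{\theta^{T(y)} (1-\theta)^{N - T(y)}}_{g(T(y),\,\theta)}
              \cdot \underbrace{1}_{h(y)}
```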

Find the least square estimator

Write down the squared residual. Minimize it by differentiating with respect to the parameters, setting the derivative to zero, and solving for the parameters.
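A minimal sketch for a straight-line model (the data and true parameters are illustrative): the squared residual is $\|y - H\theta\|^2$, and setting its gradient to zero gives the normal equations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative linear model: y = 2 + 0.5 x + noise.
x = np.linspace(0, 10, 100)
y = 2 + 0.5 * x + 0.1 * rng.standard_normal(100)

# Squared residual ||y - H theta||^2 with H = [1, x]; the zero-gradient
# condition is the normal equations H^T H theta = H^T y.
H = np.column_stack([np.ones_like(x), x])
theta = np.linalg.solve(H.T @ H, H.T @ y)
print(theta)  # close to the true [2, 0.5]
```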
