The Transactions of the Korean Institute of Electrical Engineers

  1. (Department of Automotive Engineering, Kookmin University, Seoul, Korea)
  2. (School of Electronic Engineering, Kumoh National Institute of Technology, Gumi-si, Gyeongsangbuk-do, Korea)



Keywords: variable step-size, affine projection algorithm, impulsive noise, p-norm

1. Introduction

The least-mean-square algorithm and its normalized version have been used in a wide range of applications, such as acoustic noise control, motion artifact cancellation, noise cancellation, channel equalization, and inverse modeling, because of their low computational complexity and ease of implementation [1]. In addition, the affine projection algorithm has been proposed to improve the convergence speed with colored input signals [2]-[6]. However, these algorithms suffer from performance degradation under impulsive noise, such as that shown in Fig. 1, because they are based on L2-norm optimization.

Recently, the affine projection sign algorithm (APSA), which is based on L1-norm optimization, has been proposed [7]. The APSA achieves a fast convergence speed and small misalignment errors under impulsive noise. However, when impulsive noise occurs with high probability, the APSA also suffers from performance degradation.

Fig. 1. Impulsive noises


Therefore, to overcome this problem, we propose a p-norm-like affine projection algorithm (APPA), which is obtained by minimizing the p-norm-like of an error vector. In addition, a variable step-size algorithm for the APPA is proposed to further improve the convergence speed and misalignment errors; it is derived by minimizing the mean-square deviation (MSD) of the APPA. The performance of the proposed algorithm is tested in a channel identification scenario. The simulation results confirm that the proposed algorithm achieves better performance than the APSA with both fixed and variable step sizes [8] under high-probability impulsive noise.

2. Proposed Algorithm

2.1 P-norm-like Affine Projection Algorithm (APPA)

Consider the data d(k) that is obtained from an unknown system

(1)
$$ d(k)=\boldsymbol{u}^{T}(k) \boldsymbol{w}_{o}+v(k) $$

where $\boldsymbol{w}_{o}$ is an n-dimensional column vector that we wish to estimate, $v(k)$ denotes measurement noise with variance $\sigma_{v}^{2}$, and the input vector is defined as

(2)
$$\boldsymbol{u}(k)=[u(k)\; u(k-1)\;\cdots\; u(k-n+1)]^{T}.$$

The output error vector, the desired output data vector, and the data matrix are defined as

(3)
$$\begin{aligned} \boldsymbol{e}(k)&=\boldsymbol{d}(k)- \boldsymbol{U}^{T}(k)\hat{\boldsymbol{w}}(k)\\ &=[e(k)\; e(k-1)\;\cdots\; e(k-M+1)]^{T}, \end{aligned}$$

(4)
$$\boldsymbol{d}(k)=[d(k)\; d(k-1)\;\cdots\; d(k-M+1)]^{T},$$

(5)
$$\boldsymbol{U}(k)=[\boldsymbol{u}(k)\;\boldsymbol{u}(k-1)\;\cdots\; \boldsymbol{u}(k-M+1)],$$

where $\hat{\boldsymbol{w}}(k)$ is the estimate of $\boldsymbol{w}_{o}$ at iteration $k$.

The proposed algorithm is derived by minimizing the p-norm-like of an error vector as follows:

(6)
$$\min _{\hat{\boldsymbol{w}}(k)}||\boldsymbol{d}(k)-\boldsymbol{U^{T}}(k)\hat{\boldsymbol{w}}(k)||_{p},$$

where $\|\boldsymbol{x}(k)\|_{p}=\sum_{i=1}^{n}|x_{i}(k)|^{p},\: 0\le p\le 1$ [9]. From (6), the proposed cost function is obtained as

(7)
$$J(\hat{\boldsymbol{w}}(k))=|| \boldsymbol{e}(k)||_{p}.$$
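To make the effect of $p$ concrete, the following short NumPy snippet (an illustration added here, not part of the original derivation) evaluates the p-norm-like cost of an error vector containing a single impulsive outlier; with $p<1$ the outlier contributes far less to the cost than under the L1 norm used by the APSA.

```python
import numpy as np

def pnorm_like(x, p):
    """p-norm-like of a vector: sum_i |x_i|^p (not a true norm for p < 1)."""
    return float(np.sum(np.abs(x) ** p))

# a short error vector containing one impulsive outlier
e = np.array([0.1, -0.2, 50.0])
print(pnorm_like(e, 1.0))   # 50.3  -- L1 norm, dominated by the outlier
print(pnorm_like(e, 0.5))   # about 7.83 -- the outlier is strongly de-emphasized
```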

To minimize the cost function, it is differentiated with respect to the weight vector $\hat{\boldsymbol{w}}(k)$ as follows:

(8)
$$ \begin{aligned} \nabla_{\hat{\boldsymbol{w}}(k)} J(\hat{\boldsymbol{w}}(k))&=\frac{\partial J(\hat{\boldsymbol{w}}(k))}{\partial \hat{\boldsymbol{w}}(k)}\\ &=-\sum_{m=0}^{M-1} \operatorname{sgn}(e(k-m))|e(k-m)|^{p-1} \boldsymbol{u}(k-m) \\ &=-\boldsymbol{U}(k) f_{p}(\boldsymbol{e}(k)), \end{aligned} $$

where $sgn(·)$ is the sign function, and

(9)
$$f_{p}(\boldsymbol{e}(k))=[\operatorname{sgn}(e(k))|e(k)|^{p-1}\;\operatorname{sgn}(e(k-1))|e(k-1)|^{p-1}\\ \cdots\; \operatorname{sgn}(e(k-M+1))|e(k-M+1)|^{p-1}]^{T}.$$

The update equation for the APPA is derived by the normalized gradient method [10] as follows:

(10)
$$ \begin{aligned} \hat{\boldsymbol{w}}(k+1)& = \hat{\boldsymbol{w}}(k)-\mu \frac{\nabla_{\hat{\boldsymbol{w}}(k)} J(\hat{\boldsymbol{w}}(k))}{\left\|\nabla_{\hat{\boldsymbol{w}}(k)} J(\hat{\boldsymbol{w}}(k))\right\|} \\ & = \hat{\boldsymbol{w}}(k)+\mu \frac{\boldsymbol{U}(k) f_{p}(\boldsymbol{e}(k))}{\sqrt{f_{p}^{T}(\boldsymbol{e}(k)) \boldsymbol{U}^{T}(k) \boldsymbol{U}(k) f_{p}(\boldsymbol{e}(k))}} ,\end{aligned} $$

where $\mu$ is a step size.
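For reference, a minimal NumPy sketch of one APPA iteration under the conventions of (2)-(5) is given below; the small `eps` regularizers are assumptions added to avoid division by zero and the singularity of $|e|^{p-1}$ at $e=0$, and are not part of the derivation.

```python
import numpy as np

def f_p(e, p, eps=1e-12):
    """Element-wise error nonlinearity of (9): sgn(e_i) |e_i|^(p-1).
    eps is a small regularizer (our addition) so that e_i = 0 does not blow up."""
    return np.sign(e) * (np.abs(e) + eps) ** (p - 1)

def appa_iteration(w_hat, U, d, mu, p, eps=1e-12):
    """One APPA weight update, Eq. (10).
    U : n-by-M data matrix [u(k) ... u(k-M+1)], d : length-M desired vector."""
    e = d - U.T @ w_hat                  # error vector, Eq. (3)
    g = U @ f_p(e, p, eps)               # update direction U(k) f_p(e(k))
    # sqrt(f_p^T U^T U f_p) equals the Euclidean norm of g
    w_next = w_hat + mu * g / (np.linalg.norm(g) + eps)
    return w_next, e, g
```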

2.2 Variable Step-Size APPA

We define the weight-error vector as $\widetilde{\boldsymbol{w}}(k)= \boldsymbol{w}_{o}- \hat{\boldsymbol{w}}(k)$. Subtracting both sides of (10) from $\boldsymbol{w}_{o}$ rewrites the update in terms of $\widetilde{\boldsymbol{w}}(k)$ as follows:

(11)
$$\widetilde{\boldsymbol{w}}(k+1)=\widetilde{\boldsymbol{w}}(k)-\mu\dfrac{\boldsymbol{U}(k)f_{p}(\boldsymbol{e}(k))}{\sqrt{f_{p}^{T}(\boldsymbol{e}(k))\boldsymbol{U}^{T}(k)\boldsymbol{U}(k)f_{p}(\boldsymbol{e}(k))}}.$$

The update recursion of the MSD is obtained by squaring both sides of (11) and taking the expectation; since the normalized update direction in (11) has unit Euclidean norm, its quadratic term contributes exactly $\mu^{2}$:

(12)
$$ \begin{aligned} {MSD}(k+1)& =E\left(\widetilde{\boldsymbol{w}}^{T}(k+1)\widetilde{\boldsymbol{w}}(k+1)\right)\\ & =E\left(\widetilde{\boldsymbol{w}}^{T}(k)\widetilde{\boldsymbol{w}}(k)\right)+g(k) \end{aligned} $$

where

(13)
$$g(k)=-2\mu E\left(\dfrac{f_{p}^{T}(\boldsymbol{e}(k))\boldsymbol{U}^{T}(k)\widetilde{\boldsymbol{w}}(k)}{\sqrt{f_{p}^{T}(\boldsymbol{e}(k))\boldsymbol{U}^{T}(k)\boldsymbol{U}(k)f_{p}(\boldsymbol{e}(k))}}\right)+\mu^{2}.$$

If $g(k)<0$, the MSD decreases monotonically. Minimizing MSD$({k}+1)$ with respect to the step size $\mu$ yields the optimal step size $\mu^{*}(k)$ of the APPA:

(14)
$$ \begin{aligned} \mu^{*}(k)& = E\left(\dfrac{f_{p}^{T}(\boldsymbol{e}(k))\boldsymbol{U}^{T}(k)\widetilde{\boldsymbol{w}}(k)}{\sqrt{f_{p}^{T}(\boldsymbol{e}(k))\boldsymbol{U}^{T}(k)\boldsymbol{U}(k)f_{p}(\boldsymbol{e}(k))}}\right)\\ & = E\left(\dfrac{f_{p}^{T}(\boldsymbol{e}(k))\boldsymbol{U}^{T}(k)\left(\boldsymbol{w}_{o}-\hat{\boldsymbol{w}}(k)\right)}{\sqrt{f_{p}^{T}(\boldsymbol{e}(k))\boldsymbol{U}^{T}(k)\boldsymbol{U}(k)f_{p}(\boldsymbol{e}(k))}}\right)\\ & = E\left(\dfrac{f_{p}^{T}(\boldsymbol{e}(k))(\boldsymbol{e}(k)- \boldsymbol{v}(k))}{\sqrt{f_{p}^{T}(\boldsymbol{e}(k))\boldsymbol{U}^{T}(k)\boldsymbol{U}(k)f_{p}(\boldsymbol{e}(k))}}\right)\\ & = E\left(\dfrac{\|\boldsymbol{e}(k)\|_{p}- f_{p}^{T}(\boldsymbol{e}(k))\boldsymbol{v}(k)}{\sqrt{f_{p}^{T}(\boldsymbol{e}(k))\boldsymbol{U}^{T}(k)\boldsymbol{U}(k)f_{p}(\boldsymbol{e}(k))}}\right), \end{aligned} $$

where

(15)
$$\boldsymbol{v}(k)=[v(k)\; v(k-1)\;\cdots\; v(k-M+1)]^{T}.$$

However, because the noise $v(k)$ is unknown, the exact value of the optimal step size $\mu^{*}(k)$ cannot be calculated. Therefore, the proposed step size is obtained by dropping the noise term:

(16)
$$\mu(k)=E\left(\dfrac{\|\boldsymbol{e}(k)\|_{p}}{\sqrt{f_{p}^{T}(\boldsymbol{e}(k))\boldsymbol{U}^{T}(k)\boldsymbol{U}(k)f_{p}(\boldsymbol{e}(k))}}\right).$$

Since the expectation in (16) is difficult to evaluate directly, we propose to calculate $\mu(k+1)$ recursively by time averaging as follows:

(17)
$$\begin{aligned} \hat\mu(k+1)& = \alpha\hat\mu(k)\\ & +(1-\alpha)\min\left(\dfrac{\|\boldsymbol{e}(k)\|_{p}}{\sqrt{f_{p}^{T}(\boldsymbol{e}(k))\boldsymbol{U}^{T}(k)\boldsymbol{U}(k)f_{p}(\boldsymbol{e}(k))}},\:\hat\mu(k)\right) \end{aligned}$$

with a smoothing factor $\alpha$ $(0\le\alpha <1)$. If $p=1$, this algorithm behaves like the variable step-size APSA [8].
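Continuing the sketch of Section 2.1, the step-size recursion (17) can be written as follows (again with an assumed `eps` regularizer); for $p=1$ it reduces to the VSS-APSA step size, as noted above.

```python
import numpy as np

def vss_appa_step_size(mu_hat, e, g, p, alpha, eps=1e-12):
    """Step-size recursion (17), reusing the error vector e(k) and the update
    direction g = U(k) f_p(e(k)) returned by appa_iteration() above."""
    # ||g||_2 equals sqrt(f_p^T(e) U^T U f_p(e)); the min() keeps mu non-increasing
    mu_inst = np.sum(np.abs(e) ** p) / (np.linalg.norm(g) + eps)
    return alpha * mu_hat + (1.0 - alpha) * min(mu_inst, mu_hat)
```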

Fig. 2. Illustration of the relationship between $\mu(k)$ and $g(k)$


Fig. 2 illustrates the relationship between $\mu(k)$ and $g(k)$. The optimal step size $\mu^{*}(k)$ leads to the largest decrease in the MSD because $g(k)$ attains its smallest value at $\mu(k)=\mu^{*}(k)$. Since $\mu^{*}(k)$ cannot be calculated, however, we use the step size (17), which lies within an interval around $\mu^{*}(k)$ whose width is related to the measurement noise. Therefore, the MSD of the proposed step-size algorithm decreases monotonically as long as this interval does not extend beyond $2\mu^{*}(k)$.

2.3 Simulation Results

Computer simulations of channel identification are used to evaluate the performance of the proposed algorithm. In these simulations, the unknown channel is the acoustic impulse response of a room truncated to 512 taps ($n=512$), as shown in Fig. 3, and we assume that the adaptive filter and the unknown channel have the same number of taps.

Fig. 3. Acoustic impulse response of a room.


The colored input signals are generated by filtering white Gaussian noise through a first-order system as follows:

(18)
$$G(z)=\dfrac{1}{1-0.9z^{-1}}.$$

The signal-to-noise ratio (SNR) and the normalized mean squared deviation (NMSD) are defined as

(19)
$$SNR = 10\log_{10}\left(\dfrac{E[y^{2}(k)]}{E[v^{2}(k)]}\right),$$

(20)
$$NMSD = 10\log_{10}\left(\dfrac{E[\widetilde{\boldsymbol{w}}^{T}(k)\widetilde{\boldsymbol{w}}(k)]}{E[\boldsymbol{w}_{o}^{T}\boldsymbol{w}_{o}]}\right),$$

where $y(k)= \boldsymbol{u}^{T}(k)\boldsymbol{w}_{o}$. The measurement noise is added to the output $y(k)$ such that the SNR is 30 dB. The impulsive noise $\eta(k)$ is generated as $\eta(k)=\kappa(k)A(k)$, where $\kappa(k)$ is a Bernoulli process with probability of success $P[\kappa(k)=1]= Pr$, and $A(k)$ is a zero-mean Gaussian random variable with power $\sigma^{2}_{A}= 1000\sigma^{2}_{y}$. Each adaptive filter is tested with $M=4$, and $Pr$, the probability that impulsive noise occurs, is set to 0.5. The simulation results are obtained by ensemble averaging over 10 trials.
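A single-trial sketch of this setup is given below; it uses a synthetic decaying random channel in place of the measured room impulse response of Fig. 3, omits the 10-trial ensemble averaging, and the random seed, signal length `N`, and `eps` regularizer are assumptions.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
n, M, p, Pr, N = 512, 4, 0.5, 0.5, 20000
alpha, eps = 0.8, 1e-12

# stand-in channel: a decaying random vector instead of the measured room response
w_o = rng.standard_normal(n) * np.exp(-np.arange(n) / 64.0)

# colored input: white Gaussian noise filtered through G(z) = 1/(1 - 0.9 z^-1), Eq. (18)
u = lfilter([1.0], [1.0, -0.9], rng.standard_normal(N))

y = lfilter(w_o, [1.0], u)                                  # y(k) = u^T(k) w_o
v = np.sqrt(np.var(y) / 1000.0) * rng.standard_normal(N)    # 30 dB SNR, Eq. (19)
eta = (rng.random(N) < Pr) * rng.normal(0.0, np.sqrt(1000.0 * np.var(y)), N)
d = y + v + eta                                             # observed output

w_hat = np.zeros(n)
mu_hat = np.sqrt(np.var(d) / (M * np.var(u)))               # initial step size mu(0)
nmsd_db = []
for k in range(n + M - 1, N):
    # U(k) = [u(k) ... u(k-M+1)], each column a length-n regressor, Eqs. (2), (5)
    U = np.column_stack([u[k - m - n + 1:k - m + 1][::-1] for m in range(M)])
    dk = d[k - M + 1:k + 1][::-1]
    e = dk - U.T @ w_hat
    g = U @ (np.sign(e) * (np.abs(e) + eps) ** (p - 1))
    denom = np.linalg.norm(g) + eps
    w_hat = w_hat + mu_hat * g / denom                      # APPA update, Eq. (10)
    mu_hat = alpha * mu_hat + (1 - alpha) * min(np.sum(np.abs(e) ** p) / denom, mu_hat)
    nmsd_db.append(10 * np.log10(np.sum((w_o - w_hat) ** 2) / np.sum(w_o ** 2)))
```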

Fig. 4. MSD learning curves of APPAs at $(\mu =0.005)$ for colored input generated by G(z) and impulsive noises with $Pr = 0.5$


Fig. 5. MSD learning curves of VSS-APPAs for colored input generated by G(z) and impulsive noises with $Pr = 0.5$


Fig. 4 shows the NMSD learning curves for the APPAs with a fixed step size $(\mu =0.005)$. Under high-probability impulsive noise, the proposed APPAs with small $p$ have smaller misalignment errors than the APPA with $p=1$, which corresponds to the APSA. Fig. 5 shows the NMSD learning curves of the VSS-APPAs for $\alpha = 0.8$ and $\mu(0)=\sqrt{\dfrac{\sigma^{2}_{d}}{M\sigma^{2}_{u}}}$, where $\sigma^{2}_{d}$ and $\sigma^{2}_{u}$ are the powers of the observed output and input, respectively. As can be seen, the proposed VSS-APPAs have smaller misalignment errors than the VSS-APSA when $p$ is set to less than 1.

3. Conclusion

In this letter, a p-norm-like affine projection algorithm (APPA) and its variable step-size version have been proposed. The APPA was obtained by minimizing a cost function consisting of the p-norm-like of an error vector, which improves its channel identification performance in impulsive noise environments. In addition, to improve the convergence speed in the transient state and the channel estimation accuracy in the steady state, a step-size algorithm for the APPA was derived from MSD minimization. The simulation results showed that the proposed algorithm outperforms the VSS-APSA in high-probability impulsive noise environments. The proposed algorithms can be applied to underwater acoustic communication systems, active noise cancellation systems, and channel estimation in broadband communication systems.

Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP; Ministry of Science, ICT & Future Planning) (No. 2017R1C1B5017968). This work was also supported by the Soonchunhyang University Research Fund.

References

1 
S. Haykin, 2002, Adaptive Filter Theory, 4th ed., Prentice-Hall, Upper Saddle River, NJ
2 
K. Ozeki, T. Umeda, 1984, An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties, Electronics and Communications in Japan, Vol. 67-A, No. 5, pp. 19-27
3 
H.-C. Shin, A. H. Sayed, W.-J. Song, Feb 2004, Variable step-size NLMS and affine projection algorithms, IEEE Signal Process. Lett., Vol. 11, No. 2, pp. 132-135
4 
S.-J. Kong, K.-Y. Hwang, W.-J. Song, Aug 2007, An affine projection algorithm with dynamic selection of input vectors, IEEE Signal Process. Lett., Vol. 14, No. 8, pp. 529-532
5 
S.-E. Kim, S.-J. Kong, W.-J. Song, Nov 2009, An affine projection algorithm with evolving order, IEEE Signal Process. Lett., Vol. 16, No. 11, pp. 937-940
6 
Y. Fan, J. Zhang, Aug 2009, Variable step-size affine projection algorithm with exponential smoothing factors, Electron. Lett., Vol. 45, No. 17, pp. 911-913
7 
T. Shao, Y. R. Zheng, J. Benesty, 2010, An affine projection sign algorithm robust against impulsive interference, IEEE Signal Process. Lett., Vol. 17, No. 4, pp. 327-330
8 
J. Shin, J. Yoo, P. Park, 2012, Variable step-size affine projection sign algorithm, Electron. Lett., Vol. 48, No. 9, pp. 483-485
9 
B. D. Rao, K. K. Delgado, 1999, An affine scaling methodology for best basis selection, IEEE Trans. Signal Process., Vol. 47, No. 1, pp. 187-200
10 
Y.-S. Choi, H.-C. Shin, W.-J. Song, 2007, Adaptive regularization matrix for affine projection algorithm, IEEE Trans. Circuits Syst. II, Exp. Briefs, Vol. 54, No. 12, pp. 1087-1091

About the Authors

유진우 (JinWoo Yoo)

He received his B.S., M.S., and Ph.D. degrees in electrical engineering from Pohang University of Science and Technology (POSTECH) in 2009, 2011, and 2015, respectively.

He was a senior engineer at Samsung Electronics from 2015 to 2019.

He is currently an assistant professor in the department of automotive engineering at Kookmin University.

His current research interests are signal/image processing and autonomous driving.

박범용 (Bum Yong Park)

He received his M.S. and Ph.D. degrees in Electrical and Electronic Engineering from POSTECH (Pohang University of Science and Technology), Pohang, Korea, in 2011 and 2015, respectively.

He joined KIT (Kumoh National Institute of Technology), Gumi, Korea, in 2017 and is currently an assistant professor at School of Electronic Engineering in KIT.

His research interests include robust control and signal processing for embedded control systems, and robot manipulator systems.

신재욱 (JaeWook Shin)

He received his B.S. degree in electrical engineering and computer science at Kyungpook National University, Korea, in 2008, and his M.S. and Ph.D. degrees in electrical engineering at Pohang University of Science and Technology (POSTECH), Korea, in 2010 and 2014, respectively.

Since 2017, he has been affiliated with the Department of Medical and Mechatronics Engineering, Soonchunhyang University, where he is currently a professor.

His current research interests include adaptive filter, robust control, and biomedical signal processing.