4 Algorithm for Linearly Time-Varying Second Order Blind Identification

This section introduces the new Linearly Time-Varying Second Order Blind Identification (LTV-SOBI) algorithm, which pursues improved mathematical accuracy under the TV-SOS model (3.1). The algorithm is developed from Yeredor’s (2003) original TV-SOBI algorithm, but does not require assumption \((B4)\) of model (3.1). LTV-SOBI consists of three main steps, which use sample autocovariance matrices and apply joint diagonalization after a suitable decomposition; various matrix operations are heavily utilized throughout.

4.1 Decomposition of Autocovariance Structure

Being a second-order approach, the LTV-SOBI algorithm is based on the sample autocovariances of the pre-centered observations. As previously shown in equation (3.2), the autocovariances are

\[\begin{equation} \begin{aligned} \text{Cov}(\boldsymbol x_t, \boldsymbol x_{t+\tau}') & = \mathbb E(\boldsymbol x_t \boldsymbol x_{t+\tau}') \\ & = \mathbb E[( \boldsymbol I + t \boldsymbol{\mathcal E}) \boldsymbol\Omega_0 \boldsymbol z_t \ \boldsymbol z_{t+\tau}' \boldsymbol\Omega_0' [ \boldsymbol I + (t + \tau) \boldsymbol{\mathcal E}]'] \\ &= \underline {\boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol \Omega_0'} + t (\underline{ \boldsymbol{\mathcal E} \boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol \Omega_0' + \boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol \Omega_0' \boldsymbol{\mathcal E}'}) \\ &\ \ \ \ \ \ + t(t+\tau) ( \underline{\boldsymbol{\mathcal E} \boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol \Omega_0' \boldsymbol{\mathcal E}'}) + \tau ( \underline {\boldsymbol{\mathcal E} \boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol \Omega_0'}) \end{aligned}, \tag{4.1} \end{equation}\]

where \(\tau\in\{0\} \cup L\) and \(L = \{\tau_1, \tau_2,\dots, \tau_{l}\}\) is the set of pre-defined lags. The stationarity assumption means that \(\boldsymbol \Lambda_\tau = \mathbb E( \boldsymbol z_ t \boldsymbol z_{t+\tau}')\) depends only on \(\tau\in\{0\}\cup L\) and not on \(t=1,2,\dots, T-\tau\). Instead of Yeredor’s approximation methodology, the proposed LTV-SOBI algorithm seeks to improve accuracy by preserving all terms of the autocovariance matrices.

After decomposition, the observations are summarized into \(l+1\) autocovariance matrices, one for each pre-selected lag plus one for the sample covariance matrix \(\mathbb E( \boldsymbol x_t \boldsymbol x_t')\). Considering element-wise equivalence for \(i,j=1,2,\dots,p\), the autocovariance structure in (4.1) is equivalent to \(\boldsymbol S_\tau = \boldsymbol H^*_\tau \boldsymbol \beta^*_\tau\), defined as

\[\begin{equation} \underbrace{\begin{bmatrix} \boldsymbol x_1 \boldsymbol x_{1+\tau}'\ [i,j] \\ \boldsymbol x_2 \boldsymbol x'_{2+\tau}\ [i,j] \\ \vdots \\ \boldsymbol x_{T-\tau} \boldsymbol x_{T}'\ [i,j]\end{bmatrix}}_{:= \boldsymbol S_\tau} = \underbrace {\begin{bmatrix} 1 & 1 & 1(1+\tau) &\tau\\ 1 & 2 & 2(2+\tau) &\tau \\ \vdots &\vdots &\vdots &\vdots\\ 1 & T-\tau & (T-\tau)T &\tau \end{bmatrix}} _ {:= \boldsymbol H^*_\tau} \underbrace {\begin{bmatrix} (\boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0') \ [i,j]\\ ({\boldsymbol{\mathcal E}\, \boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' + \boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' \, \boldsymbol{\mathcal E}'})\ [i,j]\\ ({ \boldsymbol{\mathcal E}\, \boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' \, \boldsymbol{\mathcal E}'}) \ [i,j] \\ ( {\boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' \, \boldsymbol{\mathcal E}'}) \ [i,j]\end{bmatrix}} _ {:= \boldsymbol \beta^*_\tau}\ . \tag{4.2} \end{equation}\]

As the \(p\)-vector \(\boldsymbol{x}\) is the only available observation, element-wise linear regression is a straightforward way to decompose the autocovariances into the structure of (4.1). The difficulty with equation (4.2) is that the last column of \(\boldsymbol H^*_\tau\) is constant for a given \(\tau\) and hence collinear with the intercept column; fortunately, the modified form (4.3) resolves this by merging the fourth element of \(\boldsymbol \beta^*_\tau\) into the first.

\[\begin{equation} \underbrace{\begin{bmatrix} \boldsymbol x_1 \boldsymbol x_{1+\tau}'\ [i,j] \\ \boldsymbol x_2 \boldsymbol x'_{2+\tau}\ [i,j] \\ \vdots \\ \boldsymbol x_{T-\tau} \boldsymbol x_{T}'\ [i,j]\end{bmatrix}}_{:= \boldsymbol S_\tau} = \underbrace {\begin{bmatrix} 1 & 1 & 1(1+\tau) \\ 1 & 2 & 2(2+\tau) \\ \vdots &\vdots &\vdots \\ 1 & T-\tau & (T-\tau)T \end{bmatrix}} _ {:= \boldsymbol H_\tau} \underbrace {\begin{bmatrix} (\boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' + \tau {\boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' \, \boldsymbol{\mathcal E}'}) \ [i,j]\\ ({\boldsymbol{\mathcal E}\, \boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' + \boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' \, \boldsymbol{\mathcal E}'})\ [i,j]\\ ({ \boldsymbol{\mathcal E}\, \boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' \, \boldsymbol{\mathcal E}'}) \ [i,j] \end{bmatrix}} _ {:= \boldsymbol \beta_\tau}\ . \tag{4.3} \end{equation}\]

Therefore, a linear regression can be fitted for each \(i,j = 1,2,\dots,p\) and each \(\tau\in\{0\}\cup L\) in the form \(\boldsymbol S_\tau[i,j] = \boldsymbol H_\tau \boldsymbol\beta_\tau[i,j]\), where \(\boldsymbol S_\tau\) is known and \(\boldsymbol H_\tau\) is the design matrix in (4.3); looping over all elements then yields the complete estimate of \(\boldsymbol\beta_\tau\). For better efficiency, the following vectorized form is generally recommended as an alternative to looping over each \(i,j\).

\[\begin{equation} \underbrace{\begin{bmatrix} \text{vec}(\boldsymbol x_1 \boldsymbol x_{1+\tau}') \\ \text{vec}(\boldsymbol x_2 \boldsymbol x'_{2+\tau}) \\ \vdots \\ \text{vec}(\boldsymbol x_{T-\tau} \boldsymbol x_{T}')\end{bmatrix}}_{:= \text{vec}(\boldsymbol S_\tau)} = \underbrace {\begin{bmatrix} 1 & 1 & 1(1+\tau) \\ 1 & 2 & 2(2+\tau) \\ \vdots &\vdots &\vdots \\ 1 & T-\tau & (T-\tau)T \end{bmatrix} \otimes \boldsymbol I_{p^2}} _ {:= \boldsymbol H_\tau \otimes \boldsymbol I_{p^2}} \ \ \underbrace {\begin{bmatrix} \text{vec}(\boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' + \tau {\boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' \, \boldsymbol{\mathcal E}'}) \\ \text{vec}({\boldsymbol{\mathcal E}\, \boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' + \boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' \, \boldsymbol{\mathcal E}'}) \\ \text{vec}({ \boldsymbol{\mathcal E}\, \boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' \, \boldsymbol{\mathcal E}'}) \end{bmatrix}} _ {:= \text{vec}(\boldsymbol \beta_\tau)}\ . \tag{4.4} \end{equation}\]

Classical linear model theory gives the unique least squares estimator of \(\boldsymbol\beta_\tau\) in (4.4) as \(\text{vec} (\widehat{\boldsymbol \beta}_\tau) = \big[ (\mathbf H_\tau \otimes \boldsymbol I_{p^2})' (\mathbf H_\tau \otimes \boldsymbol I_{p^2})\big]^{-1} (\mathbf H_\tau \otimes \boldsymbol I_{p^2})' \text{vec}(\mathbf S_\tau)\), which coincides with the maximum likelihood estimator under Gaussian errors (e.g., Myers, 1990), and the inverse of vectorization is straightforward. In conclusion, the first step of LTV-SOBI decomposes the autocovariance matrices into \(3(l + 1)\) matrices, as defined in (4.5), that contain the second-order information of the observations; \(\boldsymbol \beta_{2,\tau}\) can be viewed as a partially time-varying autocovariance and \(\boldsymbol \beta_{3,\tau}\) as a fully time-varying autocovariance. For all \(\tau \in \{0\}\cup L\), the following are fully estimated:

\[\begin{equation} \begin{cases} \widehat {\boldsymbol\beta}_{1,\tau} = \boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol\Omega_0' +\tau \, {\boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol\Omega_0'\, \boldsymbol{\mathcal E}'} \\ \widehat {\boldsymbol\beta}_{2,\tau} = {\boldsymbol{\mathcal E} \boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol\Omega_0' + \boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol\Omega_0'\, \boldsymbol{\mathcal E}'}\\ \widehat {\boldsymbol\beta}_{3,\tau} ={ \boldsymbol{\mathcal E} \boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \ \boldsymbol\Omega_0' \boldsymbol{\mathcal E}'} \end{cases}. \tag{4.5} \end{equation}\]
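A minimal R sketch of this first step is given below, assuming a pre-centered \(T\times p\) observation matrix `X` and a single lag `tau`; the function name `decompose_autocov` and all object names are illustrative and not taken from the ltvsobi implementation.

```r
## Minimal sketch of the autocovariance decomposition (4.3)-(4.5) for one lag tau.
## X is assumed to be a pre-centred T x p observation matrix; names are illustrative.
decompose_autocov <- function(X, tau) {
  Tt <- nrow(X); p <- ncol(X); n <- Tt - tau
  ## response: row t holds vec(x_t x_{t+tau}'), t = 1, ..., T - tau
  S <- t(sapply(seq_len(n), function(t) as.vector(tcrossprod(X[t, ], X[t + tau, ]))))
  ## design matrix H_tau of (4.3): intercept, t and t(t + tau)
  t_idx <- seq_len(n)
  H <- cbind(1, t_idx, t_idx * (t_idx + tau))
  ## element-wise least squares, done for all p^2 entries at once
  B <- solve(crossprod(H), crossprod(H, S))     # 3 x p^2 coefficient matrix
  list(beta1 = matrix(B[1, ], p, p),            # Omega0 Lambda Omega0' + tau Omega0 Lambda Omega0' E'
       beta2 = matrix(B[2, ], p, p),            # E Omega0 Lambda Omega0' + Omega0 Lambda Omega0' E'
       beta3 = matrix(B[3, ], p, p))            # E Omega0 Lambda Omega0' E'
}
```

Because the vectorized design matrix in (4.4) is \(\boldsymbol H_\tau \otimes \boldsymbol I_{p^2}\), fitting the \(p^2\) separate regressions with the same \(\boldsymbol H_\tau\), as done above, gives exactly the estimator stated in the text.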

4.2 Finding \(\boldsymbol \Omega_0\) with Approximate Joint Diagonalization

In the TV-SOS model (3.1), the matrices \(\widehat{\boldsymbol\beta}_{1,\tau},\ \widehat{\boldsymbol\beta}_{2,\tau}\) and \(\widehat{\boldsymbol\beta}_{3,\tau}\) have no special structure apart from the symmetry of the latter two. Hence, this step processes the results in (4.5) further so that the model assumptions, especially the diagonality of \(\boldsymbol\Lambda_\tau\), can be exploited. Since \(\widehat {\boldsymbol\beta}_{1,\tau} + \widehat {\boldsymbol\beta}_{1,\tau}'= \boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol\Omega_0' + \tau \, {\boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol\Omega_0'\, \boldsymbol{\mathcal E}'} + \boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol\Omega_0' + \tau \, {\boldsymbol{\mathcal E} \boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol\Omega_0' } = 2 \boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol\Omega_0' + \tau\, \widehat {\boldsymbol\beta}_{2,\tau}\), the representation in (4.6) follows, where \(\boldsymbol R_\tau\) is a short-hand notation for \(\boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol\Omega_0'\); that is,

\[\begin{equation} \boldsymbol R_\tau \overset{\text{def}}= \boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \ \boldsymbol\Omega_0' = \frac 1 2 \bigg( \widehat {\boldsymbol\beta}_{1,\tau} + \widehat {\boldsymbol\beta}_{1,\tau}' - \tau \widehat {\boldsymbol\beta}_{2,\tau} \bigg) \text{ for all }\tau\in \{0\}\cup L \ . \tag{4.6} \end{equation}\]
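Given the decomposition sketched in the previous section, (4.6) translates into a one-line helper; `dec` is assumed to be the list returned by the hypothetical `decompose_autocov` for lag `tau`.

```r
## Sketch: assemble R_tau of (4.6) from the decomposed beta matrices (illustrative names).
R_tau <- function(dec, tau) 0.5 * (dec$beta1 + t(dec$beta1) - tau * dec$beta2)
```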

Although both \(\boldsymbol \Omega_0\) and \(\boldsymbol \Lambda_\tau\) are unknown, the matrices \(\boldsymbol\Omega_0 \boldsymbol\Lambda_\tau \boldsymbol\Omega_0'\) can be handled by approximate joint diagonalization (Clarkson & Jennrich, 1988), which is possible because of the diagonality of \(\boldsymbol\Lambda_\tau\), i.e. the uncorrelatedness of the sources (Belouchrani et al., 1997; Li & Zhang, 2007; Miettinen et al., 2017; Yeredor, 2002). Unlike the sample covariance matrix used in SOBI, LTV-SOBI has to use \(\boldsymbol R_0\) due to the more complex mixing mechanism. The joint diagonalization task of this step can be formulated as solving for \(\boldsymbol{\Omega}_0\) from the known matrices \(\boldsymbol R_{0},\ \boldsymbol R_{\tau_1}, \dots, \boldsymbol R_{\tau_l}\) under the conditions in (4.7).

\[\begin{equation} \begin{cases} \boldsymbol{R}_0 = \boldsymbol{\Omega}_0 \boldsymbol{\Omega}_0' & \Leftrightarrow\ \boldsymbol{\Omega}_0^{-1} \boldsymbol{R}_0 \boldsymbol{\Omega}_0^{-1'} = \boldsymbol{I} \ \ \ (\text{whitening restriction}) \\ \boldsymbol{R}_{\tau_1} = \boldsymbol{\Omega}_0 \boldsymbol{\Lambda}_{\tau_1} \boldsymbol{\Omega}_0' & \Leftrightarrow\ \boldsymbol{\Omega}_0^{-1} \boldsymbol{R}_{\tau_1} \boldsymbol{\Omega}_0^{-1'} = \boldsymbol{\Lambda}_{\tau_1} , \ \text{ and } \boldsymbol \Lambda_{\tau_1} \text{ is diagonal}\\ \boldsymbol{R}_{\tau_2} = \boldsymbol{\Omega}_0 \boldsymbol{\Lambda}_{\tau_2} \boldsymbol{\Omega}_0' & \Leftrightarrow\ \boldsymbol{\Omega}_0^{-1} \boldsymbol{R}_{\tau_2} \boldsymbol{\Omega}_0^{-1'} = \boldsymbol{\Lambda}_{\tau_2}, \ \text{ and } \boldsymbol \Lambda_{\tau_2} \text{ is diagonal}\\ \vdots \\ \boldsymbol{R}_{\tau_l} = \boldsymbol{\Omega}_0 \boldsymbol{\Lambda}_{\tau_l} \boldsymbol{\Omega}_0' & \Leftrightarrow\ \boldsymbol{\Omega}_0^{-1} \boldsymbol{R}_{\tau_l} \boldsymbol{\Omega}_0^{-1'} = \boldsymbol{\Lambda}_{\tau_l}, \ \text{ and } \boldsymbol\Lambda_{\tau_l} \text{ is diagonal} \end{cases} \ . \tag{4.7} \end{equation}\]

It is not possible to exactly diagonalize all \(\boldsymbol\Lambda_{\tau_1},\ \boldsymbol\Lambda_{\tau_2}, \dots, \boldsymbol\Lambda_{\tau_l}\) simultaneously. However, approximate joint diagonalization has been shown to provide good estimates of \(\boldsymbol\Lambda_{\tau_1}, \boldsymbol\Lambda_{\tau_2}, \dots\) (Miettinen et al., 2016). The method is based on minimizing the sum of squared off-diagonal elements \(\sum_{\tau=\tau_1}^{\tau_l} ||\text{off}(\boldsymbol\Lambda_{\tau})||^2\). The established algorithm of Miettinen et al. (2016) efficiently finds an orthogonal matrix \(\boldsymbol{V}\) that minimizes \(\sum_{\tau=\tau_1}^{\tau_l}||\text{off}(\boldsymbol{VR}_{\tau} \boldsymbol{V}')||^2\) for a given set of matrices \(\boldsymbol R_{\tau_1},\dots, \boldsymbol R_{\tau_l}\). To obtain the non-orthogonal matrix \(\boldsymbol \Omega_0\), the LTV-SOBI algorithm requires a whitening step: for all \(\tau \in L\), \(\boldsymbol R_\tau\) is whitened by \(\boldsymbol R_0^{-1/2}\), giving the whitened matrices \(\tilde {\boldsymbol{R}}_\tau = \boldsymbol{R}_{0}^{-1/2}\ \boldsymbol {R}_\tau\ \boldsymbol {R}_0^{-1/2\ '}\). Joint diagonalization is then applied to find an orthogonal \(p\times p\) matrix \(\boldsymbol V\) that maximizes the diagonality of \(\boldsymbol \Lambda_{\tau_1},\ \boldsymbol \Lambda_{\tau_2},\dots, \boldsymbol \Lambda_{\tau_l}\) in

\[\begin{equation} \begin{cases} \boldsymbol V \tilde {\boldsymbol {R}}_{\tau_1} \boldsymbol V' & = \boldsymbol\Lambda_{\tau_1} \\ \boldsymbol V \tilde {\boldsymbol {R}}_{\tau_2} \boldsymbol V' & = \boldsymbol\Lambda_{\tau_2} \\ &\vdots \\ \boldsymbol V \tilde {\boldsymbol {R}}_{\tau_l} \boldsymbol V' & = \boldsymbol\Lambda_{\tau_l} \\ \end{cases}\ . \tag{4.8} \end{equation}\]

Joint diagonalization steps are concluded with unwhitening, that is, \(\boldsymbol \Omega_0\) is found by \(\boldsymbol\Omega_0 = \boldsymbol{R}_0^{1/2} \boldsymbol V\).
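The whitening, joint diagonalization and unwhitening described above may be sketched as follows; `JADE::rjd` is used for the approximate joint diagonalization (it returns an orthogonal `V` such that each input matrix is approximately `V D V'`), while `mat_sqrt`, `estimate_Omega0` and all object names are illustrative helpers rather than parts of the ltvsobi implementation.

```r
## Sketch of step 2: whiten each R_tau by R_0^{-1/2}, jointly diagonalize, unwhiten.
## R0 and Rs (a list of R_{tau_1}, ..., R_{tau_l}) are assumed to come from (4.6);
## requires the JADE package.
mat_sqrt <- function(A) {                  # symmetric matrix square root via eigendecomposition
  e <- eigen(A, symmetric = TRUE)
  e$vectors %*% diag(sqrt(pmax(e$values, 0))) %*% t(e$vectors)
}

estimate_Omega0 <- function(R0, Rs) {
  W <- solve(mat_sqrt(R0))                                 # R_0^{-1/2}
  R_white <- lapply(Rs, function(R) W %*% R %*% t(W))      # whitened R_tau
  arr <- array(unlist(R_white), dim = c(nrow(R0), ncol(R0), length(Rs)))
  jd <- JADE::rjd(arr)                                     # orthogonal V, arr[,,k] ~ V D_k V'
  list(Omega0  = mat_sqrt(R0) %*% jd$V,                    # unwhitening: Omega0 = R_0^{1/2} V
       Lambdas = lapply(R_white, function(R) crossprod(jd$V, R) %*% jd$V))
}
```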

The whitening requires \(\boldsymbol{R}_0\) to be positive definite; mathematically this holds, since \(\boldsymbol R_0\) represents the unobservable covariance matrix of the true signal. However, \(\boldsymbol R_0\) is only available through the autocovariance decomposition result (4.5), obtained by the linear estimator from equation (4.3), and this estimator is not guaranteed to be positive semi-definite because of residual and fitting error. In practice, in such a case, the nearest positive definite matrix (nearPD) algorithm of Bates and Maechler (2018) is used to replace the problematic \(\boldsymbol R_0\) with its nearest positive definite neighbor, where distance is measured in the Frobenius norm. Methods for correcting non-positive semi-definite matrices were proposed by Knol and ten Berge (1989) and further developed by Cheng and Higham (1998). It should also be noted that, unlike in SOBI, where whitening can be carried out directly on the observed data, similar whitening of the observations is not possible for LTV-SOBI, since the covariance matrix used in whitening is only estimated after the autocovariance decomposition.
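When the estimated \(\boldsymbol R_0\) fails to be positive definite, a minimal repair step using `Matrix::nearPD` could look like this; the wrapper name `fix_R0` is illustrative.

```r
## Sketch: replace a non-positive-definite R_0 estimate by its nearest positive
## definite neighbour (Frobenius norm) before whitening; requires the Matrix package.
fix_R0 <- function(R0) {
  R0 <- (R0 + t(R0)) / 2                                  # enforce symmetry first
  if (min(eigen(R0, symmetric = TRUE, only.values = TRUE)$values) <= 0)
    R0 <- as.matrix(Matrix::nearPD(R0)$mat)
  R0
}
```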

After the above steps, the following matrices are fully estimated:

\[\begin{equation} \begin{cases} \widehat{\boldsymbol\Omega}_0 \\ \widehat{\boldsymbol\Lambda}_{\tau_1},\ \widehat{\boldsymbol\Lambda}_{\tau_2},\dots, \widehat{\boldsymbol\Lambda}_{\tau_l} \end{cases}\ . \tag{4.9} \end{equation}\]

4.3 Finding \(\boldsymbol{\mathcal E}\) through \(\widehat{\boldsymbol\beta}_{2,\tau}\) and \(\widehat {\boldsymbol \Omega}_0\)

Unlike in ordinary SOBI, \(\widehat{ \boldsymbol\Omega}_0\) alone does not identify the TV-SOS model, and thus the LTV-SOBI procedure continues with the outcomes of the previous steps, (4.9) and (4.5). Observing that \(\widehat {\boldsymbol\beta}_{2, \tau} \approx \boldsymbol{\mathcal E} \widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_\tau \widehat{ \boldsymbol\Omega}_0' + \widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_\tau \widehat{ \boldsymbol\Omega}_0' \boldsymbol{\mathcal E}'\), and using the commutation matrix \(\boldsymbol K^{(m,n)}\), which satisfies \(\boldsymbol K^{(m,n)} \text{vec}( \boldsymbol A) = \text{vec}(\boldsymbol A')\) for any \(m\times n\) matrix \(\boldsymbol A\), vectorization leads to

\[\begin{equation} \begin{aligned} \text{vec}( \widehat {\boldsymbol\beta}_{2, \tau}) &= \text{vec}(\boldsymbol{\mathcal E} \widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_\tau \widehat{ \boldsymbol\Omega}_0')+ \text{vec}(\widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_\tau \widehat{ \boldsymbol\Omega}_0' \boldsymbol{\mathcal E}') \\ &= \text{vec}( \boldsymbol I \boldsymbol{\mathcal E} (\widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_\tau \widehat{ \boldsymbol\Omega}_0'))+ \text{vec}((\widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_\tau \widehat{ \boldsymbol\Omega}_0') \boldsymbol{\mathcal E}' \boldsymbol I) \\ &= \bigg((\widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_\tau \widehat{ \boldsymbol\Omega}_0')' \otimes \boldsymbol I \bigg) \text{vec}( \boldsymbol{\mathcal E}) + \bigg( \boldsymbol I' \otimes (\widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_\tau \widehat{ \boldsymbol\Omega}_0') \bigg) \text{vec}( \boldsymbol{\mathcal E}') \\ &= \bigg((\widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_\tau \widehat{ \boldsymbol\Omega}_0') \otimes \boldsymbol I \bigg) \text{vec}( \boldsymbol{\mathcal E}) + \bigg( \boldsymbol I \otimes (\widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_\tau \widehat{ \boldsymbol\Omega}_0') \bigg) \boldsymbol K ^{(p,p)}\text{vec}( \boldsymbol{\mathcal E}) \\ &= \bigg((\widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_\tau \widehat{ \boldsymbol\Omega}_0') \otimes \boldsymbol I + (\boldsymbol I \otimes (\widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_\tau \widehat{ \boldsymbol\Omega}_0') ) \boldsymbol K ^{(p,p)}\bigg ) \text{vec}( \boldsymbol{\mathcal E}) \end{aligned}\ . \tag{4.10} \end{equation}\]

Stacking (row-binding) over all \(\tau\in L\) yields an equation in which \(\boldsymbol{\mathcal E}\) is the only unknown, that is,

\[\begin{equation} \begin{bmatrix} \text{vec}( \widehat {\boldsymbol\beta}_{2, \tau_1}) \\ \text{vec}( \widehat {\boldsymbol\beta}_{2, \tau_2}) \\ \vdots \\ \text{vec}( \widehat {\boldsymbol\beta}_{2, \tau_l}) \end{bmatrix} = \begin{bmatrix} (\widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_{\tau_1} \widehat{ \boldsymbol\Omega}_0') \otimes \boldsymbol I + (\boldsymbol I \otimes (\widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_{\tau_1} \widehat{ \boldsymbol\Omega}_0') ) \boldsymbol K ^{(p,p)} \\ (\widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_{\tau_2} \widehat{ \boldsymbol\Omega}_0') \otimes \boldsymbol I + (\boldsymbol I \otimes (\widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_{\tau_2} \widehat{ \boldsymbol\Omega}_0') ) \boldsymbol K ^{(p,p)} \\ \vdots \\ (\widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_{\tau_l} \widehat{ \boldsymbol\Omega}_0') \otimes \boldsymbol I + (\boldsymbol I \otimes (\widehat{\boldsymbol\Omega}_0 \widehat{ \boldsymbol\Lambda}_{\tau_l} \widehat{ \boldsymbol\Omega}_0') ) \boldsymbol K ^{(p,p)} \end{bmatrix} \text{vec}( \boldsymbol{\mathcal E}) \ . \end{equation}\]

The solution for \(\text{vec}( \boldsymbol{\mathcal E} )\) is then obtained in the least squares sense, since the stacked coefficient matrix has \(lp^2\) rows but only \(p^2\) columns.
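A sketch of this stacking-and-solving step is given below; `commutation_matrix` is a small helper written out explicitly (not a library function), and `Omega0`, `Lambdas` and `beta2s` are assumed to hold the step 2 estimates and the \(\widehat{\boldsymbol\beta}_{2,\tau}\) matrices for \(\tau\in L\).

```r
## Sketch of step 3: build the stacked system and solve for vec(E) by least squares.
commutation_matrix <- function(p) {        # K^(p,p) with K vec(A) = vec(A')
  K <- matrix(0, p^2, p^2)
  for (i in seq_len(p)) for (j in seq_len(p))
    K[(j - 1) * p + i, (i - 1) * p + j] <- 1
  K
}

estimate_E <- function(Omega0, Lambdas, beta2s) {
  p  <- nrow(Omega0)
  K  <- commutation_matrix(p)
  Ip <- diag(p)
  blocks <- lapply(Lambdas, function(L) {
    M <- Omega0 %*% L %*% t(Omega0)                        # Omega0 Lambda_tau Omega0'
    kronecker(M, Ip) + kronecker(Ip, M) %*% K              # coefficient block of (4.10)
  })
  A <- do.call(rbind, blocks)                              # stacked l p^2 x p^2 matrix
  b <- unlist(lapply(beta2s, as.vector))                   # stacked vec(beta_2,tau)
  matrix(qr.solve(A, b), p, p)                             # least squares solution, unvectorized
}
```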

4.4 Signal Restoration

The three steps above lead to closed-form estimates of the parameters \(\boldsymbol \Omega_0\) and \(\boldsymbol{\mathcal E}\), and the signals can be restored simply by inverting the time-varying mixing. Yet, unlike in SOBI and most other BSS methodologies, the source signals cannot be restored by a single matrix multiplication: as the time index \(t\) persists in equation (4.11), the restoration has to be carried out one time point at a time. Fortunately, the restoration itself is straightforward and computationally light,

\[\begin{equation} \widehat{\boldsymbol z}_t = \bigg( (\boldsymbol I + t\, \widehat{\boldsymbol{\mathcal E}}) \widehat{\boldsymbol\Omega}_0 \bigg)^{-1} \boldsymbol x_t \ . \tag{4.11} \end{equation}\]
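In R, the restoration in (4.11) is a simple loop over time; the function name below is illustrative.

```r
## Sketch: restore the sources by inverting the time-varying mixing at each time point.
restore_sources <- function(X, Omega0, E) {
  p <- ncol(X)
  Z <- matrix(NA_real_, nrow(X), p)
  for (t in seq_len(nrow(X)))
    Z[t, ] <- solve((diag(p) + t * E) %*% Omega0, X[t, ])  # ((I + t E) Omega0)^{-1} x_t
  Z
}
```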

4.5 Minor Alternatives for LTV-SOBI

Sections 4.1, 4.2 and 4.3 compose the three steps of the LTV-SOBI algorithm, and after signal restoration the TV-SOS problem is fully identified. Note that in the decomposition of the autocovariance matrices (step 1, Section 4.1), the estimate \(\widehat{ \boldsymbol\beta}_{3,\tau}\) is never used in the later steps of the proposed LTV-SOBI algorithm (although it affects the estimation of the other components). To remove this redundancy, one simple option is to avoid estimating it altogether by replacing (4.4) with the approximate equation below; this alternative is referred to as the linear-form LTV-SOBI and is implemented in the R function ltvsobi via the optional argument quadratic = FALSE. The names quadratic and linear form refer to the highest power of \(t\) retained in the design matrix. The approximate equation is

\[\begin{equation} {\begin{bmatrix} \text{vec}(\boldsymbol x_1 \boldsymbol x_{1+\tau}') \\ \text{vec}(\boldsymbol x_2 \boldsymbol x'_{2+\tau}) \\ \vdots \\ \text{vec}(\boldsymbol x_{T-\tau} \boldsymbol x_{T}')\end{bmatrix}} \approx {\begin{bmatrix} 1 & 1 & 1(1+\tau) \\ 1 & 2 & 2(2+\tau) \\ \vdots &\vdots &\vdots \\ 1 & T-\tau & (T-\tau)T \end{bmatrix} \otimes \boldsymbol I_{p^2}} \ \ {\begin{bmatrix} \text{vec}(\boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' + \tau {\boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' \, \boldsymbol{\mathcal E}'}) \\ \text{vec}({\boldsymbol{\mathcal E}\, \boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' + \boldsymbol\Omega_0\boldsymbol\Lambda_\tau \boldsymbol\Omega_0' \, \boldsymbol{\mathcal E}'}) \end{bmatrix}}\ . \end{equation}\]
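In terms of the sketch given for step 1, the linear-form variant simply drops the quadratic column of the design matrix; a hypothetical counterpart of `decompose_autocov` is shown below.

```r
## Sketch of the linear-form variant (quadratic = FALSE): the column t(t + tau) is
## dropped from H_tau, so only the beta_1 and beta_2 terms are fitted.
decompose_autocov_linear <- function(X, tau) {
  Tt <- nrow(X); p <- ncol(X); n <- Tt - tau
  S <- t(sapply(seq_len(n), function(t) as.vector(tcrossprod(X[t, ], X[t + tau, ]))))
  H <- cbind(1, seq_len(n))                       # highest power of t is now one
  B <- solve(crossprod(H), crossprod(H, S))
  list(beta1 = matrix(B[1, ], p, p), beta2 = matrix(B[2, ], p, p))
}
```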

Another option within LTV-SOBI is to prioritize \(\widehat{ \boldsymbol\beta}_{3,\tau}\); this is the LTV-SOBI-alt algorithm. Writing \(\widehat {\boldsymbol\beta}_{3,\tau} ={ (\boldsymbol{\mathcal E} \boldsymbol\Omega_0 ) \boldsymbol\Lambda_\tau ( \boldsymbol{\mathcal E \Omega}_0 )'}\), notice that \(\boldsymbol{\mathcal E} \boldsymbol\Omega_0\) can be estimated by approximate joint diagonalization following the scheme in (4.8). The subsequent step in this alternative is to recover \(\boldsymbol\Omega_0\) and \(\boldsymbol{\mathcal E}\) separately. The solution is based on the same idea as (4.10), but with a different known term; the expression is

\[\begin{equation} \begin{aligned} \text{vec}(\widehat{\boldsymbol\beta}_{2,\tau}) &= \text{vec}( (\widehat{\boldsymbol{\mathcal E}\, \boldsymbol\Omega_0}\ \widehat{\boldsymbol\Lambda_\tau}) \boldsymbol{\Omega_0}') + \text{vec}( \boldsymbol{\Omega_0} (\widehat{\boldsymbol{\mathcal E}\, \boldsymbol\Omega_0}\ \widehat{\boldsymbol\Lambda_\tau})') \\ &= \text{vec}( (\widehat{\boldsymbol{\mathcal E}\, \boldsymbol\Omega_0}\ \widehat{\boldsymbol\Lambda_\tau}) \boldsymbol{\Omega_0' I'}) + \text{vec}( \boldsymbol{I \Omega_0} (\widehat{\boldsymbol{\mathcal E}\, \boldsymbol\Omega_0}\ \widehat{\boldsymbol\Lambda_\tau})') \\ &= (\boldsymbol I \otimes (\widehat{\boldsymbol{\mathcal E}\, \boldsymbol\Omega_0}\ \widehat{\boldsymbol\Lambda_\tau})) \text{vec}( \boldsymbol \Omega_0') + ((\widehat{\boldsymbol{\mathcal E}\, \boldsymbol\Omega_0}\ \widehat{\boldsymbol\Lambda_\tau}) \otimes \boldsymbol I) \text{vec}( \boldsymbol \Omega_0) \\ &= (\boldsymbol I \otimes (\widehat{\boldsymbol{\mathcal E}\, \boldsymbol\Omega_0}\ \widehat{\boldsymbol\Lambda_\tau})) \boldsymbol K ^{(p,p)}\text{vec}( \boldsymbol \Omega_0) + ((\widehat{\boldsymbol{\mathcal E}\, \boldsymbol\Omega_0}\ \widehat{\boldsymbol\Lambda_\tau}) \otimes \boldsymbol I) \text{vec}( \boldsymbol \Omega_0) \\ &= \bigg( (\boldsymbol I \otimes (\widehat{\boldsymbol{\mathcal E}\, \boldsymbol\Omega_0}\ \widehat{\boldsymbol\Lambda_\tau})) \boldsymbol K ^{(p,p)} + (\widehat{\boldsymbol{\mathcal E}\, \boldsymbol\Omega_0}\ \widehat{\boldsymbol\Lambda_\tau}) \otimes \boldsymbol I \bigg) \text{vec}( \boldsymbol \Omega_0) \end{aligned} \ . \tag{4.12} \end{equation}\]

By stacking (4.12) over \(\tau\in L\), \(\boldsymbol \Omega_0\) can be estimated in the least squares sense, and \(\widehat{\boldsymbol{\mathcal E}}\) is then obtained as \(\widehat{\boldsymbol{\mathcal E}} = \widehat{\boldsymbol{\mathcal E \Omega}_0}\, \widehat{ \boldsymbol\Omega}_0 ^{-1}\).
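A sketch of the LTV-SOBI-alt route is given below; it reuses the hypothetical helpers `mat_sqrt` and `commutation_matrix` from the earlier sketches, assumes `beta3s` contains \(\widehat{\boldsymbol\beta}_{3,\tau}\) for \(\tau\in\{0\}\cup L\) (lag 0 first) and `beta2s` contains \(\widehat{\boldsymbol\beta}_{2,\tau}\) for \(\tau\in L\), and uses \(\widehat{\boldsymbol\beta}_{3,0}\) for the whitening in place of \(\boldsymbol R_0\).

```r
## Sketch of LTV-SOBI-alt: joint diagonalization of the beta_3 matrices gives E Omega0,
## then Omega0 is recovered from the stacked system (4.12) and finally E.
ltvsobi_alt <- function(beta3s, beta2s) {
  p   <- nrow(beta3s[[1]])
  B30 <- beta3s[[1]]                                       # beta_3 at lag 0, used for whitening
  W   <- solve(mat_sqrt(B30))
  arr <- array(unlist(lapply(beta3s[-1], function(B) W %*% B %*% t(W))),
               dim = c(p, p, length(beta3s) - 1L))
  jd      <- JADE::rjd(arr)
  EOmega0 <- mat_sqrt(B30) %*% jd$V                        # estimate of E Omega0
  Lambdas <- lapply(seq_len(dim(arr)[3]),
                    function(k) crossprod(jd$V, arr[, , k]) %*% jd$V)
  K <- commutation_matrix(p); Ip <- diag(p)
  blocks <- lapply(Lambdas, function(L) {
    M <- EOmega0 %*% L                                     # (E Omega0) Lambda_tau
    kronecker(Ip, M) %*% K + kronecker(M, Ip)              # coefficient block of (4.12)
  })
  Omega0 <- matrix(qr.solve(do.call(rbind, blocks),
                            unlist(lapply(beta2s, as.vector))), p, p)
  list(Omega0 = Omega0, E = EOmega0 %*% solve(Omega0))
}
```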

4.6 Summary of Algorithms

The preceding sections presented several statistical approaches to the linearly time-varying blind source separation problem using sample autocovariances. Although the algorithms follow analogous flows, they differ enough to almost surely produce different outcomes. Figure 4.1 visualizes the different approaches as a flow chart. The original algorithm proposed by Yeredor (2003) is labeled “Y-TVSOBI” for reference.

Figure 4.1: Comparison of Y-TVSOBI, LTV-SOBI and its alternative, briefly showing the different paths by which signals are restored from the observations.

References

Bates, D., & Maechler, M. (2018). Matrix: Sparse and dense matrix classes and methods. https://CRAN.R-project.org/package=Matrix

Belouchrani, A., Abed-Meraim, K., Cardoso, J.-F., & Moulines, E. (1997). A blind source separation technique using second-order statistics. IEEE Transactions on Signal Processing, 45(2), 434–444.

Cheng, S. H., & Higham, N. J. (1998). A modified Cholesky algorithm based on a symmetric indefinite factorization. SIAM Journal on Matrix Analysis and Applications, 19(4), 1097–1110.

Clarkson, D. B., & Jennrich, R. I. (1988). Quartic rotation criteria and algorithms. Psychometrika, 53(2), 251–259.

Knol, D. L., & ten Berge, J. M. (1989). Least-squares approximation of an improper correlation matrix by a proper one. Psychometrika, 54(1), 53–61.

Li, X.-L., & Zhang, X.-D. (2007). Nonorthogonal joint diagonalization free of degenerate solution. IEEE Transactions on Signal Processing, 55(5), 1803–1814.

Miettinen, J., Illner, K., Nordhausen, K., Oja, H., Taskinen, S., & Theis, F. J. (2016). Separation of uncorrelated stationary time series using autocovariance matrices. Journal of Time Series Analysis, 37(3), 337–354.

Miettinen, J., Nordhausen, K., & Taskinen, S. (2017). Blind source separation based on joint diagonalization in R: The packages JADE and BSSasymp. Journal of Statistical Software, 76.

Myers, R. H. (1990). Classical and modern regression with applications (2nd ed.). Duxbury Press, Belmont, CA.

Yeredor, A. (2002). Non-orthogonal joint diagonalization in the least-squares sense with application in blind source separation. IEEE Transactions on Signal Processing, 50(7), 1545–1553.

Yeredor, A. (2003). TV-SOBI: An expansion of SOBI for linearly time-varying mixtures. Proceedings of the 4th International Symposium on Independent Component Analysis and Blind Source Separation (ICA 2003), Nara, Japan.