1 Introduction
Imagine a small musical band consisting of a piano, a bass, and drums playing on stage while three microphones record its performance. Rather than each microphone recording one specific instrument, each microphone can be expected to capture a slightly different mixture of the original sounds from the trio. In real-world situations there is a diverse need for extracting structured information from observable mixtures of unknown signals. For example, a recording of speech may contain, in addition to the speech itself, external noise from nearby road traffic, murmuring among the audience, and constant electronic interference.
In most cases, the mixing mechanism is unknown or too costly to measure. Several statistical methods have therefore been developed that try to recover the source signals from the observed ones based on statistical assumptions such as independence (Hyvärinen & Oja, 2000). Such methods are called Blind Source Separation (BSS; Jutten & Herault, 1991) methods. Furthermore, the mixing mechanism is not necessarily static; during a recording, the microphones may slowly move towards or away from the artists due to relative movement (dancing, walking, etc.). Figure 1.1 provides a simple illustration of such a time-varying BSS problem.
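To make this setting concrete, the following minimal R sketch simulates a toy version of such a linearly time-varying mixture, assuming the model \(\boldsymbol x_t = (\boldsymbol I + t \boldsymbol{\mathcal E}) \boldsymbol\Omega \boldsymbol z_t\) introduced formally in the notation table below; the choice of source processes, the object names (`p`, `TT`, `Omega`, `Eps`, `z`, `x`), and the parameter values are illustrative assumptions only, not part of the proposed method or its implementation.

```r
## Minimal sketch (not the final implementation): simulate a linearly
## time-varying mixture x_t = (I + t*E) %*% Omega %*% z_t.
## All names and values here are illustrative only.
set.seed(1)
p  <- 3        # number of sources / sensors
TT <- 1000     # length of the time series

## unobservable sources: three independent AR(1) processes
z <- sapply(c(0.9, 0.5, -0.7), function(phi)
  as.numeric(arima.sim(list(ar = phi), n = TT)))

Omega <- matrix(rnorm(p * p), p, p)             # mixing matrix at t = 0
Eps   <- matrix(rnorm(p * p, sd = 1e-4), p, p)  # small time-varying factor

## observed signals: each row of x is one time point
x <- t(sapply(seq_len(TT), function(t)
  (diag(p) + t * Eps) %*% Omega %*% z[t, ]))
```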
BSS methods form a set of unsupervised machine learning algorithms that take a single data matrix as input. The characteristics of the output usually cannot be accurately foreseen in advance due to the lack of other relevant evidence, and such algorithms mostly serve exploratory purposes (Hyvärinen, 2013). Compared with supervised learning methods, such as regression and classification trees, unsupervised ones are more challenging and tend to be more subjective in the absence of a clear goal. Nevertheless, BSS and other exploratory data analysis methods are gaining importance, especially in the online marketing and healthcare industries (James et al., 2013). Modern information technology makes massive quantities of data available, but extracting structural insights from them can be an enormous challenge. The “blind” approaches endeavor to provide a unique perspective on the data with rather limited human intervention, provided that sufficient computational resources are available.
This thesis extends the static blind source separation problem to a linearly time-varying one (Yeredor, 2003) using relatively modern tools. A fully functional implementation and associated utilities in R are also produced. The thesis is organized as follows. Section 2 formulates the BSS problem and reviews established solutions, and Section 3 elaborates on the linearly time-varying structure of second-order source separation. Section 4 presents the new algorithms in detail, followed by Section 5, which discusses performance measures and provides simulation studies. Finally, the thesis concludes with a discussion of potential extensions and performance-related topics in Section 6.
Regarding notation, vectors and matrices are always set in bold symbols, while a lower-case letter stands for a real-valued number. The most commonly used symbols are summarized below.
Table 1: Notation
Symbol | Meaning | Note |
---|---|---|
\(\boldsymbol I\) | identity matrix | dimensions compatible with the context |
\(\boldsymbol z\) | source signals | \(p\)-vector; a realization of (unobservable) stochastic process \((Z_t)\) |
\(\boldsymbol x\) | observed signals | \(p\)-vector; an observable stochastic process \((X_t)\) |
\(\boldsymbol\mu\) | location parameter | \(p\)-vector |
\(\boldsymbol\Omega_t\) | mixing matrix at given time point \(t\) | \(p\times p\) matrix. \(\boldsymbol\Omega_t = ( \boldsymbol I+t \boldsymbol{\mathcal E}) \boldsymbol\Omega\) |
\(\boldsymbol\Gamma_t\) | unmixing matrix at \(t\) | \(\boldsymbol\Gamma_t \overset{\text{def}}= \boldsymbol\Omega^{-1}_t\) |
\(\boldsymbol\Omega\) | mixing matrix at \(t=0\) | \(\boldsymbol\Omega \overset{\text{def}}= \boldsymbol\Omega_0\) |
\(\boldsymbol{\mathcal E}\) | time-varying mixing factor | \(p\times p\) matrix |
\(\boldsymbol\Lambda_{\tau}\) | autocovariance given stationarity | \(p\times p\) matrix that depends only on the lag \(\tau\) and is assumed to be diagonal |
\(\boldsymbol W\) | affine transformation matrix used in joint diagonalization | not to be confused with \(\boldsymbol\Gamma\) |
\(\boldsymbol R_{\tau} = \boldsymbol{\Omega \Lambda}_{\tau} \boldsymbol\Omega'\) | partially mixed autocovariance | \(p\times p\), a short-hand notation |
\(\boldsymbol K ^{(p,p)}\) | commutation matrix | \(p^2\times p^2\) |
\(t=0,1,2,\dots,T\) | index of time | used in subscripts; \(t=1\) and \(t=T\) are the first and last observations in the time series, respectively |
\(\tau \in \{0\}\cup L\) | pre-selected time lag | an integer; 0 is included for convenience |
\(L = \{\tau_1,\tau_2,\dots,\tau_l\}\) | set of pre-selected lags | \(l\)-item set of positive integers |
\(\mathcal C\) | set of permutation matrices | all real-valued matrices that contain exactly one non-zero element in each row and each column |
\(\boldsymbol C\) | a permutation matrix | \(\boldsymbol C \in \mathcal C\) |
Operator \(\boldsymbol{A}'\) | matrix transpose of \(\boldsymbol A\) | |
Subscript \(\boldsymbol{A}_t\) | a matrix or the \(t\)-th column vector | represents \(\boldsymbol A\) at time \(t\) |
Function \(\text{vec}( \boldsymbol A)\) | vectorization (by-column) of \(\boldsymbol A\) | |
Operator \(\otimes\) | Kronecker product | |
Index \(\boldsymbol{A}[i,j]\) | value at the \(i\)-th row, \(j\)-th column of matrix \(\boldsymbol{A}\) | |
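As a small illustration of how the vectorization operator, the Kronecker product, and the commutation matrix from the table above interact, the following R sketch numerically checks the standard identities \(\text{vec}(\boldsymbol{ABC}) = (\boldsymbol C' \otimes \boldsymbol A)\,\text{vec}(\boldsymbol B)\) and \(\boldsymbol K^{(p,p)}\text{vec}(\boldsymbol A) = \text{vec}(\boldsymbol A')\); the matrices involved are arbitrary examples, not objects defined in this thesis.

```r
## Minimal sketch: numerical check of two standard identities involving the
## vec operator, the Kronecker product, and the commutation matrix K^(p,p).
## The matrices below are arbitrary examples.
p <- 3
A <- matrix(rnorm(p^2), p, p)
B <- matrix(rnorm(p^2), p, p)
C <- matrix(rnorm(p^2), p, p)

## vec(A B C) = (C' %x% A) vec(B); c() flattens a matrix column-wise (vec)
all.equal(c(A %*% B %*% C), as.numeric((t(C) %x% A) %*% c(B)))

## commutation matrix K^(p,p): K vec(A) = vec(A')
K <- matrix(0, p^2, p^2)
for (i in 1:p) for (j in 1:p)
  K[(i - 1) * p + j, (j - 1) * p + i] <- 1
all.equal(as.numeric(K %*% c(A)), c(t(A)))
```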
References
Hyvärinen, A. (2013). Independent component analysis: Recent advances. Philosophical Transactions of the Royal Society A, 371(1984), 20110534.
Hyvärinen, A., & Oja, E. (2000). Independent component analysis: Algorithms and applications. Neural Networks, 13(4-5), 411–430.
James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An introduction to statistical learning (Vol. 112). Springer.
Jutten, C., & Herault, J. (1991). Blind separation of sources, Part I: An adaptive algorithm based on neuromimetic architecture. Signal Processing, 24(1), 1–10.
Yeredor, A. (2003). TV-SOBI: An expansion of SOBI for linearly time-varying mixtures. Proc. 4th International Symposium on Independent Component Analysis and Blind Source Separation (ICA’03), Nara, Japan.