

Robust Estimation

The behaviour of an estimator confronted with severely corrupted data can be characterized by its breakdown point \( \beta \). This is the smallest fraction of outliers, i.e. of data not obeying the assumed noise model, that can cause the estimator to produce arbitrarily bad results.

As a simple example, consider \( n \) measurements \( x_{i}=s+\eta _{i} \) of a signal \( s \) corrupted by additive noise \( \eta _{i} \). We further assume the noise to be Gaussian, i.e., \( P(\eta )\sim \exp (-\eta ^{2}/2\sigma ^{2}) \). The maximum likelihood estimate \( s^{*} \) of the signal is then given by a least-squares fit, which in our example yields the mean of the measurements, \( s^{*}=1/n\, \sum _{i}x_{i} \), as the estimation formula.
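For reference, the step from the Gaussian noise model to the mean can be sketched in two lines (standard material, nothing specific to the present model): the likelihood of the measurements is

\[
P(x_{1},\dots ,x_{n}\mid s)\;\propto \;\prod _{i=1}^{n}\exp \!\left( -\frac{(x_{i}-s)^{2}}{2\sigma ^{2}}\right) ,
\]

so maximizing it is equivalent to minimizing \( \sum _{i}(x_{i}-s)^{2} \); setting the derivative with respect to \( s \) to zero gives \( \sum _{i}(x_{i}-s^{*})=0 \), i.e. \( s^{*}=1/n\, \sum _{i}x_{i} \).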

However, the widely used assumption of signals corrupted by additive Gaussian noise is questionable. Since the Gaussian model assigns extremely low probabilities to large noise values, such values, if they do occur, distort the estimate severely. Indeed, in our example, a single large deviation in one of the measurements, say \( x_{k} \), will cause the mean \( s^{*} \) to deviate arbitrarily far from the true value. It follows that this estimator has a breakdown point of \( \beta =1/n \) and asymptotically, i.e. for \( n\rightarrow \infty \), a breakdown point of 0. This is a property common to all least-squares-based estimators [37].

There exist other classes of estimators, called robust estimators, which can tolerate a non-zero percentage of outliers. They are typically non-linear estimation schemes and therefore hard to implement. A classical example of a robust estimator is the median of \( n \) data points, which is insensitive to a few large outliers in its set of measurements. In fact, this estimator has a breakdown point of \( 0.5 \), i.e., as much as 50% of the data can be corrupted before this estimator fails.
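A minimal numerical sketch (not part of the original simulations; all numbers are arbitrary) makes the two breakdown points tangible: a single gross outlier ruins the mean, while the median survives until roughly half of the data are corrupted.

import numpy as np

rng = np.random.default_rng(0)

s_true = 1.0                               # true signal value (arbitrary)
n = 100
x = s_true + rng.normal(0.0, 0.1, size=n)  # measurements corrupted by Gaussian noise

x_one_bad = x.copy()
x_one_bad[0] = 1e6                         # a single outlier, a fraction 1/n of the data
print(np.mean(x), np.mean(x_one_bad))      # the mean is dragged arbitrarily far away
print(np.median(x), np.median(x_one_bad))  # the median barely moves

x_half_bad = x.copy()
x_half_bad[: n // 2 - 1] = 1e6             # corrupt just under 50% of the data
print(np.median(x_half_bad))               # still close to s_true: breakdown point 0.5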

Dynamical coherence detection also realizes a robust estimation scheme. This is because the coherence detection process connects only a small subset of the whole neural population to the coherence cluster, namely those neurons whose estimates agree with each other. All other neurons, representing the outliers, stay asynchronous with the coherence cluster.

Since the neurons coding outliers stay asynchronous, their contribution to the total output current is a current which fluctuates with an amplitude of only about \( \sqrt{n} \), with \( n \) being the number of incoherently firing neurons; per incoherent neuron, only a fraction \( 1/\sqrt{n} \) survives. This part of the total output current has no detectable time structure, i.e. it fluctuates randomly in time. In contrast, the coherence cluster contributes an oscillating current with an amplitude \( \sim n_{\mathcal{C}} \), with \( n_{\mathcal{C}} \) being the number of neurons in the coherence cluster. The output current of the coherence network is thus composed of an oscillatory component, reflecting the correlated dynamics of the coherence cluster, plus a comparatively small random component, caused by the noisy rest of the pool (compare Fig. 3).
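This scaling can be checked with a toy calculation in which each neuron's output is idealized as a unit-amplitude oscillation. This is only a sketch (the actual neuron model is the one of equations (11-13) in the Appendix, not reproduced here): cluster neurons share a common phase, outlier neurons carry random phases and jittered frequencies.

import numpy as np

rng = np.random.default_rng(1)

n_C, n_out = 50, 500          # cluster size and number of incoherent neurons (arbitrary)
t = np.linspace(0.0, 1.0, 1000)
f = 40.0                      # common oscillation frequency (arbitrary)

# Coherent cluster: identical phases, so the n_C contributions add linearly.
coherent = n_C * np.sin(2 * np.pi * f * t)

# Incoherent neurons: random phases and jittered frequencies,
# so their summed current only grows like sqrt(n_out).
phases = rng.uniform(0.0, 2 * np.pi, n_out)
freqs = f * (1.0 + 0.2 * rng.standard_normal(n_out))
incoherent = np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)

print("coherent amplitude   ~", coherent.std() * np.sqrt(2))    # approx. n_C
print("incoherent amplitude ~", incoherent.std() * np.sqrt(2))  # approx. sqrt(n_out)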

It is rather easy for an appropriately designed output layer to lock exclusively onto the oscillatory component, discarding the data from all non-coherent neurons. This behaviour is highly advantageous for the process of percept creation: probably only robust estimators are able to extract stable perceptions from noisy data that are recorded, transmitted and analyzed by unreliable neural hardware.

Fig. 4, A-D, compares the performance of a coherence-based network with that of a neural network which transmits only the average of the incoming signals to subsequent layers. In the experiment, a varying percentage of the neurons in the input layer is allocated to transmit the signal, a noisy sinusoid. The rest of the neural population in the input layer is driven by random signals much larger than the sinusoidal signal variation.

The size of the signal-transmitting group of neurons is varied between 0 and 100% of the total number of neurons in the input layer. The output layer of the network simply responds to the average current coming from the input layer [compare equations (11, 13) in the Appendix]. Thus, with the interlayer synaptic couplings of the input layer switched off, the network computes the mean of the incoming signals, realizing the maximum likelihood estimator for Gaussian noise.

As expected, the network without interlayer links (Fig. 4B) cannot follow the incoming signal correctly if the percentage of neurons carrying the signal is too low compared with the percentage of noisy neurons. In addition, the modulation of the output current gives no hint of the varying quality of the estimate which this maximum likelihood estimator for Gaussian noise delivers to subsequent network layers: the modulation depth stays largely constant (Fig. 4D).
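The failure of the plain average can be mimicked with a small numerical sketch (again not the simulation of Fig. 4; amplitudes and frequencies are arbitrary): a fraction of ``neurons'' carries a noisy sinusoid, the rest carry much larger random signals, and the population mean is correlated with the underlying sinusoid.

import numpy as np

rng = np.random.default_rng(2)

t = np.linspace(0.0, 2.0, 2000)
signal = np.sin(2 * np.pi * 5.0 * t)        # underlying sinusoid (arbitrary frequency)
n = 200                                     # total number of input "neurons"

for frac in (0.1, 0.3, 0.6, 0.9):           # fraction of signal-carrying neurons
    n_sig = int(frac * n)
    carriers = signal + 0.2 * rng.standard_normal((n_sig, t.size))  # noisy copies of the signal
    noise = 5.0 * rng.standard_normal((n - n_sig, t.size))          # much larger random drives
    mean_out = np.concatenate([carriers, noise]).mean(axis=0)       # output of the averaging layer

    corr = np.corrcoef(mean_out, signal)[0, 1]   # how well the average tracks the signal
    print(f"signal fraction {frac:.1f}: correlation with signal {corr:+.2f}")

For small signal fractions the correlation should degrade markedly, roughly mirroring the behaviour of the uncoupled network in Fig. 4B.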

Figure 4: Signal transmission through a coherence layer (A) and a layer of uncoupled neurons (B) in a noisy environment. The increasing modulation depth of the output current of the coherence layer (C) reflects the increasing signal fidelity. In the uncoupled case (D), the modulation depth carries no information. Coherence detection is moderated by the coupling constant in the coherence layer; there exists a broad range of values over which the coherence detection layer can lock onto the signal (E). As more and more neurons are drawn into the coherence cluster, the modulation depth of the output current increases. For details, refer to the main text.
[Figure 4: figs/robust/robust4.eps]

This situation changes drastically if the interlayer synaptic links in the coherence layer are switched on. As can be seen in Fig. 4A, the network now acts as a robust estimator: it locks onto the signal even if the number of outliers exceeds the number of signal-carrying neurons. In addition, the modulation depth of the output current (Fig. 4C) rises monotonically with the number of signal-transmitting neurons, thus indicating increasing confidence in the estimate.

Which neurons can participate in the coherence cluster and which neurons are assigned as ``coding an outlier'' depends on the coherence threshold \( \epsilon \) in equation (1). In the dynamical coherence detection scheme realized with neural oscillators, the coherence threshold \( \epsilon \) is replaced by the interlayer coupling constant \( w_{\mathcal{CC}} \) [compare equations (11, 12) in the Appendix]. Fig. 4E analyzes the network behaviour while this coupling constant is varied; in this simulation, the number of signal-carrying neurons was fixed at 33% of the total neuron population.

With \( w_{\mathcal{CC}}\approx 0 \), the sinusoidal signal variation is not transmitted faithfully through the network, as expected. Around \( w_{\mathcal{CC}}\approx 0.2 \), the system starts to lock onto the signal, and at \( w_{\mathcal{CC}}\approx 0.7 \) the modulation depth rises sharply to a higher value (Fig. 4F), indicating that at this value of the coupling constant all neurons belonging to the coherence cluster are able to synchronize. At values \( w_{\mathcal{CC}}>1.5 \), neurons coding noise begin to be drawn into the coherence cluster. This makes the estimate of the coherence network noisier for these values of \( w_{\mathcal{CC}} \) and indicates the limit of the weak-coupling regime.
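The qualitative dependence on the coupling constant can be reproduced with a generic all-to-all phase-oscillator (Kuramoto-type) toy model. This is a sketch under assumed dynamics, not the oscillator model of equations (11, 12), and the coupling values at which locking sets in will not match those of Fig. 4E/F.

import numpy as np

rng = np.random.default_rng(3)

n, n_sig = 90, 30                 # 30 of 90 oscillators (33%) form the potential coherence cluster
dt, steps = 0.01, 20000
omega = 2 * np.pi * (1.0 + 0.2 * rng.standard_normal(n))  # disordered natural frequencies (noise coding)
omega[:n_sig] = 2 * np.pi * 1.0                           # signal group: one common frequency

for w_cc in (0.0, 0.2, 0.7, 1.5, 3.0):        # sweep of the coupling constant
    theta = rng.uniform(0.0, 2 * np.pi, n)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()         # complex mean field r * exp(i * psi)
        theta = theta + dt * (omega + w_cc * np.abs(z) * np.sin(np.angle(z) - theta))
    r_sig = np.abs(np.exp(1j * theta[:n_sig]).mean())  # phase coherence within the signal group
    r_all = np.abs(np.exp(1j * theta).mean())          # global coherence: are noise oscillators drawn in?
    print(f"w_cc = {w_cc:.1f}: signal-group coherence {r_sig:.2f}, global coherence {r_all:.2f}")

Qualitatively, very small couplings should leave the signal group incoherent, intermediate couplings should synchronize it, and large couplings should also pull the noise-driven oscillators into the cluster, visible as a rise of the global coherence.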

In summary, the introduction of weak synaptic links between neurons allows for the dynamical computation of coherence between neural signals, realizing a robust estimator of incoming sensory stimuli. In this context, it is interesting to note that the experimental data in [38] suggest the use of robust estimation in a task related to stereo vision.

