Textures seem to be analyzed by humans mainly along the dimensions ``direction'' and ``granularity'' [43]. Granularity here refers to textures with no prominent direction but similar spatial variation. In the case of stereo vision, we are interested only in texture directions, since they indicate local image shifts, i.e., distances of objects.

Texture directions can be analyzed quite simply with neural hardware. When a texture with one or several prominent directions is transformed into Fourier space, these directions show up as distinct spikes in the energy spectrum of the signal (Fig. 6).
Spikes in the spectrum can easily be detected by local measurements in Fourier space. For example, one might sample the local energy available in blobs placed around a circle. The blob with the largest energy content will indicate the main texture direction (Fig. 6, bottom left).
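The circle-of-blobs scheme can be sketched numerically. The following is an illustrative NumPy implementation (the function name, blob width, and synthetic grating are my own choices, not from the text): the energy spectrum is sampled with Gaussian blobs placed on a circle at the texture's radial frequency, and the blob with the largest energy content marks the dominant direction.

```python
import numpy as np

def texture_direction(img, radius, n_angles=36, sigma=2.0):
    """Estimate the dominant texture direction by sampling the energy
    spectrum with Gaussian blobs placed on a circle in Fourier space."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    # frequency coordinates (cycles per image) after fftshift
    fy, fx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    energies = [np.sum(np.exp(-((fx - radius * np.cos(a)) ** 2 +
                                (fy - radius * np.sin(a)) ** 2)
                              / (2.0 * sigma ** 2)) * power)
                for a in angles]
    return angles[int(np.argmax(energies))]  # direction of the largest blob energy

# synthetic grating oriented at 30 degrees, 8 cycles across the image
n, theta, cycles = 128, np.deg2rad(30.0), 8
y, x = np.mgrid[0:n, 0:n]
img = np.cos(2.0 * np.pi * cycles / n * (x * np.cos(theta) + y * np.sin(theta)))
est_deg = np.rad2deg(texture_direction(img, radius=cycles))
```

Sampling only the half circle $[0, \pi)$ suffices, since the spectrum of a real image is point-symmetric and each direction produces a spike pair at opposite frequencies.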
Local texture energy can also be measured directly in the original signal space. A Gaussian centered at a nonzero frequency in Fourier space corresponds to a Gabor function in signal space, so masking the signal energy in Fourier space with a Gaussian blob is equivalent to convolving the original signal with two Gabor filters related to each other by a phase shift of $\pi/2$. Filter pairs of this type are called quadrature filters (or Hilbert transform pairs). Squaring and adding together the resulting filter amplitudes gives a local measure of signal energy (filtered signals in Fig. 6).
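A minimal numerical sketch of this quadrature-filter energy measurement (the kernel parameters are illustrative, not from the text): an even (cosine-phase) and an odd (sine-phase) Gabor kernel are applied, and their squared outputs are summed. For a sinusoid at the filters' preferred frequency, the resulting energy is independent of the signal's phase.

```python
import numpy as np

def gabor_pair(sigma=4.0, freq=0.25, n_taps=33):
    """Quadrature pair: even (cosine-phase) and odd (sine-phase) Gabor
    kernels -- the same Gaussian envelope with carriers 90 deg apart."""
    t = np.arange(n_taps) - n_taps // 2
    envelope = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    return (envelope * np.cos(2.0 * np.pi * freq * t),
            envelope * np.sin(2.0 * np.pi * freq * t))

def local_energy(signal, sigma=4.0, freq=0.25, n_taps=33):
    """Square and sum the two quadrature filter outputs."""
    even, odd = gabor_pair(sigma, freq, n_taps)
    e = np.convolve(signal, even, mode='same')
    o = np.convolve(signal, odd, mode='same')
    return e ** 2 + o ** 2

# a sinusoid at the filters' preferred frequency, at two different phases
t = np.arange(256)
energy_a = local_energy(np.cos(2.0 * np.pi * 0.25 * t))
energy_b = local_energy(np.cos(2.0 * np.pi * 0.25 * t + np.pi / 3.0))
```

Away from the signal borders the two energy profiles agree, which is exactly the phase invariance that makes the quadrature construction useful as an energy measure.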
The important point is that these filter kernels and nonlinear point operations map easily onto receptive field profiles and transfer functions of simple and complex cells in the visual cortex. In disparity space, only two slices out of the full space-time texture of the moving camera are available (compare Fig. 5), so the two-dimensional filter kernels used in Fig. 6 reduce to two simple one-dimensional filter profiles, convolving data either from the left or from the right eye. To compute the local energy, the signals coming from the quadrature-paired filters have to be squared and summed. This results in a circuit identical in structure to the one sketched in Fig. 7: units with Gabor-like receptive fields sample data from the left and right retinae, and the squared output of these units is summed by an energy unit, finally giving local texture energy (only this time in disparity space). Interestingly, a circuit structurally equivalent to the one in Fig. 7 was proposed in [42] to account for experimental data measured from complex cell recordings in the visual cortex, with the filtering units representing simple cells and the energy unit representing a complex cell.
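The circuit structure described above can be sketched as follows, assuming one-dimensional Gabor-like receptive fields and a single energy unit (all names and parameters are illustrative, and the sketch omits the disparity-dependent offsets between the eyes' receptive fields):

```python
import numpy as np

def gabor_pair(sigma=4.0, freq=0.25, n_taps=33):
    """Quadrature pair of one-dimensional Gabor receptive fields."""
    t = np.arange(n_taps) - n_taps // 2
    envelope = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    return (envelope * np.cos(2.0 * np.pi * freq * t),
            envelope * np.sin(2.0 * np.pi * freq * t))

def binocular_energy(left, right, even, odd):
    """Each model 'simple cell' sums one linear receptive-field response
    from the left and one from the right patch; the energy unit
    ('complex cell') squares and adds the quadrature-paired outputs."""
    s_even = even @ left + even @ right
    s_odd = odd @ left + odd @ right
    return s_even ** 2 + s_odd ** 2

even, odd = gabor_pair()
t = np.arange(33)
patch = np.cos(2.0 * np.pi * 0.25 * t)
matched = binocular_energy(patch, patch, even, odd)   # identical inputs in both eyes
opposed = binocular_energy(patch, -patch, even, odd)  # inputs shifted by half a period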

Note that the raw local energies calculated by any neural circuit similar to the one in Fig. 7 cannot be used directly for texture analysis, since the estimated energies also depend on local image contrast (compare Fig. 6). To deduce texture direction, one has to perform either a maximum detection around a circle in Fourier space, as already discussed, or some kind of contrast normalization. Adapting these two possibilities to the task of disparity estimation, we recover either the approach of Qian (disparity estimation via maximum detection, in [44]) or that of Adelson & Bergen (normalization, formulated in the context of optical flow estimation, in [41]).
In the network simulation presented here, disparity estimators derived from the original optical flow estimator of Adelson & Bergen [41] are used. This means that the difference between the output of two complex cells (estimating left and right disparity energies) is normalized by the output of a corresponding complex cell (measuring local contrast). For details, see the Appendix.
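The normalization step can be sketched minimally as follows. This is an illustration of the opponent-energy normalization described above, not the Appendix's exact formulation; the function name and the small constant guarding the division are my own additions.

```python
def normalized_disparity_response(e_left, e_right, e_contrast, eps=1e-9):
    """Difference of two complex-cell (disparity) energies, normalized by
    a contrast-measuring complex-cell energy; eps guards against a zero
    denominator and is an illustrative addition."""
    return (e_left - e_right) / (e_contrast + eps)

# energies are squared filter outputs, so doubling image contrast scales
# all three energies by a factor of four -- the normalized response is unchanged
low_contrast = normalized_disparity_response(4.0, 1.0, 5.0)
high_contrast = normalized_disparity_response(16.0, 4.0, 20.0)
```

The contrast invariance is the point of the division: a common gain factor multiplies numerator and denominator alike and cancels, leaving only the disparity-dependent imbalance between the left and right energies.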