## Core processing steps

### NeuralField

The NeuralField step is an implementation of a dynamic neural field, the core building block of dynamic field theory (DFT). This tutorial covers only the parameters of the step and assumes that you are already familiar with dynamic neural fields. If this is not the case, please have a look at some of the scientific publications on dynamic field theory, for example the book "Dynamic Thinking: A Primer on Dynamic Field Theory" by Gregor Schöner, John P. Spencer, and the DFT research group (2015). The parameters of the NeuralField step shape the mathematical equation that is approximated in the background.

The name parameter sets the name of the field in the architecture. It does not have an influence on its function.

The dimensionality parameter sets the number of feature dimensions the field is defined over. The default is a dimensionality of 2; such a field may for instance be used to represent objects on a (two-dimensional) table surface, where the two feature dimensions are the x- and y-coordinate of the surface. While the implementation allows for an arbitrary number of dimensions, we very rarely use fields that are defined over more than three dimensions. This is because with higher-dimensional fields, the required number of neurons quickly becomes unrealistic. Instead, we employ multiple low-dimensional fields that are bound over shared feature dimensions - often physical space.

The sizes parameter does not show up in the mathematical formulation of a field but is an important implementation detail. It determines how densely each feature dimension is sampled, that is, the number of sampling points per dimension.

The time scale parameter determines how quickly the field relaxes into an attractor. In the mathematical equation, it is commonly named tau. Intuitively, the parameter changes how fast the dynamics reacts to change, with larger values leading to a more sluggish behavior.

The resting level parameter corresponds to the variable h in the field equation and determines the level of activation that the field relaxes to without any input. It is set to a negative value to bring the entire field into the "off" state.

The input noise gain parameter sets the strength of random noise that is added at every position of the field to simulate fluctuations in neural firing or noisy sensory input.

The sigmoid parameter set comprises a few parameters that shape the sigmoid function, often called g() or sigma(). The sigmoid function determines the output of the field; it is zero for negative activation and one for positive activation, with a smooth transition around the threshold of zero. You can select among multiple types of sigmoids, set how steep the sigmoid is at its transition around zero (beta parameter), and even change the threshold itself (which we usually do not do).
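As an illustration, a logistic sigmoid of this kind can be sketched in a few lines of Python with NumPy. This is an example only; the parameter names `beta` and `theta` are chosen here for illustration and need not match CEDAR's labels exactly:

```python
import numpy as np

def sigmoid(u, beta=100.0, theta=0.0):
    """Logistic sigmoid: ~0 below the threshold theta, ~1 above it.
    beta controls how steep the transition around theta is."""
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

# Far below the threshold the output is near 0, far above it near 1.
low, high = sigmoid(-1.0), sigmoid(1.0)
```

With a large beta, the sigmoid approaches a hard step function; smaller values smooth the transition.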

All remaining parameters determine the lateral interaction within the field. Changes in these parameters only have an effect when there is activation above the threshold in the field.

The global inhibition parameter determines the strength of inhibition that acts upon the entire field when activation rises above threshold (above zero) at some position. This parameter may be used to implement a field that is selective. You have to use a fairly strong global inhibition in conjunction with strong local self-excitation (see next parameter). If you do not want the field to be selective, the global inhibition can be set to zero.

The lateral kernels parameter set shapes the interaction kernel of the field. You can build a kernel from an arbitrary number of kernel modes. Kernel modes are simply functions, usually Gaussians, which are summed to form the final shape of the kernel. For instance, adding a sharp and strong positive (excitatory) Gaussian function to a broad negative (inhibitory) Gaussian function gives you a Mexican hat type kernel (local excitation and mid-range inhibition).
The drop-down menu next to the lateral kernels label enables you to select different types of modes that you can add to your kernel. The only sensible option here is "Gauss", do not select "Box". By clicking on the small plus-button, you can add the mode.
For every (Gaussian) mode, you can set the amplitude (how strong the Gaussian is) and the width, which is determined individually for each dimension by the sigma parameters. In the advanced tab, you can also change the shift of the mode, which enables you to implement traveling peaks. This is something we do not use, but other researchers sometimes employ it. There are other settings that we do not cover here.
Every mode has a small X button in the top right corner, which enables you to delete the mode.
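To illustrate how summing kernel modes works, here is a NumPy sketch of a Mexican hat kernel built from two Gaussian modes. The amplitudes and sigmas are arbitrary example values, not recommended settings:

```python
import numpy as np

def gauss(x, amplitude, sigma):
    """A non-normalized Gaussian mode centered at zero."""
    return amplitude * np.exp(-x ** 2 / (2.0 * sigma ** 2))

x = np.arange(-25, 26)                              # sampling points along one dimension
excitation = gauss(x, amplitude=1.0, sigma=3.0)     # sharp, strong, positive mode
inhibition = gauss(x, amplitude=-0.5, sigma=10.0)   # broad, negative mode
kernel = excitation + inhibition                    # Mexican hat: sum of the two modes
```

The resulting kernel is positive near its center (local excitation) and negative at mid-range distances (inhibition).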

The lateral kernel convolution parameter set enables you to change how the lateral kernel is applied to the field. Internally, the lateral interaction is computed using a convolution between the kernel and the sigmoided output of the field. Using the borderType parameter, you can determine how the convolution deals with the edges of the field. For fields that are defined over cyclic feature dimensions, such as the hue-dimension (color), set this type to cyclic borders. Other parameters in this section are only very rarely used.
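To see how the parameters above enter the field equation, here is a simplified one-dimensional Euler-step sketch in Python with NumPy. This is an illustration only, not CEDAR's implementation; the handling of noise and global inhibition in particular is deliberately simplified:

```python
import numpy as np

def sigmoid(u, beta=100.0):
    """Sigmoid output function (threshold at zero)."""
    return 1.0 / (1.0 + np.exp(-beta * u))

def euler_step(u, s, kernel, h=-5.0, tau=100.0, dt=1.0,
               c_glob=0.0, noise_gain=0.0, rng=None):
    """One Euler step of a 1-D field:
    tau * du/dt = -u + h + s + conv(kernel, g(u)) - c_glob * sum(g(u)) + noise,
    where h is the resting level, tau the time scale, s the external input."""
    output = sigmoid(u)
    lateral = np.convolve(output, kernel, mode="same")  # lateral interaction
    du = -u + h + s + lateral - c_glob * output.sum()   # global inhibition term
    if noise_gain > 0.0:
        rng = rng or np.random.default_rng()
        du = du + noise_gain * rng.standard_normal(u.shape)
    return u + (dt / tau) * du

# Without input, the field relaxes toward the resting level h.
u = np.zeros(50)
for _ in range(2000):
    u = euler_step(u, s=np.zeros(50), kernel=np.zeros(11))
```

Larger values of `tau` make the relaxation more sluggish, and the negative resting level `h` keeps the field in the "off" state without input, as described above.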

### GaussInput

The GaussInput step produces a matrix that contains a sampled Gaussian function (see figure below for a two-dimensional example). The step only has an output and does not have an input. We use the step mostly as an input to dynamic neural fields. This is useful when you create a prototype of an architecture that is not yet connected to real sensory input (e.g., from a camera). In this case, Gaussian blobs can be used to simulate the input an object would give at a certain position, or the input it would produce along a feature dimension. The parameters of the GaussInput step are as follows.

The name, dimensionality, and sizes parameters are analogous to those of the NeuralField step.

The amplitude parameter determines the value of the Gaussian function at its center, i.e., at its maximum displacement from zero. If the amplitude is positive, it determines the maximum of the function; if it is negative, it determines the minimum and flips the entire function into the negative value range.

The centers parameters determine, for every dimension, the position at which the Gaussian function is centered. At that position, the Gaussian function reaches the value determined by the amplitude parameter.

The sigmas parameters determine the width of the Gaussian function, where larger values make the function broader. Please note that the Gaussian function is not normalized and the width and amplitude are thus independent parameters.
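As a one-dimensional illustration of these parameters, a sampled, non-normalized Gaussian can be sketched in NumPy (example values only, not CEDAR defaults):

```python
import numpy as np

def gauss_input(size=50, center=25.0, sigma=3.0, amplitude=1.0):
    """Sampled, non-normalized Gaussian: the value at the center
    equals the amplitude, independent of sigma."""
    x = np.arange(size)
    return amplitude * np.exp(-(x - center) ** 2 / (2.0 * sigma ** 2))

g = gauss_input()
```

Because the Gaussian is not normalized, changing sigma makes the function broader or narrower without affecting its peak value.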

The cyclic parameter determines whether or not the dimensions over which the Gaussian function is defined are cyclic. If you check this box and move the center of a Gaussian close to the edge (near zero or its sampling size), the Gaussian will wrap around to the other edge (see figure below for a one-dimensional example). This is useful for modeling cyclic feature dimensions, like the hue dimension (color).

### StaticGain

The StaticGain step takes an input and multiplies it with a scalar value. When the input is a matrix, it multiplies the matrix element-wise. We use this processing step most commonly to set the connection strength between two fields. The output of a field with the default sigmoidal output function is always going to have values between 0 and 1. When you draw a connection from a field A to a field B, you can put a StaticGain step in between to multiply the output of field A by a scalar value (see figure below). This enables you to control, for instance, whether or not the output of field A will make the activation of field B rise above threshold. If the scalar by which you multiply is positive, the synaptic connection will be excitatory. If the scalar is negative, the connection will be inhibitory.

Please note that you can, of course, also use the StaticGain step in other contexts to multiply an input with a scalar.

Apart from the name parameter, which is analogous to the one of the NeuralField step, the StaticGain step only has the gain factor parameter.
The gain factor parameter determines the scalar value by which any input is multiplied (element-wise).
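In effect, the step computes nothing more than an element-wise product, as this small NumPy sketch illustrates:

```python
import numpy as np

def static_gain(x, gain):
    """Multiply any input element-wise with a scalar gain factor."""
    return gain * np.asarray(x, dtype=float)

# A negative gain turns a field's (0..1) sigmoid output into inhibitory input.
out = static_gain([0.0, 0.5, 1.0], gain=-2.0)
```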

### Projection

The Projection step enables coupling dynamic neural fields (or other steps) of different dimensionalities. It supports expansion couplings, where the source dimensionality is lower than the target dimensionality, and contraction couplings, where the source dimensionality is higher than the target dimensionality. It also supports one-to-one couplings, where source and target dimensionality are the same but the order of the dimensions must be rearranged (which is an implementation detail).

The parameters of the Projection step are as follows:

The name parameter is analogous to that of the NeuralField step.

The dimension mapping parameter determines how the incoming dimensions are treated. When you connect an input to the Projection step, the dimension mapping parameter shows a drop-down box for every dimension of the input, ordered from top to bottom like the dimensions of the input. In every drop-down box, you can choose to drop the dimension, in which case the Projection step reduces the dimensionality by this dimension. Alternatively, you may map the dimension onto any of the output dimensions by selecting the index of the output dimension from the drop-down box. Before doing so, make sure you set the output dimensionality parameter to the correct value.
Once you have set up the dimension mapping parameter in a feasible way, the drop-down boxes will switch their color from red to green.

The output dimensionality parameter sets the number of dimensions the output of the Projection step will have. Unfortunately, the step cannot detect this automatically, so you have to set it by hand, making sure it matches the dimensionality of the step that will ultimately receive input from the Projection step.

The output dimension sizes parameter determines the sampling size for each of the output dimensions. For dimensions that are mapped from the input, the size is determined automatically. For other dimensions, for example when expanding the dimensionality, you will have to set this manually. Make sure that it matches the size of the step that will ultimately receive input from the Projection step.

The compression type parameter determines how dimensions that you select to "drop" are handled. A compression type is a mathematical operation that is applied to all values in the dimensions that are to be dropped. The compression type Sum computes the sum of all values in those dimensions, Average computes their mean, Maximum takes the maximum, and Minimum takes the minimum.

##### Expansion Couplings

When you would like to create a connection from a field A to a field B, where field A has a smaller dimensionality than field B, the missing dimensions have to be "filled in". This is usually done by repeating the dimensions that are present in field A. For example, let's say field A is defined over the color-dimension and field B is defined over the same color-dimension but two additional spatial dimensions. In this case the color-dimension must be repeated (i.e., copied) for every position along both spatial dimensions. In CEDAR, first connect field A to the input of the Projection step. Then set the output dimensionality parameter to the dimensionality of field B (e.g., 3 dimensions). Since the input dimensionality is 1, you only have one drop-down box in the dimension mapping parameter. In this example, you would set it to 2, to map the color dimension of field A to the third dimension of field B (the indices are counted starting at 0). Set the output dimension sizes parameter for the first two output dimensions to the sampling sizes of the dimensions in field B (e.g., 50 for both the horizontal and vertical spatial dimension). You do not have to set the compression type parameter, as it only has an effect in contraction couplings.
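In NumPy terms, the expansion described above amounts to broadcasting the one-dimensional color vector across the two spatial dimensions (an illustration of the operation, not CEDAR's implementation):

```python
import numpy as np

color = np.zeros(20)   # 1-D output of field A (color dimension, 20 samples)
color[5] = 1.0

# Repeat the color values at every position of the two 50x50 spatial
# dimensions of field B (axes 0 and 1); color ends up along axis 2.
expanded = np.broadcast_to(color, (50, 50, 20))
```

Every spatial position of the result carries an identical copy of the color vector.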

##### Contraction Couplings

When you would like to create a connection from a field A to a field B, where field A has a larger dimensionality than field B, the superfluous dimensions have to be removed. This is done by compressing them into the remaining dimensions. For example, let's say field A is defined over two spatial dimensions and a color-dimension and field B is defined only over the two spatial dimensions. In this case the color-dimension must be removed. In CEDAR, first connect field A to the input of the Projection step. Then set the output dimensionality parameter to the dimensionality of field B (e.g., 2 dimensions). Since the input dimensionality is 3, you have three drop-down boxes in the dimension mapping parameter. In this example, you would set the first one to 0, to map the horizontal spatial dimension of field A to the first dimension of field B (the indices are counted starting at 0). You would then set the second drop-down box to 1 to map the vertical spatial dimension of field A to the second dimension of field B. Finally, you would set the third drop-down box to "drop" to remove the color-dimension of field A. You do not have to set the output dimension sizes parameter because those can be determined automatically. By setting the compression type parameter, you can determine how the color-dimension is removed. For instance, by setting it to Average, the Projection step will compute a mean along the values in the color-dimension. It will compute this mean at every position along the two spatial dimensions. The output is a two-dimensional matrix, defined over the two spatial dimensions, that contains all the means of the color-dimension.
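The contraction with compression type Average corresponds to a mean along one axis of the array, as this NumPy sketch illustrates (example values only):

```python
import numpy as np

# 3-D output of field A: two 50x50 spatial dimensions (axes 0 and 1)
# plus a color dimension with 20 samples (axis 2).
field_a = np.random.default_rng(0).random((50, 50, 20))

# "Drop" the color dimension with compression type Average:
# one mean over the color values at every spatial position.
contracted = field_a.mean(axis=2)
```

Replacing `mean` with `sum`, `max`, or `min` corresponds to the compression types Sum, Maximum, and Minimum.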

##### Rearranging dimensions

When you would like to create a connection from a field A to a field B, where field A has the same dimensionality as field B, but the dimensions are arranged in a different order, you can use the Projection step to rearrange the dimensions. In CEDAR, first connect field A to the input of the Projection step. Then set the output dimensionality parameter to the dimensionality of field B (e.g., 2 dimensions). Select how you would like to rearrange the dimensions in the dimension mapping parameter. You do not have to set the output dimension sizes parameter because those can be determined automatically. You also do not have to set the compression type parameter, as it only has an effect in contraction couplings.
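For two-dimensional fields, such a rearrangement is simply a transposition of the matrix, as in this NumPy sketch:

```python
import numpy as np

field_a = np.zeros((30, 40))  # e.g., dimensions ordered as (color, space)
field_a[3, 7] = 1.0

# Swap the two dimensions so the target field receives (space, color).
rearranged = field_a.T
```

For higher dimensionalities, the same idea generalizes to an arbitrary permutation of axes (in NumPy, `np.transpose` with an explicit axis order).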

### Boost

The Boost step outputs an adjustable scalar value and can be switched between this value (on) and zero (off). When used as an input to dynamic neural fields, the step enables the user to interact with a DFT architecture. Switching on a Boost step raises the resting level of the receiving field and, if parameterized correctly, may bring the field into a dynamic regime where it forms a peak. We often use Boost steps in prototype architectures to guide the architecture through different sequences of cognitive processes. In other cases, we use Boost steps to represent input for which we do not (yet) have a complete sensorimotor model. For instance, we use it to replace speech input, where we activate multiple Boost steps, each of which is connected to the representation of a word. Ultimately, we want our architectures to be fully autonomous and replace all Boost inputs with input from real sensors.

The parameters of a Boost step are as follows:

The name parameter is analogous to the one of the NeuralField step.

The strength parameter determines the scalar value that the Boost step outputs if the active parameter is enabled.

The active parameter determines whether the output of the Boost step is the value of the strength parameter (when enabled), or zero (when disabled).

The deactivate on reset parameter determines whether the Boost step is deactivated when a whole architecture is reset. This may be desirable when the Boost input is only activated transiently to mimic an input; it may not be desirable when the Boost step is used as an external parameter, which must always be active.

CEDAR has a special widget that gives control over all Boost steps inside an architecture. It can be opened by clicking on the Boost control icon in the top toolbar. In the exemplary figure below, the Boost control widget contains all six Boost steps. For each Boost input, you can set the strength parameter and activate or deactivate the Boost input.

### Convolution

The Convolution step computes a convolution between a matrix and a kernel. The step has two inputs: the matrix input receives the matrix that is to be convolved and the kernel input receives the kernel that the matrix is to be convolved with. You can use the Convolution step without an external kernel input by selecting a kernel as a parameter (see below). This kernel parameter only has an effect if no kernel input is set.

In CEDAR, we use the Convolution step most commonly to filter the output of a dynamic neural field with a Gaussian, smoothing the output (see figure below). The parameters of the Convolution step are as follows.

The name parameter is analogous to the one of the NeuralField step.

The kernels parameter allows you to add kernels as a parameter, in case there is no kernel input connected to the Convolution step. The parameter works analogously to the lateral kernels parameter of the NeuralField step.

The convolution parameter set contains more specialized implementation details of the Convolution that are only seldomly needed. They are analogous to the lateral kernel convolution parameter set of the NeuralField step.
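The smoothing use case can be illustrated with a plain NumPy convolution (an illustration of the operation, not CEDAR's implementation; the kernel width is an arbitrary example value):

```python
import numpy as np

# A field output with a single active region (values between 0 and 1).
output = np.zeros(50)
output[20:25] = 1.0

# Gaussian smoothing kernel, normalized so the total mass is preserved.
x = np.arange(-10, 11)
kernel = np.exp(-x ** 2 / (2.0 * 2.0 ** 2))
kernel /= kernel.sum()

smoothed = np.convolve(output, kernel, mode="same")
```

The sharp edges of the active region are blurred into smooth flanks, while the overall amount of activation stays the same.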

We also use a Convolution step as an approximation of a steerable neural mapping (Schneegans & Schöner, 2012). A steerable neural mapping is a neural implementation of a coordinate transform. For instance, you may have a dynamic neural field that is defined over retinal space and holds a representation of multiple visible objects. Transforming that representation into an allocentric space, which is invariant against eye and head movements, requires transforming the object representation into a different reference frame (in the easiest case, shifting the representation). Neurally, this can be done with a steerable neural mapping. We can use a convolution to approximate that transformation in an algorithmic but computationally more efficient form. In this example, in CEDAR we would connect the output of the retinal field to the matrix input of the Convolution step. We would then connect a representation of the shift parameter (e.g., the head direction) to the kernel input of the Convolution step. The output of the step would be the object representation, transformed into allocentric space (see figure below).

### Resize

The Resize step takes a matrix as input and resizes it to the size you specify. It can be used to connect fields in which the same dimensions are sampled differently. For instance, two color fields, one of which is sampled with 20 positions and the other with 15 positions, can only be connected in CEDAR if you apply a Resize step in between (see figure below). Alternatively, you can make sure that all fields are set up with the same sampling. There should not be an architectural reason for choosing a certain sampling size over another.

The parameters of the Resize step are as follows:

The name parameter is analogous to the one of the NeuralField step.

The output size parameter determines the size of the output for every dimension of the input.

The interpolation parameter determines what type of interpolation is used to resize the data. The options reflect the underlying OpenCV implementation and are explained in the OpenCV documentation.
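For intuition, resampling with linear interpolation can be sketched in NumPy for the one-dimensional case. This is a simplified stand-in for the OpenCV-based resize that CEDAR uses internally:

```python
import numpy as np

def resize_1d(data, new_size):
    """Linearly interpolate a 1-D array onto a new sampling grid."""
    old = np.linspace(0.0, 1.0, len(data))
    new = np.linspace(0.0, 1.0, new_size)
    return np.interp(new, old, data)

# Resample a 20-sample signal onto 15 positions,
# matching the color-field example above.
coarse = resize_1d(np.linspace(0.0, 1.0, 20), 15)
```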

### Sum

The Sum step adds up all its inputs. You can connect an arbitrary number of inputs to the Sum step as long as they agree in both dimensionality and sizes. The example in the figure below shows a Sum step that adds two two-dimensional Gaussian functions. The plot shows both the individual terms as well as the sum of the terms.

The Sum step only has the name parameter, which is analogous to the one of the NeuralField step. 