Node

A Node represents one of the fundamental building blocks of an NGC system. At each simulated time step, a node computes its output activity values from an internal arrangement of compartments, i.e., sources into which signals from other nodes are deposited.

Node Model

The Node class serves as a root class for the node building block objects of an NGC system/graph. This is a core modeling component of general NGC computational systems. Node sub-classes within ngc-learn inherit from this base class.

class ngclearn.engine.nodes.node.Node(node_type, name, dim)[source]

Base node element (the class from which other node types inherit their basic properties)

Parameters
  • node_type – the string concretely denoting this node’s type

  • name – str name of this node

  • dim – number of neurons this node will contain

wire_to(dest_node, src_comp, dest_comp, cable_kernel=None, mirror_path_kernel=None, name=None, short_name=None)[source]

A wiring function that connects this node to another external node via a cable (or synaptic bundle)

Parameters
  • dest_node – destination node (a Node object) to wire this node to

  • src_comp – name of the compartment inside this node to transmit a signal from (to destination node)

  • dest_comp – name of the compartment inside the destination node to transmit a signal to

  • cable_kernel

    Dict defining how to initialize the cable that will connect this node to the destination node. The expected keys and corresponding value types are specified below:

    ’type’

    type of cable to be created. If “dense” is specified, a DCable (dense cable/bundle/matrix of synapses) will be used to transmit/transform information along this path.

    ’init_kernels’

    a Dict specifying how parameters within the learnable parts of the cable are to be randomly initialized

    ’seed’

    integer seed to deterministically control initialization of synapses in a DCable

    Note

    either cable_kernel or mirror_path_kernel MUST be set to something that is not None

  • mirror_path_kernel

    2-Tuple that allows a currently existing cable to be re-used as a transformation. The value types inside each slot of the tuple are specified below:

    cable_to_reuse (Tuple[0])

    target cable (usually an existing DCable object) to shallow copy and mirror

    mirror_type (Tuple[1])

    how the cable should be mirrored. If “symm_tied” is specified, then the transpose of this cable will be used to transmit information from this node to the destination node; if “anti_symm_tied” is specified, the negative transpose of this cable will be used; and if “tied” is specified, this cable will be used exactly as it was used in its source path.

    Note

    either cable_kernel or mirror_path_kernel MUST be set to something that is not None

  • name

    the string name to be assigned to the generated cable (Default = None)

    Note

    setting this to None will trigger the created cable to auto-name itself
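
For concreteness, a minimal usage sketch of wire_to is given below. It uses two SNode objects (documented later in this section) and the documented “dense” cable type; the particular entry used inside ’init_kernels’ ("A_init") and the hyperparameter values are illustrative assumptions, not requirements of the API, and it is also assumed that wire_to returns the generated cable so that it can be re-used via mirror_path_kernel.

    from ngclearn.engine.nodes.snode import SNode

    # two state nodes to be connected
    a = SNode(name="a", dim=10, beta=0.1, act_fx="relu")
    b = SNode(name="b", dim=10, beta=0.1, act_fx="relu")

    # a dense cable (DCable) carrying a.phi(z) into b's dz_td compartment;
    # "A_init" is an assumed/illustrative init_kernels entry
    dcable_cfg = {"type": "dense",
                  "init_kernels": {"A_init": ("gaussian", 0.025)},
                  "seed": 1234}
    a_to_b = a.wire_to(b, src_comp="phi(z)", dest_comp="dz_td",
                       cable_kernel=dcable_cfg)

    # re-use the same cable in the reverse direction via its transpose
    b.wire_to(a, src_comp="phi(z)", dest_comp="dz_bu",
              mirror_path_kernel=(a_to_b, "symm_tied"))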

inject(data)[source]

Injects an externally provided named value (a vector/matrix) to the desired compartment within this node.

Parameters

data

2-Tuple containing a named external signal to clamp

compartment_name (Tuple[0])

the (str) name of the compartment to clamp this data signal to.

signal (Tuple[1])

the data signal block to clamp to the desired compartment name

clamp(data, is_persistent=True)[source]

Clamps an externally provided named value (a vector/matrix) to the desired compartment within this node.

Parameters
  • data

    2-Tuple containing a named external signal to clamp

    compartment_name (Tuple[0])

    the (str) name of the compartment to clamp this data signal to.

    signal (Tuple[1])

    the data signal block to clamp to the desired compartment name

  • is_persistent – if True, prevents this node from overwriting the clamped data over time (Default = True)
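
A minimal sketch contrasting clamp() with inject() follows; it assumes a previously constructed rate-coded node (e.g., the SNode a from the earlier sketch) and a data matrix whose shape is compatible with the node's dimensionality.

    import numpy as np

    x = np.ones((1, 10), dtype=np.float32)  # a (batch_size x dim) signal

    # clamp: the value stays fixed inside compartment "z" across steps
    a.clamp(("z", x), is_persistent=True)

    # inject: the value is deposited once and may be altered by the
    # node's own dynamics on subsequent calls to step()
    a.inject(("z", x))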

step(injection_table=None, skip_core_calc=False)[source]

Executes this node's internal integration/calculation for one discrete step in time, i.e., runs the simulation of this node for one time step.

Parameters
  • injection_table

  • skip_core_calc – skips the core components of this node’s calculation (Default = False)
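
In a typical simulation, step() is called once per discrete time step after any inputs have been clamped; the short loop below is an illustrative sketch under the same assumptions as the previous examples.

    T = 10  # number of discrete simulation steps
    a.clamp(("z", x), is_persistent=True)
    for t in range(T):
        a.step()  # run this node's internal calculation for one step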

calc_update(update_radius=- 1.0)[source]

Calculates the updates to local internal synaptic parameters related to this specific node given current relevant values (such as node-level precision matrices).

Parameters

update_radius – radius of Gaussian ball to constrain computed update matrices by (i.e., clipping by Frobenius norm)

clear()[source]

Wipes/clears values of each compartment in this node (and sets .is_clamped = False).

extract(comp_name)[source]

Extracts the data signal value that is currently stored inside of a target compartment

Parameters

comp_name – the name of the compartment in this node to extract data from

extract_params()[source]

deep_store_state()[source]

Performs a deep copy of all compartment statistics.

Returns

Dict containing a deep copy of each named compartment of this node
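
Continuing the sketch above, the readout/housekeeping routines can be used as follows; compartment names are those documented for the node type in question.

    phi_z = a.extract("phi(z)")      # current post-activation values
    snapshot = a.deep_store_state()  # Dict: deep copy of every compartment
    a.clear()                        # wipe compartments (is_clamped -> False)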

SNode Model

The SNode class extends from the base Node class, and represents a (rate-coded) state node that follows a certain set of settling dynamics. In conjunction with the corresponding ENode and FNode classes, this serves as the core modeling component of a higher-level NGCGraph class used in simulation.

class ngclearn.engine.nodes.snode.SNode(name, dim, beta=1.0, leak=0.0, zeta=1.0, act_fx='identity', batch_size=1, integrate_kernel=None, prior_kernel=None, threshold_kernel=None, trace_kernel=None, samp_fx='identity')[source]

Implements a (rate-coded) state node that follows NGC settling dynamics according to:

d.z/d.t = -z * leak + dz + prior(z), where dz = dz_td + dz_bu * phi'(z)

where:
  dz - aggregated input signals from other nodes/locations
  leak - controls the strength of the leak variable/decay
  prior(z) - distributional prior placed over z (such as a kurtotic prior)

Note that the above is used to adjust neural activity values via an integrator inside a node. For example, if the standard/default Euler integrator is used, then the neurons inside this node are adjusted per step as follows:

z <- z * zeta + d.z/d.t * beta

where:
  beta - strength of the update to the node state z
  zeta - controls the strength of recurrent carry-over; if set to 0, no carry-over is used (stateless)

Compartments:
  * dz_td - the top-down pressure compartment (deposited signals summed)
  * dz_bu - the bottom-up pressure compartment, potentially weighted by phi'(z) (deposited signals summed)
  * z - the state neural activities
  * phi(z) - the post-activation of the state activities
  * S(z) - the sampled state of phi(z) (Default = identity, i.e., f(phi(z)) = phi(z))
  * mask - a binary mask to be applied to the neural activities

Parameters
  • name – the name/label of this node

  • dim – number of neurons this node will contain/model

  • beta – strength of update to adjust neurons at each simulation step (Default = 1)

  • leak – strength of the leak applied to each neuron (Default = 0)

  • zeta – effect of recurrent/stateful carry-over (Default = 1)

  • act_fx

    activation function – phi(v) – to apply to neural activities

    Note

    if using either “kwta” or “bkwta”, please specify how many winners should win the competition, i.e., use “kwta(N)” or “bkwta(N)” where N is an integer > 0.

  • batch_size – batch-size this node should assume (for use with static graph optimization)

  • integrate_kernel

    Dict defining the neural state integration process type. The expected keys and corresponding value types are specified below:

    ’integrate_type’

    type of integration method to apply to neural activity over time. If “euler” is specified, Euler integration will be used (future ngc-learn versions will support “midpoint”/other methods).

    ’use_dfx’

    a boolean that decides if phi’(v) (activation derivative) is used in the integration process/update.

    Note

    specifying None will automatically set this node to use Euler integration w/ use_dfx=False

  • prior_kernel

    Dict defining the type of prior function to apply over neural activities. The expected keys and corresponding value types are specified below:

    ’prior_type’

    type of (centered) distribution to use as a prior over neural activities. If “laplace” is specified, a Laplacian distribution is used, if “cauchy” is specified, a Cauchy distribution will be used, if “gaussian” is specified, a Gaussian distribution will be used, and if “exp” is specified, the exponential distribution will be used.

    ’lambda’

    the scale factor controlling the strength of the prior applied to neural activities.

    Note

    specifying None will result in no prior distribution being applied

  • threshold_kernel

    Dict defining the type of threshold function to apply over neural activities. The expected keys and corresponding value types are specified below:

    ’threshold_type’

    type of thresholding function to apply over neural activities. If “soft_threshold” is specified, a soft thresholding function is used, and if “cauchy_threshold” is specified, a Cauchy thresholding function is used.

    ’thr_lambda’

    the scale factor controlling the strength of the threshold applied to neural activities.

    Note

    specifying None will result in no threshold function being applied

  • trace_kernel – <unused> (Default = None)

  • samp_fx – the sampling/stochastic activation function – S(v) – to apply to neural activities (Default = identity)
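
A minimal construction sketch for an SNode is shown below, using the kernel dictionaries documented above; the specific hyperparameter values are illustrative.

    from ngclearn.engine.nodes.snode import SNode

    integrate_cfg = {"integrate_type": "euler", "use_dfx": True}
    prior_cfg = {"prior_type": "laplace", "lambda": 0.001}

    z1 = SNode(name="z1", dim=64, beta=0.1, leak=0.001, zeta=1.0,
               act_fx="relu", integrate_kernel=integrate_cfg,
               prior_kernel=prior_cfg)
    # each call to z1.step() then applies:  z <- z * zeta + d.z/d.t * beta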

step(injection_table=None, skip_core_calc=False)[source]

Executes this node's internal integration/calculation for one discrete step in time, i.e., runs the simulation of this node for one time step.

Parameters
  • injection_table

  • skip_core_calc – skips the core components of this node’s calculation (Default = False)

clear()

Wipes/clears values of each compartment in this node (and sets .is_clamped = False).

ENode Model

The ENode class extends from the base Node class, and represents a (rate-coded) error node simplified to its fixed-point form. In conjunction with the corresponding SNode and FNode classes, this serves as the core modeling component of a higher-level NGCGraph class used in simulation.

class ngclearn.engine.nodes.enode.ENode(name, dim, error_type='mse', act_fx='identity', batch_size=1, precis_kernel=None, constraint_kernel=None, ex_scale=1.0)[source]

Implements a (rate-coded) error node simplified to its fixed-point form:

e = target - mu // in the case of squared error (Gaussian error units)
e = signum(target - mu) // in the case of absolute error (Laplace error units)

where:
  target - a desired target activity value (target = pred_targ)
  mu - an external prediction signal of the target activity value (mu = pred_mu)

Compartments:
  * pred_mu - prediction signals (deposited signals summed)
  * pred_targ - target signals (deposited signals summed)
  * z - the error neural activities, set as z = e
  * phi(z) - the post-activation of the error activities in z
  * L - the local loss represented by the error activities
  * avg_scalar - multiplies L and z by (1/avg_scalar)

Parameters
  • name – the name/label of this node

  • dim – number of neurons this node will contain/model

  • error_type – type of distance/error measured by this error node. Setting this to “mse” will set up squared-error neuronal units (derived from L = 0.5 * ( Sum_j (target - mu)^2_j )), and “mae” will set up mean absolute error neuronal units (derived from L = Sum_j |target - mu| ).

  • act_fx – activation function – phi(v) – to apply to error activities (Default = “identity”)

  • batch_size – batch-size this node should assume (for use with static graph optimization)

  • precis_kernel

    2-Tuple defining the initialization of the precision weighting synapses that will modulate the error neural activities. For example, an argument could be: (“uniform”, 0.01) The value types inside each slot of the tuple are specified below:

    init_scheme (Tuple[0])

    initialization scheme, e.g., “uniform”, “gaussian”.

    init_scale (Tuple[1])

    scalar factor controlling the scale/magnitude of initialization distribution, e.g., 0.01.

    Note

    specifying None will result in no precision weighting being applied to the error neurons. Note that care should be taken with respect to this particular argument, as precision synapses involve an approximate inversion throughout the simulation steps.

  • constraint_kernel

    Dict defining the constraint type to be applied to the learnable parameters of this node. The expected keys and corresponding value types are specified below:

    ’clip_type’

    type of clipping constraint to be applied to learnable parameters/synapses. If “norm_clip” is specified, then norm-clipping will be applied (with a check on whether the norm exceeds “clip_mag”), and if “forced_norm_clip” is specified, then norm-clipping will be applied unconditionally each time apply_constraint() is called.

    ’clip_mag’

    the magnitude of the worst-case bounds of the clip to apply/enforce.

    ’clip_axis’

    the axis along which the clipping is to be applied (to each matrix).

    Note

    specifying None will mean no constraints are applied to this node’s parameters

  • ex_scale – a scale factor to amplify error neuron signals (Default = 1)
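
A minimal construction sketch for an ENode follows; the precision and constraint settings are illustrative values chosen to match the formats documented above.

    from ngclearn.engine.nodes.enode import ENode

    constraint_cfg = {"clip_type": "norm_clip", "clip_mag": 1.0, "clip_axis": 0}

    e1 = ENode(name="e1", dim=64, error_type="mse", act_fx="identity",
               precis_kernel=("uniform", 0.01),  # learn a precision weighting
               constraint_kernel=constraint_cfg)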

step(injection_table=None, skip_core_calc=False)[source]

Executes this node's internal integration/calculation for one discrete step in time, i.e., runs the simulation of this node for one time step.

Parameters
  • injection_table

  • skip_core_calc – skips the core components of this node’s calculation (Default = False)

calc_update(update_radius=- 1.0)[source]

Calculates the updates to local internal synaptic parameters related to this specific node given current relevant values (such as node-level precision matrices).

Parameters

update_radius – radius of Gaussian ball to constrain computed update matrices by (i.e., clipping by Frobenius norm)

compute_precision(rebuild_cov=True)[source]

Co-function that pre-computes the precision matrices for this NGC node. NGC uses the Cholesky-decomposition form of the precision matrix, (Sigma)^{-1}.

Parameters

rebuild_cov – rebuild the underlying covariance matrix after re-computing precision (Default = True)

clear()

Wipes/clears values of each compartment in this node (and sets .is_clamped = False).

FNode Model

The FNode class extends from the base Node class, and represents a stateless node that simply aggregates (via summation) its received inputs. In conjunction with the corresponding SNode and ENode classes, this serves as the core modeling component of a higher-level NGCGraph class used in simulation.

class ngclearn.engine.nodes.fnode.FNode(name, dim, act_fx='identity', batch_size=1)[source]

Implements a feedforward (stateless) transmission node:

z = dz

where:
  dz - aggregated input signals from other nodes/locations

Compartments:
  * dz - incoming pressures/signals (deposited signals summed)
  * z - the state values/neural activities, set as: z = dz
  * phi(z) - the post-activation of the neural activities

Parameters
  • name – the name/label of this node

  • dim – number of neurons this node will contain/model

  • act_fx – activation function – phi(v) – to apply to neural activities

  • batch_size – batch-size this node should assume (for use with static graph optimization)
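
Constructing an FNode only requires a name, a dimensionality, and (optionally) an activation function, as sketched below; recall that it simply aggregates (via summation) whatever signals are deposited into its dz compartment.

    from ngclearn.engine.nodes.fnode import FNode

    f1 = FNode(name="f1", dim=64, act_fx="identity")  # z = dz, phi(z) = phi(dz)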

step(injection_table=None, skip_core_calc=False)[source]

Executes this node's internal integration/calculation for one discrete step in time, i.e., runs the simulation of this node for one time step.

Parameters
  • injection_table

  • skip_core_calc – skips the core components of this node’s calculation (Default = False)
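
Putting these pieces together, the sketch below wires a small predict-and-correct circuit out of the node types documented above. Compartment names follow this section; the initialization values, the "A_init" key, and the manual per-node stepping (normally orchestrated by a higher-level NGCGraph) are illustrative assumptions rather than a prescribed recipe, and wire_to is again assumed to return the generated cable. In practice, the target pathway into an error node is usually an identity-style cable; a dense cable is used here only to stay within the cable type documented in this section.

    import numpy as np
    from ngclearn.engine.nodes.snode import SNode
    from ngclearn.engine.nodes.enode import ENode

    # latent state z1 predicts the activity of observed state z0
    z1 = SNode(name="z1", dim=32, beta=0.1, act_fx="relu")
    z0 = SNode(name="z0", dim=10, beta=0.1, act_fx="identity")
    e0 = ENode(name="e0", dim=10, error_type="mse")

    dense_cfg = {"type": "dense",
                 "init_kernels": {"A_init": ("gaussian", 0.025)},  # illustrative
                 "seed": 42}

    # prediction pathway: z1 -> e0.pred_mu; target pathway: z0 -> e0.pred_targ
    z1_to_e0 = z1.wire_to(e0, src_comp="phi(z)", dest_comp="pred_mu",
                          cable_kernel=dense_cfg)
    z0.wire_to(e0, src_comp="z", dest_comp="pred_targ", cable_kernel=dense_cfg)
    # error feedback re-uses the transpose of the prediction cable
    e0.wire_to(z1, src_comp="phi(z)", dest_comp="dz_bu",
               mirror_path_kernel=(z1_to_e0, "symm_tied"))

    # clamp an observation to z0 and settle the circuit for a few steps
    z0.clamp(("z", np.ones((1, 10), dtype=np.float32)))
    for t in range(5):
        z1.step()
        z0.step()
        e0.step()
    # local synaptic updates related to this error node (clipped to radius 1.0)
    delta = e0.calc_update(update_radius=1.0)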