ngclearn.museum package

Submodules

ngclearn.museum.gncn_pdh module

class ngclearn.museum.gncn_pdh.GNCN_PDH(args)[source]

Bases: object

Structure for constructing the model proposed in:

Ororbia, A., and Kifer, D. The neural coding framework for learning generative models. Nature Communications 13, 2064 (2022).

This model, under the NGC computational framework, is referred to as the GNCN-PDH, according to the naming convention in (Ororbia & Kifer 2022).

Historical Note:
The arXiv preprint that preceded the publication above is:
Ororbia, Alexander, and Daniel Kifer. “The neural coding framework for learning generative models.” arXiv preprint arXiv:2012.03405 (2020).
Node Name Structure:
z3 -(z3-mu2)-> mu2 ;e2; z2 -(z2-mu1)-> mu1 ;e1; z1 -(z1-mu0)-> mu0 ;e0; z0
z3 -(z3-mu1)-> mu1; z2 -(z2-mu0)-> mu0
e2 -> e2 * Sigma2; e1 -> e1 * Sigma1 // Precision weighting
z3 -> z3 * Lat3; z2 -> z2 * Lat2; z1 -> z1 * Lat1 // Lateral competition
e2 -(e2-z3)-> z3; e1 -(e1-z2)-> z2; e0 -(e0-z1)-> z1 // Error feedback
Parameters

args – a Config dictionary containing necessary meta-parameters for the GNCN-PDH

DEFINITION NOTE:
args should contain values for the following:
* batch_size - the fixed batch-size to be fed into this model
* z_top_dim - # of latent variables in layer z3 (top-most layer)
* z_dim - # of latent variables in layers z1 and z2
* x_dim - # of latent variables in layer z0 or sensory x
* seed - number to control determinism of weight initialization
* wght_sd - standard deviation of Gaussian initialization of weights
* beta - latent state update factor
* leak - strength of the leak variable in the latent states
* K - # of steps to take when conducting iterative inference/settling
* act_fx - activation function for layers z1, z2, and z3
* out_fx - activation function for layer mu0 (prediction of z0) (Default: sigmoid)
* n_group - number of neurons w/in a competition group for z2 and z1 (sizes of z2 and z1 should be divisible by this number)
* n_top_group - number of neurons w/in a competition group for z3 (size of z3 should be divisible by this number)
* alpha_scale - the strength of self-excitation
* beta_scale - the strength of cross-inhibition
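
The meta-parameters above are passed in through the args Config. Below is a purely illustrative, dict-like stand-in with hypothetical values; in practice these settings are normally read from a configuration file, and the exact Config type depends on your ngc-learn version.

    # Hypothetical meta-parameter values; key names follow the DEFINITION NOTE above.
    args = {
        "batch_size": 128,
        "z_top_dim": 360,
        "z_dim": 360,
        "x_dim": 784,
        "seed": 69,
        "wght_sd": 0.025,
        "beta": 0.1,
        "leak": 0.001,
        "K": 50,
        "act_fx": "relu",
        "out_fx": "sigmoid",
        "n_group": 18,
        "n_top_group": 18,
        "alpha_scale": 0.15,
        "beta_scale": 0.1,
    }
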
calc_updates(avg_update=True, decay_rate=-1.0)[source]

Calculate adjustments to parameters under this given model and its current internal state values

Returns

delta, a list of synaptic matrix updates (that follow order of .theta)

clear()[source]

Clears the states/values of the stateful nodes in this NGC system

print_norms()[source]

Prints the Frobenius norms of each parameter of this system

project(z_sample)[source]

Run projection scheme to get a sample of the underlying directed generative model given the clamped variable z_sample

Parameters

z_sample – the input noise sample to project through the NGC graph

Returns

x_sample (sample(s) of the underlying generative model)

set_weights(source, tau=0.005)[source]

Deep copies weight variables of another model (of the same exact type) into this model’s weight variables/parameters.

Parameters
  • source – the source model to extract/transfer params from

  • tau – if > 0, the Polyak averaging coefficient (-1 sets to hard deep copy/transfer)
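
For reference, Polyak averaging blends the source model's parameters into this model's parameters instead of overwriting them. The sketch below shows one common convention; this is an assumption for illustration, and the library's exact weighting may differ.

    # Assumed convention: W_this <- tau * W_source + (1 - tau) * W_this, with a small tau.
    # tau <= 0 (e.g., tau = -1) instead performs a hard deep copy: W_this <- W_source.
    def polyak_blend(W_this, W_source, tau=0.005):
        if tau > 0.0:
            return tau * W_source + (1.0 - tau) * W_this
        return W_source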

settle(x, calc_update=True)[source]

Run an iterative settling process to find latent states given clamped input and output variables

Parameters
  • x – sensory input to reconstruct/predict

  • calc_update – if True, computes synaptic updates @ end of settling process (Default = True)

Returns

x_hat (predicted x)

update(x, avg_update=True)[source]

Updates synaptic parameters/connections given sensory input x

Parameters

x – a sensory sample or batch of sensory samples
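
Putting the pieces above together, a single training step typically runs settle, retrieves delta from calc_updates, applies each update to the matching entry of .theta, and then calls clear before the next batch. The sketch below is illustrative only; it assumes args as sketched earlier, x as a (batch_size, x_dim) array, and leaves the actual optimizer step (which depends on your ngc-learn/TensorFlow setup) as a commented placeholder.

    import numpy as np
    from ngclearn.museum.gncn_pdh import GNCN_PDH

    model = GNCN_PDH(args)                        # args: dict-like Config sketched above
    x = np.random.rand(args["batch_size"], args["x_dim"])  # placeholder sensory batch

    x_hat = model.settle(x)                       # K-step iterative inference; returns prediction of x
    delta = model.calc_updates()                  # one update per synaptic matrix, ordered as .theta
    # for W, dW in zip(model.theta, delta):       # hypothetical apply step; real code would use an
    #     W.assign_sub(lr * dW)                   # optimizer and the library's actual theta handle
    model.clear()                                 # reset stateful nodes before the next batch

Per the update method documented above, update(x) offers a single-call alternative that performs this adjustment directly from a batch x.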

ngclearn.museum.gncn_t1 module

class ngclearn.museum.gncn_t1.GNCN_t1(args)[source]

Bases: object

Structure for constructing the model proposed in:

Rao, Rajesh PN, and Dana H. Ballard. “Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects.” Nature neuroscience 2.1 (1999): 79-87.

Note that this model includes a Laplacian prior to induce some level of sparsity in the latent activities. This model, under the NGC computational framework, is referred to as the GNCN-t1/Rao, according to the naming convention in (Ororbia & Kifer 2022).

Node Name Structure:
z3 -(z3-mu2)-> mu2 ;e2; z2 -(z2-mu1)-> mu1 ;e1; z1 -(z1-mu0)-> mu0 ;e0; z0
Parameters

args – a Config dictionary containing necessary meta-parameters for the GNCN-t1

DEFINITION NOTE:
args should contain values for the following:
* batch_size - the fixed batch-size to be fed into this model
* z_top_dim - # of latent variables in layer z3 (top-most layer)
* z_dim - # of latent variables in layers z1 and z2
* x_dim - # of latent variables in layer z0 or sensory x
* seed - number to control determinism of weight initialization
* wght_sd - standard deviation of Gaussian initialization of weights
* beta - latent state update factor
* leak - strength of the leak variable in the latent states
* lmbda - strength of the Laplacian prior applied over latent state activities
* K - # of steps to take when conducting iterative inference/settling
* act_fx - activation function for layers z1, z2, and z3
* out_fx - activation function for layer mu0 (prediction of z0) (Default: sigmoid)
calc_updates(avg_update=True, decay_rate=-1.0)[source]

Calculate adjustments to parameters under this given model and its current internal state values

Returns

delta, a list of synaptic matrix updates (that follow order of .theta)

clear()[source]

Clears the states/values of the stateful nodes in this NGC system

print_norms()[source]

Prints the Frobenius norms of each parameter of this system

project(z_sample)[source]

Run projection scheme to get a sample of the underlying directed generative model given the clamped variable z_sample

Parameters

z_sample – the input noise sample to project through the NGC graph

Returns

x_sample (sample(s) of the underlying generative model)

set_weights(source, tau=0.005)[source]

Deep copies weight variables of another model (of the same exact type) into this model’s weight variables/parameters.

Parameters
  • source – the source model to extract/transfer params from

  • tau – if > 0, the Polyak averaging coefficient (-1 sets to hard deep copy/transfer)

settle(x, calc_update=True)[source]

Run an iterative settling process to find latent states given clamped input and output variables

Parameters
  • x – sensory input to reconstruct/predict

  • calc_update – if True, computes synaptic updates @ end of settling process (Default = True)

Returns

x_hat (predicted x)

update(x, avg_update=True)[source]

Updates synaptic parameters/connections given sensory input x

Parameters

x – a sensory sample or batch of sensory samples
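
After training, project can be used to draw samples from the learned directed generative model by clamping a noise vector at the top layer z3. A minimal, illustrative sketch follows; all meta-parameter values are hypothetical, and a standard normal is used only as a stand-in for a suitable top-layer prior.

    import numpy as np
    from ngclearn.museum.gncn_t1 import GNCN_t1

    args = dict(batch_size=32, z_top_dim=100, z_dim=100, x_dim=784, seed=69,
                wght_sd=0.025, beta=0.1, leak=0.0, lmbda=0.01, K=50,
                act_fx="tanh", out_fx="sigmoid")      # hypothetical Config stand-in

    model = GNCN_t1(args)
    z_sample = np.random.randn(8, args["z_top_dim"])  # 8 top-layer noise vectors
    x_sample = model.project(z_sample)                # sample(s) from the generative model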

ngclearn.museum.gncn_t1_ffm module

class ngclearn.museum.gncn_t1_ffm.GNCN_t1_FFM(args)[source]

Bases: object

Structure for constructing the model proposed in:

Whittington, James CR, and Rafal Bogacz. “An approximation of the error backpropagation algorithm in a predictive coding network with local hebbian synaptic plasticity.” Neural computation 29.5 (2017): 1229-1262.

This model, under the NGC computational framework, is referred to as the GNCN-t1-FFM, a slight modification of the naming convention in (Ororbia & Kifer 2022, Supplementary Material); “FFM” denotes a feedforward mapping.

Node Name Structure:
z3 -(z3-mu2)-> mu2 ;e2; z2 -(z2-mu1)-> mu1 ;e1; z1 -(z1-mu0)-> mu0 ;e0; z0
Note that z3 = x and z0 = y, yielding a classifier or regressor
Parameters

args – a Config dictionary containing necessary meta-parameters for the GNCN-t1-FFM

DEFINITION NOTE:
args should contain values for the following:
* batch_size - the fixed batch-size to be fed into this model
* x_dim - # of latent variables in layer z3 or sensory input x
* z_dim - # of latent variables in layers z1 and z2
* y_dim - # of latent variables in layer z0 or output target y
* seed - number to control determinism of weight initialization
* wght_sd - standard deviation of Gaussian initialization of weights
* beta - latent state update factor
* leak - strength of the leak variable in the latent states
* lmbda - strength of the Laplacian prior applied over latent state activities
* K - # of steps to take when conducting iterative inference/settling
* act_fx - activation function for layers z1, z2
* out_fx - activation function for layer mu0 (prediction of z0 or y) (Default: identity)
calc_updates(avg_update=True, decay_rate=-1.0)[source]

Calculate adjustments to parameters under this given model and its current internal state values

Returns

delta, a list of synaptic matrix updates (that follow order of .theta)

clear()[source]

Clears the states/values of the stateful nodes in this NGC system

predict(x)[source]

Predicts the target (either a probability distribution over labels, i.e., p(y|x), or a vector of regression targets) for a given x

Parameters

x – the input sample to project through the NGC graph

Returns

y_sample (sample(s) of the underlying predictive model)

print_norms()[source]

Prints the Frobenius norms of each parameter of this system

project(x_sample)[source]

(Internal function) Run projection scheme to get a sample of the underlying directed generative model given the clamped variable x_sample (i.e., z3 = x)

Parameters

x_sample – the input sample to project through the NGC graph

Returns

y_sample (sample(s) of the underlying predictive model)

set_weights(source, tau=-1.0)[source]

Deep copies weight variables of another model (of the same exact type) into this model’s weight variables/parameters.

Parameters
  • source – the source model to extract/transfer params from

  • tau – if > 0, the Polyak averaging coefficient (-1 sets to hard deep copy/transfer)

settle(x, y, calc_update=True)[source]

Run an iterative settling process to find latent states given clamped input and output variables

Parameters
  • x – sensory input to clamp top-most layer (z3) to

  • y – target output activity, i.e., label or regression target

  • calc_update – if True, computes synaptic updates @ end of settling process (Default = True)

Returns

y_hat (predicted y)

update(x, y, avg_update=True)[source]

Updates synaptic parameters/connections given inputs x and y

Parameters
  • x – a sensory sample or batch of sensory samples

  • y – a target or batch of targets
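
A minimal, illustrative usage sketch for supervised training and prediction with this class; all meta-parameter values are hypothetical, and the inputs and one-hot targets are random placeholders.

    import numpy as np
    from ngclearn.museum.gncn_t1_ffm import GNCN_t1_FFM

    args = dict(batch_size=32, x_dim=784, z_dim=128, y_dim=10, seed=69,
                wght_sd=0.025, beta=0.1, leak=0.0, lmbda=0.0, K=50,
                act_fx="relu", out_fx="identity")     # hypothetical Config stand-in

    model = GNCN_t1_FFM(args)
    x = np.random.rand(args["batch_size"], args["x_dim"])   # placeholder inputs
    y = np.eye(args["y_dim"])[np.random.randint(0, args["y_dim"], args["batch_size"])]

    y_hat = model.settle(x, y)       # clamp x at z3 and y at z0, then settle
    delta = model.calc_updates()     # synaptic updates, ordered as .theta
    model.clear()

    y_pred = model.predict(x)        # fast feedforward prediction (no settling)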

ngclearn.museum.gncn_t1_sc module

class ngclearn.museum.gncn_t1_sc.GNCN_t1_SC(args)[source]

Bases: object

Structure for constructing the sparse coding model proposed in:

Olshausen, B., Field, D. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381, 607–609 (1996).

Note that this model imposes a factorial (Cauchy) prior to induce sparsity in the latent activities z1 (the latent codebook). Synapses are initialized from a (fan-in) scaled uniform distribution. Under the NGC computational framework naming convention (Ororbia & Kifer 2022), this model is referred to as the GNCN-t1/SC (SC = sparse coding) or GNCN-t1/Olshausen.

Node Name Structure:
p(z1) ; z1 -(z1-mu0)-> mu0 ;e0; z0
Cauchy prior applied for p(z1)

Note: you can also recover the model learned through ISTA by using a thresholding function such as the “soft_threshold” instead of a factorial prior over the latents (make sure “prior” is set to “none” in this case). This results in the GNCN-t1/SC emulating a system similar to that proposed in:

Daubechies, Ingrid, Michel Defrise, and Christine De Mol. “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint.” Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences 57.11 (2004): 1413-1457.

Parameters

args – a Config dictionary containing necessary meta-parameters for the GNCN-t1/SC

DEFINITION NOTE:
args should contain values for the following:
* batch_size - the fixed batch-size to be fed into this model
* z_dim - # of latent variables in layer z1
* x_dim - # of latent variables in layer z0 or sensory x
* seed - number to control determinism of weight initialization
* beta - latent state update factor
* leak - strength of the leak variable in the latent states (Default = 0)
* prior - type of prior to use (Default = “cauchy”)
* lmbda - strength of the prior applied over latent state activities (only if prior != “none”)
* threshold - type of threshold to use (Default = “none”)
* thr_lmbda - strength of the threshold applied over latent state activities (only if threshold != “none”)
* n_group - must be > 0 if lat_type != None and such that (z_dim mod n_group) == 0
* K - # of steps to take when conducting iterative inference/settling
* act_fx - activation function for layers z1 (Default = identity)
* out_fx - activation function for layer mu0 (prediction of z0) (Default: identity)
calc_updates(avg_update=True)[source]

Calculate adjustments to parameters under this given model and its current internal state values

Returns

delta, a list of synaptic matrix updates (that follow order of .theta)

clear()[source]

Clears the states/values of the stateful nodes in this NGC system

print_norms()[source]

Prints the Frobenius norms of each parameter of this system

project(z_sample)[source]

Run projection scheme to get a sample of the underlying directed generative model given the clamped variable z_sample

Parameters

z_sample – the input noise sample to project through the NGC graph

Returns

x_sample (sample(s) of the underlying generative model)

set_weights(source, tau=0.005)[source]

Deep copies weight variables of another model (of the same exact type) into this model’s weight variables/parameters.

Parameters
  • source – the source model to extract/transfer params from

  • tau – if > 0, the Polyak averaging coefficient (-1 sets to hard deep copy/transfer)

settle(x, K=-1, cold_start=True, calc_update=True)[source]

Run an iterative settling process to find latent states given clamped input and output variables

Parameters
  • x – sensory input to reconstruct/predict

  • K – number of steps to run iterative settling for

  • cold_start – start settling process states from zero (Leave this to True)

  • calc_update – if True, computes synaptic updates @ end of settling process (Default = True)

Returns

x_hat (predicted x)

update(x, avg_update=True)[source]

Updates synaptic parameters/connections given sensory input

Parameters

x – a sensory sample or batch of sensory samples
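
A minimal, illustrative sparse-coding sketch: infer codes for a batch of placeholder image patches, collect dictionary updates, and reset. The meta-parameter values and the choice K=300 are hypothetical, and the optimizer step that consumes delta is left to your own setup.

    import numpy as np
    from ngclearn.museum.gncn_t1_sc import GNCN_t1_SC

    args = dict(batch_size=64, z_dim=100, x_dim=256, seed=69, beta=0.05, leak=0.0,
                prior="cauchy", lmbda=0.01, threshold="none", thr_lmbda=0.0,
                K=300, act_fx="identity", out_fx="identity")  # hypothetical Config stand-in

    model = GNCN_t1_SC(args)
    patches = np.random.rand(args["batch_size"], args["x_dim"])  # placeholder 16x16 patches

    x_hat = model.settle(patches, K=args["K"], cold_start=True)  # infer sparse codes z1
    delta = model.calc_updates()     # dictionary-learning style updates, ordered as .theta
    model.clear()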

ngclearn.museum.gncn_t1_sigma module

class ngclearn.museum.gncn_t1_sigma.GNCN_t1_Sigma(args)[source]

Bases: object

Structure for constructing the model proposed in:

Friston, Karl. “Hierarchical models in the brain.” PLoS Computational Biology 4.11 (2008): e1000211.

Note this model includes a Laplacian prior to induce some level of sparsity in the latent activities. This model, under the NGC computational framework, is referred to as the GNCN-t1-Sigma/Friston, according to the naming convention in (Ororbia & Kifer 2022).

Node Name Structure:
z3 -(z3-mu2)-> mu2 ;e2; z2 -(z2-mu1)-> mu1 ;e1; z1 -(z1-mu0)-> mu0 ;e0; z0
e2 -> e2 * Sigma2; e1 -> e1 * Sigma1 // Precision weighting
Parameters

args – a Config dictionary containing necessary meta-parameters for the GNCN-t1-Sigma

DEFINITION NOTE:
args should contain values for the following:
* batch_size - the fixed batch-size to be fed into this model
* z_top_dim - # of latent variables in layer z3 (top-most layer)
* z_dim - # of latent variables in layers z1 and z2
* x_dim - # of latent variables in layer z0 or sensory x
* seed - number to control determinism of weight initialization
* wght_sd - standard deviation of Gaussian initialization of weights
* beta - latent state update factor
* leak - strength of the leak variable in the latent states
* lmbda - strength of the Laplacian prior applied over latent state activities
* K - # of steps to take when conducting iterative inference/settling
* act_fx - activation function for layers z1, z2, and z3
* out_fx - activation function for layer mu0 (prediction of z0) (Default: sigmoid)
calc_updates(avg_update=True, decay_rate=-1.0)[source]

Calculate adjustments to parameters under this given model and its current internal state values

Returns

delta, a list of synaptic matrix updates (that follow order of .theta)

clear()[source]

Clears the states/values of the stateful nodes in this NGC system

print_norms()[source]

Prints the Frobenius norms of each parameter of this system

project(z_sample)[source]

Run projection scheme to get a sample of the underlying directed generative model given the clamped variable z_sample

Parameters

z_sample – the input noise sample to project through the NGC graph

Returns

x_sample (sample(s) of the underlying generative model)

set_weights(source, tau=0.005)[source]

Deep copies weight variables of another model (of the same exact type) into this model’s weight variables/parameters.

Parameters
  • source – the source model to extract/transfer params from

  • tau – if > 0, the Polyak averaging coefficient (-1 sets to hard deep copy/transfer)

settle(x, calc_update=True)[source]

Run an iterative settling process to find latent states given clamped input and output variables

Parameters
  • x – sensory input to reconstruct/predict

  • calc_update – if True, computes synaptic updates @ end of settling process (Default = True)

Returns

x_hat (predicted x)

update(x, avg_update=True)[source]

Updates synaptic parameters/connections given sensory input x

Parameters

x – a sensory sample or batch of sensory samples
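
For evaluation without learning, settle can be called with calc_update=False and the returned reconstruction compared against the clamped input. A minimal, illustrative sketch follows; the meta-parameter values are hypothetical and mean squared error is used only as an example score.

    import numpy as np
    from ngclearn.museum.gncn_t1_sigma import GNCN_t1_Sigma

    args = dict(batch_size=32, z_top_dim=100, z_dim=100, x_dim=784, seed=69,
                wght_sd=0.025, beta=0.1, leak=0.0, lmbda=0.01, K=50,
                act_fx="tanh", out_fx="sigmoid")     # hypothetical Config stand-in

    model = GNCN_t1_Sigma(args)
    x = np.random.rand(args["batch_size"], args["x_dim"])   # placeholder evaluation batch

    x_hat = model.settle(x, calc_update=False)           # inference only, no synaptic updates
    mse = float(np.mean((np.asarray(x_hat) - x) ** 2))   # illustrative reconstruction error
    model.clear()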

ngclearn.museum.harmonium module

class ngclearn.museum.harmonium.Harmonium(args)[source]

Bases: object

Structure for constructing the Harmonium model proposed in:

Hinton, Geoffrey E. “Training products of experts by maximizing contrastive likelihood.” Technical Report, Gatsby Computational Neuroscience Unit (1999).

Node Name Structure:
z1 -(z1-z0)-> z0
z0 -(z0-z1)-> z1
Note: z1-z0 = (z0-z1)^T (transpose-tied synapses)

Another important reference for designing stable Harmoniums is here:

Hinton, Geoffrey E. “A practical guide to training restricted Boltzmann machines.” Neural networks: Tricks of the trade. Springer, Berlin, Heidelberg, 2012. 599-619.

Note: if you set the samp_fx to the “identity”, you force the Harmonium to work as a mean-field Harmonium/Boltzmann machine.

Parameters

args – a Config dictionary containing necessary meta-parameters for the Harmonium

DEFINITION NOTE:
args should contain values for the following:
* batch_size - the fixed batch-size to be fed into this model
* z_dim - # of latent variables in layer z1
* x_dim - # of latent variables in layer z0 (or sensory x)
* seed - number to control determinism of weight initialization
* wght_sd - standard deviation of Gaussian initialization of weights
* K - # of steps to take when conducting Contrastive Divergence
* act_fx - activation function for layer z1 (Default: sigmoid)
* out_fx - activation function for layer z0 (prediction of z0) (Default: sigmoid)
* samp_fx - sampling function for layer z1 (Default = bernoulli)
calc_updates(avg_update=True, decay_rate=-1.0)[source]

Calculate adjustments to parameters under this given model and its current internal state values

Returns

delta, a list of synaptic updates (that follow order of pos_phase.theta)

clear()[source]

Clears the states/values of the stateful nodes in this NGC system

print_norms()[source]

Prints the Frobenius norms of each parameter of this system

sample(K, x_sample=None, batch_size=1)[source]

Samples the underlying harmonium to generate a chain of patterns from a block Gibbs sampling process.

Parameters
  • K – number of steps to run the Gibbs sampler

  • x_sample – initial condition for the sampler (Default = None); if None, this will generate an initial sample of size (batch_size, z1_dim), where z1_dim is the dimensionality of the latent state.

  • batch_size – if x_sample is None, then this dictates how many samples in parallel to create per step of running the Gibbs sampler

settle(x, calc_update=True)[source]

Run an iterative settling process to find latent states given clamped input and output variables.

Parameters
  • x – sensory input to reconstruct/predict

  • calc_update – if True, computes synaptic updates @ end of settling process for both NGC system and inference co-model (Default = True)

Returns

x_hat (predicted x)
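
A minimal, illustrative sketch of Contrastive-Divergence-style training followed by block Gibbs sampling. All meta-parameter values are hypothetical, the optimizer step that consumes delta is omitted (it depends on your ngc-learn setup), and the name chain for sample's return value is ours.

    import numpy as np
    from ngclearn.museum.harmonium import Harmonium

    args = dict(batch_size=64, z_dim=256, x_dim=784, seed=69, wght_sd=0.025,
                K=1, act_fx="sigmoid", out_fx="sigmoid", samp_fx="bernoulli")  # hypothetical

    rbm = Harmonium(args)
    x = (np.random.rand(args["batch_size"], args["x_dim"]) > 0.5).astype("float32")

    x_hat = rbm.settle(x)        # positive/negative phases for K-step Contrastive Divergence
    delta = rbm.calc_updates()   # updates ordered as pos_phase.theta; apply via your optimizer
    rbm.clear()

    chain = rbm.sample(200, x_sample=None, batch_size=16)  # 200-step block Gibbs chain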

ngclearn.museum.snn_ba module

class ngclearn.museum.snn_ba.SNN_BA(args)[source]

Bases: object

A spiking neural network (SNN) classifier that adapts its synaptic cables via broadcast alignment. Specifically, this model is a generalization of the one proposed in:

Samadi, Arash, Timothy P. Lillicrap, and Douglas B. Tweed. “Deep learning with dynamic spiking neurons and fixed feedback weights.” Neural computation 29.3 (2017): 578-602.

This model encodes its real-valued inputs as Poisson spike trains with spikes emitted at a rate of approximately 63.75 Hz. The internal and output nodes follow the leaky integrate-and-fire spike response model with a relative refractory period of 1.0 ms. The integration time constant for this model has been set to 0.25 ms.

Node Name Structure:
z2 -(z2-mu1)-> mu1 ; z1 -(z1-mu0)-> mu0 ;e0; z0
e0 -> d1 and z1 -> d1, where d1 is a teaching signal for z1
Note that z2 = x and z0 = y, yielding a classifier
Parameters

args – a Config dictionary containing necessary meta-parameters for the SNN-BA

DEFINITION NOTE:
args should contain values for the following:
* batch_size - the fixed batch-size to be fed into this model
* z_dim - # of latent variables in layer z1
* x_dim - # of latent variables in layer z2 or sensory x
* y_dim - # of variables in layer z0 or target y
* seed - number to control determinism of weight initialization
* wght_sd - standard deviation of Gaussian initialization of weights (optional)
* T - # of time steps to take when conducting iterative settling (if not online)
clear()[source]

Clears the states/values of the stateful nodes in this NGC system

predict(x)[source]

Predicts the target for a given x. Specifically, this function will return spike counts, one per class in y; taking the argmax of these counts yields the model’s predicted label.

Parameters

x – the input sample to project through the NGC graph

Returns

y_sample (spike counts from the underlying predictive model)

settle(x, y=None, calc_update=True)[source]

Run an iterative settling process to find latent states given clamped input and output variables, specifically simulating the dynamics of the spiking neurons internal to this SNN model. Note that this function returns two outputs: the first is a count matrix (each row is a sample in the mini-batch and each column is the count for one class in y), and the second is an approximate probability distribution computed as a softmax over the average of the electrical currents produced at each step of simulation.

Parameters
  • x – sensory input to clamp top-most layer (z2) to

  • y – target output activity, i.e., label target

  • calc_update – if True, computes synaptic updates @ end of settling process (Default = True)

Returns

y_count (spike counts per class in y), y_hat (approximate probability distribution for y)
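
A minimal, illustrative sketch of training and prediction with this spiking classifier. The meta-parameter values and data are hypothetical placeholders; per the descriptions above, predictions are read out by taking the argmax over the returned per-class spike counts.

    import numpy as np
    from ngclearn.museum.snn_ba import SNN_BA

    args = dict(batch_size=32, z_dim=512, x_dim=784, y_dim=10, seed=69,
                wght_sd=0.055, T=100)    # hypothetical Config stand-in

    snn = SNN_BA(args)
    x = np.random.rand(args["batch_size"], args["x_dim"])   # real-valued inputs (Poisson-encoded internally)
    y = np.eye(args["y_dim"])[np.random.randint(0, args["y_dim"], args["batch_size"])]

    y_count, y_hat = snn.settle(x, y)    # simulate spiking dynamics; updates computed by default
    snn.clear()

    labels = np.argmax(np.asarray(snn.predict(x)), axis=1)  # predicted class labels at test time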

Module contents