GNCN-PDH (Ororbia & Kifer, 2020/2022)
This circuit implements one of the models proposed in (Ororbia & Kifer, 2022) [1].
Specifically, this model is unsupervised and can be used to process sensory
pattern (row) vector(s) x to infer internal latent states. Beyond its settling
and update routines, this class offers a projection function by which ancestral
sampling may be carried out through the underlying directed generative model
formed by this NGC system.
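As a rough sketch of what this projection entails (this is not ngclearn's actual implementation; the weight matrices, layer sizes, and activation functions below are hypothetical stand-ins, and the model's skip connections and precision terms are omitted), ancestral sampling through a three-layer directed generative model can be written as:

```python
import numpy as np

def ancestral_sample(z3, W3, W2, W1,
                     act=np.tanh,
                     out=lambda v: 1.0 / (1.0 + np.exp(-v))):
    """Project a top-level sample z3 down the directed chain z3 -> z2 -> z1 -> z0.

    W3, W2, W1 are hypothetical generative weight matrices; `act` plays the
    role of act_fx and `out` the role of out_fx (sigmoid by default).
    """
    mu2 = act(z3 @ W3)   # prediction of layer z2
    mu1 = act(mu2 @ W2)  # prediction of layer z1
    mu0 = out(mu1 @ W1)  # prediction of the sensory layer z0
    return mu0

rng = np.random.default_rng(0)
z3 = rng.standard_normal((1, 8))          # top-level noise sample
W3 = rng.standard_normal((8, 16)) * 0.05  # illustrative small random weights
W2 = rng.standard_normal((16, 16)) * 0.05
W1 = rng.standard_normal((16, 32)) * 0.05
x_sample = ancestral_sample(z3, W3, W2, W1)
print(x_sample.shape)  # (1, 32)
```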
The architecture of the GNCN-PDH is summarized by the node name structure given in the class description below.
- class ngclearn.museum.gncn_pdh.GNCN_PDH(args)[source]
Structure for constructing the model proposed in:
Ororbia, A., and Kifer, D. The neural coding framework for learning generative models. Nature Communications 13, 2064 (2022).
This model, under the NGC computational framework, is referred to as the GNCN-PDH, according to the naming convention in (Ororbia & Kifer 2022).
Historical Note: the arXiv preprint that preceded the publication above is:
Ororbia, Alexander, and Daniel Kifer. "The neural coding framework for learning generative models." arXiv preprint arXiv:2012.03405 (2020).
Node Name Structure:
z3 -(z3-mu2)-> mu2 ;e2; z2 -(z2-mu1)-> mu1 ;e1; z1 -(z1-mu0)-> mu0 ;e0; z0
z3 -(z3-mu1)-> mu1; z2 -(z2-mu0)-> mu0
e2 -> e2 * Sigma2; e1 -> e1 * Sigma1 // Precision weighting
z3 -> z3 * Lat3; z2 -> z2 * Lat2; z1 -> z1 * Lat1 // Lateral competition
e2 -(e2-z3)-> z3; e1 -(e1-z2)-> z2; e0 -(e0-z1)-> z1 // Error feedback
- Parameters
args – a Config dictionary containing necessary meta-parameters for the GNCN-PDH
DEFINITION NOTE: args should contain values for the following:
* batch_size - the fixed batch-size to be fed into this model
* z_top_dim - # of latent variables in layer z3 (top-most layer)
* z_dim - # of latent variables in layers z1 and z2
* x_dim - # of latent variables in layer z0 (or sensory input x)
* seed - number to control determinism of weight initialization
* wght_sd - standard deviation of the Gaussian initialization of weights
* beta - latent state update factor
* leak - strength of the leak variable in the latent states
* K - # of steps to take when conducting iterative inference/settling
* act_fx - activation function for layers z1, z2, and z3
* out_fx - activation function for layer mu0, the prediction of z0 (Default: sigmoid)
* n_group - number of neurons within a competition group for z2 and z1 (the sizes of z2 and z1 should be divisible by this number)
* n_top_group - number of neurons within a competition group for z3 (the size of z3 should be divisible by this number)
* alpha_scale - the strength of self-excitation
* beta_scale - the strength of cross-inhibition
- project(z_sample)[source]
Run projection scheme to get a sample of the underlying directed generative model given the clamped variable z_sample
- Parameters
z_sample – the input noise sample to project through the NGC graph
- Returns
x_sample (sample(s) of the underlying generative model)
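For illustration, the meta-parameters expected in args (see the definition note above) might be gathered in a plain dictionary as below. Every numeric value here is made up for the example and is not a recommended setting, and ngclearn's actual Config object may be constructed differently:

```python
# Illustrative meta-parameter values for a GNCN-PDH; these numbers only show
# the expected shape of `args` and do not come from the paper.
gncn_pdh_args = {
    "batch_size": 128,
    "z_top_dim": 100,     # size of layer z3 (top-most layer)
    "z_dim": 360,         # size of layers z1 and z2
    "x_dim": 784,         # size of sensory layer z0 (e.g., flattened 28x28 images)
    "seed": 69,           # controls determinism of weight initialization
    "wght_sd": 0.025,     # std-dev of Gaussian weight initialization
    "beta": 0.1,          # latent state update factor
    "leak": 0.001,        # leak strength in the latent states
    "K": 50,              # settling steps
    "act_fx": "relu",
    "out_fx": "sigmoid",
    "n_group": 18,        # z_dim must be divisible by this
    "n_top_group": 10,    # z_top_dim must be divisible by this
    "alpha_scale": 0.15,  # self-excitation strength
    "beta_scale": 0.1,    # cross-inhibition strength
}
```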
- settle(x, calc_update=True)[source]
Run an iterative settling process to find latent states given clamped input and output variables
- Parameters
x – sensory input to reconstruct/predict
calc_update – if True, compute synaptic updates at the end of the settling process (Default = True)
- Returns
x_hat (predicted x)
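The settling loop can be pictured, in heavily simplified form, as repeated error correction. The following NumPy sketch uses a single latent layer and omits the precision weighting and lateral competition described above, so it is only a cartoon of the full K-step process; all names and values are illustrative:

```python
import numpy as np

def settle(x, W, E, K=50, beta=0.1, leak=0.001, act=np.tanh):
    """Toy one-layer iterative settling: repeatedly predict x, measure the
    error, and nudge the latent state z along the error-feedback path.

    W: generative weights (z -> prediction of x); E: error-feedback weights.
    Both are hypothetical stand-ins for the model's learned synapses.
    """
    z = np.zeros((x.shape[0], W.shape[0]))  # latent states start at rest
    for _ in range(K):
        mu0 = act(z) @ W                 # prediction of the clamped input
        e0 = x - mu0                     # error neuron activity
        dz = e0 @ E                      # error feedback into the latent state
        z = z + beta * (dz - leak * z)   # leaky state update
    return z, act(z) @ W                 # final latent state and prediction x_hat

rng = np.random.default_rng(1)
x = rng.random((4, 32))                  # a small batch of sensory patterns
W = rng.standard_normal((16, 32)) * 0.05
E = W.T.copy()                           # feedback is often tied to W^T
z, x_hat = settle(x, W, E)
print(x_hat.shape)  # (4, 32)
```

With the feedback weights tied to the transpose of the generative weights, each state update moves z in a direction that reduces the prediction error, which is the intuition behind the K-step settling process.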
- calc_updates(avg_update=True, decay_rate=-1.0)[source]
Calculate adjustments to parameters under this given model and its current internal state values
- Returns
delta, a list of synaptic matrix updates (that follow order of .theta)
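A typical pattern for consuming these deltas, sketched below with stand-in NumPy matrices rather than a real model, is to apply each update to the matching entry of .theta; a real training loop would normally route delta through an optimizer instead of this bare loop:

```python
import numpy as np

def apply_updates(theta, delta, lr=0.001):
    """Apply each synaptic update to its matching parameter matrix.

    Relies on the documented contract that `delta` follows the order of
    `.theta`; `theta`, `delta`, and `lr` here are hypothetical stand-ins.
    """
    return [W + lr * dW for W, dW in zip(theta, delta)]

# Stand-in parameter list and matching update list (same order, same shapes).
theta = [np.zeros((3, 3)), np.zeros((2, 4))]
delta = [np.ones((3, 3)), np.ones((2, 4))]
theta = apply_updates(theta, delta, lr=0.5)
print(theta[0][0, 0])  # 0.5
```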
- clear()[source]
Clears the states/values of the stateful nodes in this NGC system
References:
[1] Ororbia, A., and Kifer, D. The neural coding framework for learning
generative models. Nature Communications 13, 2064 (2022).