A few remarks on miniproject 2: Hopfield

by Mahsa Barzegarkeshteli

Dear students,

If you chose the second miniproject, please read the following post. 

Please note that two typos have been corrected in the PDF file for clarity. The changes are:

1. In the task 1.7 description: you need to plot the capacity as a function of the network size, NOT the dictionary size.
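As a minimal sketch of how such a capacity-versus-size plot could be produced (this is an illustration, not the prescribed procedure: the `capacity` helper, the stability criterion of iterating from the stored pattern itself, and the standard ±1 Hebbian rule are all my assumptions, and your project may define capacity via noisy cues instead):

```python
import numpy as np

rng = np.random.default_rng(0)

def hopfield_weights(patterns):
    """Standard Hebbian weights (sum over patterns), no self-interaction."""
    P, N = patterns.shape
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)  # w_ii = 0
    return W

def is_stable(W, pattern, max_err=0.05, steps=5):
    """Start from the stored pattern; retrieved if final overlap error <= max_err."""
    s = pattern.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)  # synchronous sign update
    return np.mean(s != pattern) <= max_err

def capacity(N, max_err=0.05):
    """Largest dictionary size P for which all P random patterns are retrieved."""
    P = 0
    while True:
        patterns = rng.choice([-1, 1], size=(P + 1, N))
        W = hopfield_weights(patterns)
        if all(is_stable(W, p, max_err) for p in patterns):
            P += 1
        else:
            return P

# For the 1.7 plot, evaluate capacity(N) over several network sizes N
# and plot the result against N (e.g. with matplotlib).
```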

2. In equation 8 (the definition of the weights for excitatory neurons), the summation runs only over the index \mu. Also note that N is the number of excitatory neurons.

Also, to avoid confusion, I clarify some points based on your questions:

1. You do not need to hand in or code anything for Section 0, Getting started. This section only introduces the project and the general conventions used in the later exercises.

2. In Part 2, \theta is a constant scalar. So, for example, in Part 2.4 you can find the constant \theta that makes the two formalisms approximately equivalent.

3. In the Getting started section, it is mentioned that there are no self-interactions (autaptic connections). Don't forget to apply that condition in your simulations and keep the constraint in all sections of the project.
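The no-self-interaction constraint amounts to a zero diagonal in the weight matrix. A small sketch, assuming the standard Hebbian rule with ±1 patterns of shape (P, N) (the function name and normalization are my choice, not necessarily the project's):

```python
import numpy as np

def hopfield_weights(patterns):
    """Hebbian weight matrix from ±1 patterns of shape (P, N),
    with the no-self-interaction constraint w_ii = 0 enforced."""
    P, N = patterns.shape
    W = patterns.T @ patterns / N   # sum over the pattern index mu
    np.fill_diagonal(W, 0.0)        # no autaptic connections
    return W
```

Enforcing the zero diagonal once, at construction time, keeps the constraint satisfied automatically in every later section.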

4. Note that in question 3.4, you are asked to search over smaller dictionary sizes; these can be as small as 1-10 patterns. Also in 3.4, pay attention to neurons that receive zero input in your update function. The sign function returns 0 for these inputs, which leads to S_i = 0 or \sigma_i = 0.5, neither of which is a valid state. You may want to set the updated state of such neurons to \sigma_i = 0 to stay closer to the original state (since we are in the low-activity limit, most neurons are 0).
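One way to implement this suggestion in the 0/1 formalism is to route zero input to the \sigma_i = 0 branch of the update. A minimal sketch (the function name and synchronous update scheme are my assumptions):

```python
import numpy as np

def update_low_activity(W, sigma, theta=0.0):
    """One synchronous update in the 0/1 (low-activity) formalism.

    An input of exactly zero would give sign(0) = 0, i.e. sigma = 0.5,
    which is not a valid state. Here h == 0 deliberately falls into the
    0 branch, so such neurons are set to sigma = 0 (closest to the
    low-activity background where most neurons are 0).
    """
    h = W @ sigma - theta
    return np.where(h > 0, 1.0, 0.0)  # strict inequality: h == 0 -> 0
```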

5. In 3.6 and onwards, you don't need to use a 10% error threshold. If you do, run a sanity check to make sure it doesn't accept falsely retrieved patterns. To be safe, it is better to use the default 5% error threshold.
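The retrieval check described above can be sketched as a simple error-fraction comparison (the helper name and signature are my own; adapt it to however your code represents states):

```python
import numpy as np

def is_retrieved(state, pattern, max_err=0.05):
    """A pattern counts as retrieved if at most a fraction max_err of
    neurons differ between the final state and the stored pattern.

    Sanity-check motivation: with a loose 10% threshold, a spurious state
    close to (but not equal to) the pattern can pass; the stricter 5%
    default makes false retrievals less likely.
    """
    return np.mean(state != pattern) <= max_err
```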