First, compile the project (to avoid potential slowdowns from using go run):
go build .
Then, run the resulting binary:
./hopfield
Note that much of the network's functionality is determined by command-line arguments given at runtime. Use ./hopfield -h to see a list of these.
Data on the run is saved to the specified directory (default: data/trialdata), which contains a collection of Parquet files covering different aspects of the Hopfield network's behavior. See the section on Data Files.
networkSummary.pq
Defines data on the network, independent of epochs, target states, probes, etc. This is effectively metadata on the trial.
NetworkDimension
LearningRule
Epochs
LearningNoiseMethod
LearningNoiseScale
UnitsUpdated
AsymmetricWeightMatrix
Threads
TargetStates
ProbeStates
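To make the schema above concrete, here is a hypothetical Go struct mirroring the columns of networkSummary.pq; the field types are assumptions for illustration, not the project's actual schema, and the same pattern applies to the other Parquet files below.

```go
package main

import "fmt"

// NetworkSummary is a hypothetical row type for networkSummary.pq.
// Field types here are guesses; consult the actual file schema.
type NetworkSummary struct {
	NetworkDimension       int
	LearningRule           string
	Epochs                 int
	LearningNoiseMethod    string
	LearningNoiseScale     float64
	UnitsUpdated           int
	AsymmetricWeightMatrix bool
	Threads                int
	TargetStates           int
	ProbeStates            int
}

func main() {
	s := NetworkSummary{NetworkDimension: 100, Epochs: 50, Threads: 8}
	fmt.Printf("%d-unit network, %d epochs, %d threads\n",
		s.NetworkDimension, s.Epochs, s.Threads)
}
```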
learnStateData.pq
Collects data on the learning behavior of the network. In particular, measures properties of target states during the epochs of learning. Measured for every target state, for every epoch of training.
Epoch
TargetStateIndex
EnergyProfile
Stable
targetStateProbe.pq
Collects data on the target states after training. Measured after the network has trained in full.
TargetStateIndex
IsStable
State
EnergyProfile
relaxationResult.pq
Collects data on the results of relaxing probe states. Note that this covers only the final results and does not include any intermediate steps; see relaxationHistory.pq for those.
StateIndex
Stable
NumSteps
FinalState
DistancesToTargets (a []float64 indexed by TargetStateIndex)
EnergyProfile
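A hypothetical row type for relaxationResult.pq might look like the sketch below (types are assumptions); it also shows one natural use of DistancesToTargets, finding the stored target nearest to the relaxed state.

```go
package main

import "fmt"

// RelaxationResult is a hypothetical row type for relaxationResult.pq;
// the field types are assumptions, not the project's actual schema.
type RelaxationResult struct {
	StateIndex         int
	Stable             bool
	NumSteps           int
	FinalState         []float64
	DistancesToTargets []float64 // indexed by TargetStateIndex
	EnergyProfile      []float64
}

// nearestTarget returns the index of the smallest entry in distances.
func nearestTarget(distances []float64) int {
	best, bestDist := 0, distances[0]
	for i, d := range distances {
		if d < bestDist {
			best, bestDist = i, d
		}
	}
	return best
}

func main() {
	r := RelaxationResult{
		StateIndex:         0,
		Stable:             true,
		NumSteps:           12,
		DistancesToTargets: []float64{3.0, 0.5, 2.0},
	}
	fmt.Println("nearest target:", nearestTarget(r.DistancesToTargets))
}
```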
uniqueStates.pq
Like relaxationResult.pq, but records only strictly unique states.
StateIndex
Stable
NumSteps
FinalState
DistancesToTargets (a []float64 indexed by TargetStateIndex)
EnergyProfile
Hits
relaxationHistory.pq
Collects data on probe states at every step of relaxation. This involves a lot of data!
StateIndex
StepIndex
State
EnergyProfile
matrix.bin
A binary representation of the weight matrix after training.
targetStates.bin
A binary file consisting of a matrix. Each row in this matrix is a different target state for this trial.
This project is an investigation into implementing the Hopfield network (and some other supporting methods) in Go using gonum as a linear algebra backend. This project is intended to be clean and extensible, as well as blazing fast and scalable with CPU cores via threading.
In the future it may be interesting to port this project to a different backend, one that leverages linear algebra on CUDA to scale with the GPU instead.
Go was chosen for this project for the following reasons:
It was found to be fast (see the profiling and testing in this repository; be sure to check out the dashboard!)
TensorFlow was found to scale much better by leveraging the GPU, but keeping the code scalable required awkward vectorized methods that were prone to bugs.
Rust was found to scale nearly as well as Go on the CPU and has nicer memory safety. However, multithreading proved difficult, and the implementation did not continue far past the initial experiments. Check out the Rust implementation.
Go was found to scale slightly better on the CPU, and after the initial implementation the language proved a nicer fit. Higher-velocity development wins the day!