Thanh-Binh opened 7 years ago
Hi there. In the ball demo the encode function receives 2 inputs: 1) Stimulae 0: a binary vector representing the current scene state (the green ball and black background), and 2) Stimulae 1: a binary vector representing the previous neuron activations.
Scalar values like the ball's (x, y) position and (x, y) velocity used in the ball scene simulation are not fed into the learning algorithm.
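For illustration, here is a minimal sketch (my own, not the demo's actual code) of what Stimulae 0 looks like: a flattened 100x100 image stored row-major, with a 1 at each ball pixel. Stimulae 1 would simply be a zero-filled vector of length `numNeurons` on the first time step. The function name and single-pixel ball are hypothetical.

```cpp
#include <cstddef>
#include <vector>

// Sketch of Stimulae 0: a flattened width x height binary scene,
// row-major, with a single illustrative ball pixel set to 1.
std::vector<unsigned char> makeSceneState(std::size_t width, std::size_t height,
                                          std::size_t ballX, std::size_t ballY)
{
    std::vector<unsigned char> scene(width * height, 0);
    scene[ballY * width + ballX] = 1;  // mark the ball pixel
    return scene;
}
```

A 100x100 scene produced this way has 10,000 elements, matching the vector size discussed below.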
Please let me know if you have any more questions.
Dave
I really do not understand, because I come to your implementation from my knowledge of NuPIC, where encode converts data, e.g. a scalar value. It would be very nice of you to give me some hints on how to use your software for predicting a sine wave. At each time instance you have a float value, and we want to predict multiple steps into the future!
Ah, I can see where the confusion comes from. Nupic "encode" converts scalar data to a binary vector, then observes and learns the data through Spatial Pooling and Temporal Memory. Simple Cortex "encode" is essentially Spatial Pooling and Temporal Memory combined into one function that converts binary vectors to neuron activations. I used "encode" because there is a corresponding "decode" function which converts neuron activations back to binary vectors.
I am working on a video that will explain my algorithms. Hopefully it will help!
I think it is a big source of confusion, not only for me, because you use the same terms as NuPIC but with different meanings. Would it be possible for you to use the same terms when you explain? How do you convert the ball position/velocity into binary? Do you use the whole image, with the ball as one pixel? That would mean your binary vector has only one bit = 1 at the ball position, and all the rest are zeros?
My algorithm terminology for "encode", "learn", "predict", "decode" comes more from Ogma than HTM, but I certainly can help make the comparisons:
- HTM Encode = SC "Set Stimulae"
- HTM Spatial Pooling = SC "Encode" and "Learn" for Forest 0. All dendrites in Forest 0 represent proximal dendrites in HTM.
- HTM Temporal Memory = SC "Encode" and "Learn" for Forest 1 as well as "Predict". All dendrites in Forest 1 represent distal dendrites in HTM.
- HTM Classifier = SC "Decode"
To answer your 2nd question: if you'd like to make use of the ball position and ball velocity scalar values, you'd have to convert them to a binary vector manually. Eventually I will code a scalar-to-binary-vector converter.
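A manual converter of that kind can be quite small. Here is a sketch in the spirit of NuPIC's ScalarEncoder (my own illustration, not part of Simple Cortex): it maps a value in [minVal, maxVal] to a contiguous run of `w` active bits inside a vector of `n` elements.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Minimal scalar-to-binary-vector converter (illustrative sketch).
// `value` is clamped to [minVal, maxVal], then mapped to a run of `w`
// consecutive 1s whose start position slides across the `n` columns.
std::vector<unsigned char> encodeScalar(float value, float minVal, float maxVal,
                                        std::size_t n, std::size_t w)
{
    std::vector<unsigned char> sdr(n, 0);
    float clamped = std::max(minVal, std::min(maxVal, value));
    float frac = (clamped - minVal) / (maxVal - minVal);
    std::size_t start = static_cast<std::size_t>(frac * (n - w) + 0.5f);
    for (std::size_t i = 0; i < w; i++)
        sdr[start + i] = 1;
    return sdr;
}
```

For a sine wave you could call, e.g., `encodeScalar(std::sin(t), -1.0f, 1.0f, 2048, 40)` each time step; nearby values produce overlapping runs of 1s, which is the property the learning algorithm relies on.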
To answer your 3rd question: Stimulae 0 is the entire image stored in a binary vector. The image is 100x100 pixels, so the vector has 10,000 elements. The ball (green pixels) takes up about 49 pixels in the image, so there will be about 49 1s in the vector and the rest are 0s.
Aha. I understand a little better, because I know both HTM and Ogma.... So I have to convert my input data into a binary vector myself.
Do you think we can use an HTM encoder like ScalarEncoder with your framework?
Yes, I think any HTM encoder will work as long as you can get the datatype as a std::vector
Interesting ... very good ...
I have just checked your algorithm again, and found that the current version has no guarantee that after decoding the number of pixels with a higher intensity remains 49, here
std::vector
I have just tested with an HTM scalar encoder with 2048 columns and 40 active bits:
// setup encode
const uint nInputCols = 2048;
const uint nActiveInputCols = 20;
// Setup OpenCL
ComputeSystem cs;
ComputeProgram cp;
cs.init(ComputeSystem::_gpu);
cs.printCLInfo();
std::string kernels_cl = "source/cortex/behavior.cl";
cp.loadFromFile(cs, kernels_cl);
// Setup Simple Cortex Area
unsigned int numStimuli = 4;
unsigned int numForests = 2;
unsigned int numNeurons = 1500000; // 1,500,000 recommended maximum
std::vector<Stimuli> vecStimuli(numStimuli);
vecStimuli[0].init(cs, nInputCols); // input - current binary scene state
vecStimuli[1].init(cs, numNeurons); // input - previous neuron activations
vecStimuli[2].init(cs, numNeurons); // input - storage of current neuron activations for forecasting
vecStimuli[3].init(cs, nInputCols); // output - predicted future binary scene state
std::vector<Forest> vecForest(numForests);
vecForest[0].init(cs, cp, numNeurons, 50, 0.25f);
vecForest[1].init(cs, cp, numNeurons, 1, 1.00f);
Area area;
area.init(cs, cp, numNeurons);
std::vector<unsigned char> vecResetNeurons(numNeurons);
vecResetNeurons[numNeurons - 1] = 1;
vecStimuli[1].setStates(cs, vecResetNeurons);
...here is the loop:
std::vector<unsigned char> inputSDR_; // filled with the scalar encoder output each step
vecStimuli[0].setStates(cs, inputSDR_);
area.encode(cs, {vecStimuli[0], vecStimuli[1]}, {vecForest[0], vecForest[1]});
if (learnIn)
area.learn(cs, {vecStimuli[0], vecStimuli[1]}, {vecForest[0], vecForest[1]});
vecStimuli[1].setStates(cs, area.getStates(cs));
// Forecast 1 time step into the future
std::vector<unsigned char> predictionSDR_;
for (unsigned int i = 0; i < 1; i++)
{
vecStimuli[2].setStates(cs, area.getStates(cs));
area.predict(cs, {vecStimuli[2]}, {vecForest[1]});
area.decode(cs, {vecStimuli[3]}, {vecForest[0]});
predictionSDR_ = vecStimuli[3].getStates(cs);
}
and found that the size of predictionSDR_ is always 0. Did I do something wrong?
Any idea how to get SC to predict?
Hi, sorry for the delayed reply. I think the issue is that when the predictionSDR_ vector was initialized, it was never given a size and defaulted to 0. Try changing that line to:
std::vector<unsigned char> predictionSDR_(nInputCols);
I do not think so, because this variable is assigned via predictionSDR_ = vecStimuli[3].getStates(cs); so there is no need to initialize it. However, I tested with the initialization as you suggested. It does not help, as I expected.
I hope @ddigiorg can convince us about the prediction ability of SC!
Could you please explain the input data for encode in the ball demo? The position and velocity of the ball in a vector? Thanks