Numenta Platform for Intelligent Computing is an implementation of Hierarchical Temporal Memory (HTM), a theory of intelligence based strictly on the neuroscience of the neocortex.
I followed the wiki documentation on running NuPIC within docker to the letter, and the steps listed there cause NuPIC to throw an error.
The environment is Debian 8, Linux 4.1.1, with the AUFS storage driver.
Steps I took:
1) clean install of docker (successful):
# apt-get install lxc-docker-1.9.1
2) run MySQL docker container (successful):
$ docker run --name nupic-mysql -e MYSQL_ROOT_PASSWORD=nupic -p 3306:3306 -d mysql:5.6
$ docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED             STATUS             PORTS                    NAMES
9ca22d18793e   mysql:5.6   "/entrypoint.sh mysql"   About an hour ago   Up About an hour   0.0.0.0:3306->3306/tcp   nupic-mysql
3) run nupic docker container (successful):
$ docker run --name nupic -e NTA_CONF_PROP_nupic_cluster_database_passwd=nupic -e NTA_CONF_PROP_nupic_cluster_database_host=mysql --link nupic-mysql:mysql -ti numenta/nupic
$ docker ps
CONTAINER ID   IMAGE           COMMAND                  CREATED             STATUS             PORTS                    NAMES
75a8e86a5c86   numenta/nupic   "/bin/bash"              About an hour ago   Up 2 seconds                                nupic
9ca22d18793e   mysql:5.6       "/entrypoint.sh mysql"   About an hour ago   Up About an hour   0.0.0.0:3306->3306/tcp   nupic-mysql
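To rule out a connectivity problem between the two containers, a quick check like the sketch below can be run from a Python shell inside the nupic container (it assumes the pymysql package, which I believe NuPIC's database layer uses):

# Sanity check from inside the nupic container: confirm the linked MySQL
# container is reachable under the hostname "mysql" with the root password
# set above.
import pymysql

conn = pymysql.connect(host="mysql", user="root", passwd="nupic")
print(conn.get_server_info())  # expect a 5.6.x server version string
conn.close()

If this fails, the swarm workers would not be able to reach the job database either.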
4) run swarming example inside docker container (fail):
# ${NUPIC}/scripts/run_swarm.py ${NUPIC}/examples/swarm/simple/search_def.json --maxWorkers=4
Generating experiment files in directory: /usr/local/src/nupic/examples/swarm/simple...
Writing 313 lines...
Writing 114 lines...
done.
None
Successfully submitted new HyperSearch job, jobID=1000
##>> UPDATED WORKER STATE:
{ u'activeSwarms': [ u'modelParams|sensorParams|encoders|consumption',
                     u'modelParams|sensorParams|encoders|timestamp_dayOfWeek',
                     u'modelParams|sensorParams|encoders|timestamp_timeOfDay',
                     u'modelParams|sensorParams|encoders|timestamp_weekend'],
  u'blackListedEncoders': [],
  u'lastGoodSprint': None,
  u'lastUpdateTime': 1448208642.176781,
  u'searchOver': False,
  u'sprints': [ { u'bestErrScore': None,
                  u'bestModelId': None,
                  u'status': u'active'}],
  u'swarms': { u'modelParams|sensorParams|encoders|consumption': { u'bestErrScore': None,
                                                                   u'bestModelId': None,
                                                                   u'sprintIdx': 0,
                                                                   u'status': u'active'},
               u'modelParams|sensorParams|encoders|timestamp_dayOfWeek': { u'bestErrScore': None,
                                                                           u'bestModelId': None,
                                                                           u'sprintIdx': 0,
                                                                           u'status': u'active'},
               u'modelParams|sensorParams|encoders|timestamp_timeOfDay': { u'bestErrScore': None,
                                                                           u'bestModelId': None,
                                                                           u'sprintIdx': 0,
                                                                           u'status': u'active'},
               u'modelParams|sensorParams|encoders|timestamp_weekend': { u'bestErrScore': None,
                                                                         u'bestModelId': None,
                                                                         u'sprintIdx': 0,
                                                                         u'status': u'active'}}}
Evaluated 4 models
HyperSearch finished!
Worker completion message: None
Results from all experiments:
----------------------------------------------------------------
Generating experiment files in directory: /tmp/tmp4NzXrQ...
Writing 313 lines...
Writing 114 lines...
done.
None
json.loads(jobInfo.results) raised an exception. Here is some info to help with debugging:
jobInfo: _jobInfoNamedTuple(jobId=1000, client=u'GRP', clientInfo=u'', clientKey=u'', cmdLine=u'$HYPERSEARCH', params=u'{"hsVersion": "v2", "maxModels": null, "persistentJobGUID": "9346a1d8-9133-11e5-96eb-0242ac110003", "useTerminators": false, "description": {"includedFields": [{"fieldName": "timestamp", "fieldType": "datetime"}, {"fieldName": "consumption", "fieldType": "float"}], "streamDef": {"info": "test", "version": 1, "streams": [{"info": "hotGym.csv", "source": "file://extra/hotgym/hotgym.csv", "columns": ["*"], "last_record": 100}], "aggregation": {"seconds": 0, "fields": [["consumption", "sum"], ["gym", "first"], ["timestamp", "first"]], "months": 0, "days": 0, "years": 0, "hours": 1, "microseconds": 0, "weeks": 0, "minutes": 0, "milliseconds": 0}}, "inferenceType": "MultiStep", "inferenceArgs": {"predictionSteps": [1], "predictedField": "consumption"}, "iterationCount": -1, "swarmSize": "medium"}}', jobHash='\x93F\xad\xcc\x913\x11\xe5\x96\xeb\x02B\xac\x11\x00\x03', status=u'notStarted', completionReason=None, completionMsg=None, workerCompletionReason=u'success', workerCompletionMsg=None, cancel=0, startTime=None, endTime=None, results=None, engJobType=u'hypersearch', minimumWorkers=1, maximumWorkers=4, priority=0, engAllocateNewWorkers=1, engUntendedDeadWorkers=0, numFailedWorkers=0, lastFailedWorkerErrorMsg=None, engCleaningStatus=u'notdone', genBaseDescription=u'# ----------------------------------------------------------------------\n# Numenta Platform for Intelligent Computing (NuPIC)\n# Copyright (C) 2013, Numenta, Inc. Unless you have an agreement\n# with Numenta, Inc., for a separate license for this software code, the\n# following terms and conditions apply:\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero Public License version 3 as\n# published by the Free Software Foundation.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n# See the GNU Affero Public License for more details.\n#\n# You should have received a copy of the GNU Affero Public License\n# along with this program. 
If not, see http://www.gnu.org/licenses.\n#\n# http://numenta.org/licenses/\n# ----------------------------------------------------------------------\n\n"""\nTemplate file used by the OPF Experiment Generator to generate the actual\ndescription.py file by replacing $XXXXXXXX tokens with desired values.\n\nThis description.py file was generated by:\n\'/usr/local/lib/python2.7/dist-packages/nupic-0.3.6.dev0-py2.7.egg/nupic/swarming/exp_generator/ExpGenerator.pyc\'\n"""\n\nfrom nupic.frameworks.opf.expdescriptionapi import ExperimentDescriptionAPI\n\nfrom nupic.frameworks.opf.expdescriptionhelpers import (\n updateConfigFromSubConfig,\n applyValueGettersToContainer\n )\n\nfrom nupic.frameworks.opf.clamodelcallbacks import *\nfrom nupic.frameworks.opf.metrics import MetricSpec\nfrom nupic.swarming.hypersearch.experimentutils import (InferenceType,\n InferenceElement)\nfrom nupic.support import aggregationDivide\n\nfrom nupic.frameworks.opf.opftaskdriver import (\n IterationPhaseSpecLearnOnly,\n IterationPhaseSpecInferOnly,\n IterationPhaseSpecLearnAndInfer)\n\n\n# Model Configuration Dictionary:\n#\n# Define the model parameters and adjust for any modifications if imported\n# from a sub-experiment.\n#\n# These fields might be modified by a sub-experiment; this dict is passed\n# between the sub-experiment and base experiment\n#\n#\nconfig = {\n # Type of model that the rest of these parameters apply to.\n \'model\': "CLA",\n\n # Version that specifies the format of the config.\n \'version\': 1,\n\n # Intermediate variables used to compute fields in modelParams and also\n # referenced from the control section.\n \'aggregationInfo\': { \'days\': 0,\n \'fields\': [ (u\'timestamp\', \'first\'),\n (u\'gym\', \'first\'),\n (u\'consumption\', \'sum\')],\n \'hours\': 1,\n \'microseconds\': 0,\n \'milliseconds\': 0,\n \'minutes\': 0,\n \'months\': 0,\n \'seconds\': 0,\n \'weeks\': 0,\n \'years\': 0},\n \'predictAheadTime\': None,\n\n # Model parameter dictionary.\n \'modelParams\': {\n # The type of inference that this model will perform\n \'inferenceType\': \'TemporalMultiStep\',\n\n \'sensorParams\': {\n # Sensor diagnostic output verbosity control;\n # if > 0: sensor region will print out on screen what it\'s sensing\n # at each step 0: silent; >=1: some info; >=2: more info;\n # >=3: even more info (see compute() in py/regions/RecordSensor.py)\n \'verbosity\' : 0,\n\n # Example:\n # \'encoders\': {\'field1\': {\'fieldname\': \'field1\', \'n\':100,\n # \'name\': \'field1\', \'type\': \'AdaptiveScalarEncoder\',\n # \'w\': 21}}\n #\n \'encoders\': {\n u\'timestamp_timeOfDay\': { \'fieldname\': u\'timestamp\',\n \'name\': u\'timestamp_timeOfDay\',\n \'timeOfDay\': (21, 1),\n \'type\': \'DateEncoder\'},\n u\'timestamp_dayOfWeek\': { \'dayOfWeek\': (21, 1),\n \'fieldname\': u\'timestamp\',\n \'name\': u\'timestamp_dayOfWeek\',\n \'type\': \'DateEncoder\'},\n u\'timestamp_weekend\': { \'fieldname\': u\'timestamp\',\n \'name\': u\'timestamp_weekend\',\n \'type\': \'DateEncoder\',\n \'weekend\': 21},\n u\'consumption\': { \'clipInput\': True,\n \'fieldname\': u\'consumption\',\n \'n\': 100,\n \'name\': u\'consumption\',\n \'type\': \'AdaptiveScalarEncoder\',\n \'w\': 21},\n \'_classifierInput\': { \'classifierOnly\': True,\n \'clipInput\': True,\n \'fieldname\': u\'consumption\',\n \'n\': 100,\n \'name\': \'_classifierInput\',\n \'type\': \'AdaptiveScalarEncoder\',\n \'w\': 21},\n },\n\n # A dictionary specifying the period for automatically-generated\n # resets from a RecordSensor;\n #\n # None = disable 
automatically-generated resets (also disabled if\n # all of the specified values evaluate to 0).\n # Valid keys is the desired combination of the following:\n # days, hours, minutes, seconds, milliseconds, microseconds, weeks\n #\n # Example for 1.5 days: sensorAutoReset = dict(days=1,hours=12),\n #\n # (value generated from SENSOR_AUTO_RESET)\n \'sensorAutoReset\' : None,\n },\n\n \'spEnable\': True,\n\n \'spParams\': {\n # Spatial pooler implementation to use. \n # Options: "py" (slow, good for debugging), and "cpp" (optimized).\n \'spatialImp\': \'cpp\',\n\n # SP diagnostic output verbosity control;\n # 0: silent; >=1: some info; >=2: more info;\n \'spVerbosity\' : 0,\n\n \'globalInhibition\': 1,\n\n # Number of cell columns in the cortical region (same number for\n # SP and TP)\n # (see also tpNCellsPerCol)\n \'columnCount\': 2048,\n\n \'inputWidth\': 0,\n\n # SP inhibition control (absolute value);\n # Maximum number of active columns in the SP region\'s output (when\n # there are more, the weaker ones are suppressed)\n \'numActiveColumnsPerInhArea\': 40,\n\n \'seed\': 1956,\n\n # potentialPct\n # What percent of the columns\'s receptive field is available\n # for potential synapses. \n \'potentialPct\': 0.8,\n\n # The default connected threshold. Any synapse whose\n # permanence value is above the connected threshold is\n # a "connected synapse", meaning it can contribute to the\n # cell\'s firing. Typical value is 0.10. Cells whose activity\n # level before inhibition falls below minDutyCycleBeforeInh\n # will have their own internal synPermConnectedCell\n # threshold set below this default value.\n # (This concept applies to both SP and TP and so \'cells\'\n # is correct here as opposed to \'columns\')\n \'synPermConnected\': 0.1,\n\n \'synPermActiveInc\': 0.05,\n\n \'synPermInactiveDec\': 0.0005,\n \n \'maxBoost\': 2.0\n },\n\n # Controls whether TP is enabled or disabled;\n # TP is necessary for making temporal predictions, such as predicting\n # the next inputs. 
Without TP, the model is only capable of\n # reconstructing missing sensor inputs (via SP).\n \'tpEnable\' : True,\n\n \'tpParams\': {\n # TP diagnostic output verbosity control;\n # 0: silent; [1..6]: increasing levels of verbosity\n # (see verbosity in nupic/trunk/py/nupic/research/TP.py and TP10X*.py)\n \'verbosity\': 0,\n\n # Number of cell columns in the cortical region (same number for\n # SP and TP)\n # (see also tpNCellsPerCol)\n \'columnCount\': 2048,\n\n # The number of cells (i.e., states), allocated per column.\n \'cellsPerColumn\': 32,\n\n \'inputWidth\': 2048,\n\n \'seed\': 1960,\n\n # Temporal Pooler implementation selector (see _getTPClass in\n # CLARegion.py).\n \'temporalImp\': \'cpp\',\n\n # New Synapse formation count\n # NOTE: If None, use spNumActivePerInhArea\n \'newSynapseCount\': 20,\n\n # Maximum number of synapses per segment\n \'maxSynapsesPerSegment\': 32,\n\n # Maximum number of segments per cell\n \'maxSegmentsPerCell\': 128,\n\n # Initial Permanence\n \'initialPerm\': 0.21,\n\n # Permanence Increment\n \'permanenceInc\': 0.1,\n\n # Permanence Decrement\n # If set to None, will automatically default to tpPermanenceInc\n # value.\n \'permanenceDec\' : 0.1,\n\n \'globalDecay\': 0.0,\n\n \'maxAge\': 0,\n\n # Minimum number of active synapses for a segment to be considered\n # during search for the best-matching segments.\n # None=use default\n # Replaces: tpMinThreshold\n \'minThreshold\': 12,\n\n # Segment activation threshold.\n # A segment is active if it has >= tpSegmentActivationThreshold\n # connected synapses that are active due to infActiveState\n # None=use default\n # Replaces: tpActivationThreshold\n \'activationThreshold\': 16,\n\n \'outputType\': \'normal\',\n\n # "Pay Attention Mode" length. This tells the TP how many new\n # elements to append to the end of a learned sequence at a time.\n # Smaller values are better for datasets with short sequences,\n # higher values are better for datasets with long sequences.\n \'pamLength\': 1,\n },\n\n \'clParams\': {\n \'regionName\' : \'CLAClassifierRegion\',\n \n # Classifier diagnostic output verbosity control;\n # 0: silent; [1..6]: increasing levels of verbosity\n \'clVerbosity\' : 0,\n\n # This controls how fast the classifier learns/forgets. Higher values\n # make it adapt faster and forget older patterns faster.\n \'alpha\': 0.001,\n\n # This is set after the call to updateConfigFromSubConfig and is\n # computed from the aggregationInfo and predictAheadTime.\n \'steps\': \'1\',\n },\n\n \'anomalyParams\': { u\'anomalyCacheRecords\': None,\n u\'autoDetectThreshold\': None,\n u\'autoDetectWaitRecords\': None},\n\n \'trainSPNetOnlyIfRequested\': False,\n },\n}\n# end of config dictionary\n\n\n# Adjust base config dictionary for any modifications if imported from a\n# sub-experiment\nupdateConfigFromSubConfig(config)\n\n\n# Compute predictionSteps based on the predictAheadTime and the aggregation\n# period, which may be permuted over.\nif config[\'predictAheadTime\'] is not None:\n predictionSteps = int(round(aggregationDivide(\n config[\'predictAheadTime\'], config[\'aggregationInfo\'])))\n assert (predictionSteps >= 1)\n config[\'modelParams\'][\'clParams\'][\'steps\'] = str(predictionSteps)\n\n\n# Adjust config by applying ValueGetterBase-derived\n# futures. 
NOTE: this MUST be called after updateConfigFromSubConfig() in order\n# to support value-getter-based substitutions from the sub-experiment (if any)\napplyValueGettersToContainer(config)\n\n\n\ncontrol = {\n # The environment that the current model is being run in\n "environment": \'nupic\',\n\n # Input stream specification per py/nupic/frameworks/opf/jsonschema/stream_def.json.\n #\n \'dataset\' : { \'aggregation\': config[\'aggregationInfo\'],\n u\'info\': u\'test\',\n u\'streams\': [ { u\'columns\': [u\'*\'],\n u\'info\': u\'hotGym.csv\',\n u\'last_record\': 100,\n u\'source\': u\'file://extra/hotgym/hotgym.csv\'}],\n u\'version\': 1},\n\n # Iteration count: maximum number of iterations. Each iteration corresponds\n # to one record from the (possibly aggregated) dataset. The task is\n # terminated when either number of iterations reaches iterationCount or\n # all records in the (possibly aggregated) database have been processed,\n # whichever occurs first.\n #\n # iterationCount of -1 = iterate over the entire dataset\n \'iterationCount\' : -1,\n\n\n # A dictionary containing all the supplementary parameters for inference\n "inferenceArgs":{u\'inputPredictedField\': \'auto\',\n u\'predictedField\': u\'consumption\',\n u\'predictionSteps\': [1]},\n\n # Metrics: A list of MetricSpecs that instantiate the metrics that are\n # computed for this experiment\n \'metrics\':[\n MetricSpec(field=u\'consumption\', metric=\'multiStep\', inferenceElement=\'multiStepBestPredictions\', params={\'window\': 1000, \'steps\': [1], \'errorMetric\': \'aae\'}),\n MetricSpec(field=u\'consumption\', metric=\'multiStep\', inferenceElement=\'multiStepBestPredictions\', params={\'window\': 1000, \'steps\': [1], \'errorMetric\': \'altMAPE\'})\n ],\n\n # Logged Metrics: A sequence of regular expressions that specify which of\n # the metrics from the Inference Specifications section MUST be logged for\n # every prediction. The regex\'s correspond to the automatically generated\n # metric labels. This is similar to the way the optimization metric is\n # specified in permutations.py.\n \'loggedMetrics\': [\'.*\'],\n}\n\n\n\ndescriptionInterface = ExperimentDescriptionAPI(modelConfig=config,\n control=control)\n', genPermutations=u'# ----------------------------------------------------------------------\n# Numenta Platform for Intelligent Computing (NuPIC)\n# Copyright (C) 2013, Numenta, Inc. Unless you have an agreement\n# with Numenta, Inc., for a separate license for this software code, the\n# following terms and conditions apply:\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero Public License version 3 as\n# published by the Free Software Foundation.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n# See the GNU Affero Public License for more details.\n#\n# You should have received a copy of the GNU Affero Public License\n# along with this program. 
If not, see http://www.gnu.org/licenses.\n#\n# http://numenta.org/licenses/\n# ----------------------------------------------------------------------\n\n"""\nTemplate file used by ExpGenerator to generate the actual\npermutations.py file by replacing $XXXXXXXX tokens with desired values.\n\nThis permutations.py file was generated by:\n\'/usr/local/lib/python2.7/dist-packages/nupic-0.3.6.dev0-py2.7.egg/nupic/swarming/exp_generator/ExpGenerator.pyc\'\n"""\n\nimport os\n\nfrom nupic.swarming.permutationhelpers import *\n\n# The name of the field being predicted. Any allowed permutation MUST contain\n# the prediction field.\n# (generated from PREDICTION_FIELD)\npredictedField = \'consumption\'\n\n\n\n\npermutations = {\n \'aggregationInfo\': { \'days\': 0,\n \'fields\': [ (u\'timestamp\', \'first\'),\n (u\'gym\', \'first\'),\n (u\'consumption\', \'sum\')],\n \'hours\': 1,\n \'microseconds\': 0,\n \'milliseconds\': 0,\n \'minutes\': 0,\n \'months\': 0,\n \'seconds\': 0,\n \'weeks\': 0,\n \'years\': 0},\n\n \'modelParams\': {\n \'inferenceType\': PermuteChoices([\'NontemporalMultiStep\', \'TemporalMultiStep\']),\n\n \'sensorParams\': {\n \'encoders\': {\n u\'timestamp_timeOfDay\': PermuteEncoder(fieldName=\'timestamp\', encoderClass=\'DateEncoder.timeOfDay\', radius=PermuteFloat(0.5, 12), w=21, ),\n u\'timestamp_dayOfWeek\': PermuteEncoder(encoderClass=\'DateEncoder.dayOfWeek\', radius=PermuteFloat(1, 6), w=21, fieldName=\'timestamp\', ),\n u\'timestamp_weekend\': PermuteEncoder(encoderClass=\'DateEncoder.weekend\', radius=PermuteChoices([1]), w=21, fieldName=\'timestamp\', ),\n u\'consumption\': PermuteEncoder(fieldName=\'consumption\', w=21, clipInput=True, encoderClass=\'AdaptiveScalarEncoder\', n=PermuteInt(22, 521), ),\n \'_classifierInput\': dict(classifierOnly=True, fieldname=\'consumption\', w=21, clipInput=True, type=\'AdaptiveScalarEncoder\', n=PermuteInt(28, 521), ),\n },\n },\n\n \'spParams\': {\n \'synPermInactiveDec\': PermuteFloat(0.0003, 0.1),\n\n },\n\n \'tpParams\': {\n \'activationThreshold\': PermuteInt(12, 16),\n \'minThreshold\': PermuteInt(9, 12),\n \'pamLength\': PermuteInt(1, 5),\n\n },\n\n \'clParams\': {\n \'alpha\': PermuteFloat(0.0001, 0.1),\n\n },\n }\n}\n\n\n# Fields selected for final hypersearch report;\n# NOTE: These values are used as regular expressions by RunPermutations.py\'s\n# report generator\n# (fieldname values generated from PERM_PREDICTED_FIELD_NAME)\nreport = [\n \'.*consumption.*\',\n ]\n\n# Permutation optimization setting: either minimize or maximize metric\n# used by RunPermutations.\n# NOTE: The value is used as a regular expressions by RunPermutations.py\'s\n# report generator\n# (generated from minimize = "multiStepBestPredictions:multiStep:errorMetric=\'altMAPE\':steps=\\[1\\]:window=1000:field=consumption")\nminimize = "multiStepBestPredictions:multiStep:errorMetric=\'altMAPE\':steps=\\[1\\]:window=1000:field=consumption"\n\nminParticlesPerSwarm = 5\n\ninputPredictedField = \'auto\'\n\n\n\n\n\n\n\nmaxModels = 200\n\n\n\ndef permutationFilter(perm):\n """ This function can be used to selectively filter out specific permutation\n combinations. It is called by RunPermutations for every possible permutation\n of the variables in the permutations dict. 
It should return True for valid a\n combination of permutation values and False for an invalid one.\n\n Parameters:\n ---------------------------------------------------------\n perm: dict of one possible combination of name:value\n pairs chosen from permutations.\n """\n\n # An example of how to use this\n #if perm[\'__consumption_encoder\'][\'maxval\'] > 300:\n # return False;\n #\n return True\n', engLastUpdateTime=datetime.datetime(2015, 11, 22, 16, 10, 42), engCjmConnId=None, engWorkerState=u'{"sprints": [{"status": "active", "bestModelId": null, "bestErrScore": null}], "searchOver": false, "blackListedEncoders": [], "lastUpdateTime": 1448208642.176781, "activeSwarms": ["modelParams|sensorParams|encoders|consumption", "modelParams|sensorParams|encoders|timestamp_dayOfWeek", "modelParams|sensorParams|encoders|timestamp_timeOfDay", "modelParams|sensorParams|encoders|timestamp_weekend"], "swarms": {"modelParams|sensorParams|encoders|consumption": {"status": "active", "bestModelId": null, "sprintIdx": 0, "bestErrScore": null}, "modelParams|sensorParams|encoders|timestamp_dayOfWeek": {"status": "active", "bestModelId": null, "sprintIdx": 0, "bestErrScore": null}, "modelParams|sensorParams|encoders|timestamp_timeOfDay": {"status": "active", "bestModelId": null, "sprintIdx": 0, "bestErrScore": null}, "modelParams|sensorParams|encoders|timestamp_weekend": {"status": "active", "bestModelId": null, "sprintIdx": 0, "bestErrScore": null}}, "lastGoodSprint": null}', engStatus=None, engModelMilestones=None)
jobInfo.results: None
EXCEPTION: expected string or buffer
Traceback (most recent call last):
  File "/usr/local/src/nupic/scripts/run_swarm.py", line 187, in <module>
    runPermutations(sys.argv[1:])
  File "/usr/local/src/nupic/scripts/run_swarm.py", line 178, in runPermutations
    fileArgPath, optionsDict, outputLabel, permWorkDir)
  File "/usr/local/lib/python2.7/dist-packages/nupic-0.3.6.dev0-py2.7.egg/nupic/swarming/permutations_runner.py", line 310, in runWithJsonFile
    verbosity=verbosity)
  File "/usr/local/lib/python2.7/dist-packages/nupic-0.3.6.dev0-py2.7.egg/nupic/swarming/permutations_runner.py", line 277, in runWithConfig
    return _runAction(runOptions)
  File "/usr/local/lib/python2.7/dist-packages/nupic-0.3.6.dev0-py2.7.egg/nupic/swarming/permutations_runner.py", line 218, in _runAction
    returnValue = _runHyperSearch(runOptions)
  File "/usr/local/lib/python2.7/dist-packages/nupic-0.3.6.dev0-py2.7.egg/nupic/swarming/permutations_runner.py", line 161, in _runHyperSearch
    metricsKeys=search.getDiscoveredMetricsKeys())
  File "/usr/local/lib/python2.7/dist-packages/nupic-0.3.6.dev0-py2.7.egg/nupic/swarming/permutations_runner.py", line 828, in generateReport
    results = json.loads(jobInfo.results)
  File "/usr/local/lib/python2.7/dist-packages/nupic-0.3.6.dev0-py2.7.egg/nupic/swarming/object_json.py", line 163, in loads
    json.loads(s, object_hook=objectDecoderHook, **kwargs))
  File "/usr/lib/python2.7/json/__init__.py", line 351, in loads
    return cls(encoding=encoding, **kw).decode(s)
  File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer
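The immediate cause seems clear from the jobInfo dump above: jobInfo.results is None (the job status is still u'notStarted'), and under Python 2 json.loads() rejects None with exactly this TypeError. A minimal reproduction:

# Python 2.7: json.loads() requires a string; passing None, which is what
# jobInfo.results is here, raises the same TypeError as in the traceback.
import json

try:
    json.loads(None)
except TypeError as e:
    print(e)  # expected string or buffer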
Also, from inside the same docker container:
# cat $NUPIC/VERSION
0.3.6.dev0
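Based on that, the crash itself could presumably be papered over with a guard around the failing call in generateReport() (permutations_runner.py, line 828 in the traceback). This is only a hypothetical sketch, and it would merely hide the real problem, which seems to be that the job never leaves the 'notStarted' state:

# Hypothetical guard in nupic/swarming/permutations_runner.py,
# generateReport(), around line 828: skip JSON decoding when the job
# produced no results instead of crashing with a TypeError.
if jobInfo.results is not None:
    results = json.loads(jobInfo.results)
else:
    results = None
    print("WARNING: job %d has no results (status=%s)"
          % (jobInfo.jobId, jobInfo.status))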
Let me know if you need more info. Please inform me of possible workarounds.