numenta / nupic-legacy

Numenta Platform for Intelligent Computing is an implementation of Hierarchical Temporal Memory (HTM), a theory of intelligence based strictly on the neuroscience of the neocortex.
http://numenta.org/
GNU Affero General Public License v3.0

make tests_run_all fails due to MySQL connectivity problem #883

Closed: yaitskov closed this issue 10 years ago

yaitskov commented 10 years ago

I followed the installation instructions at https://github.com/numenta/nupic/wiki/Installing-NuPIC-on-Ubuntu, then the next readme section at https://github.com/numenta/nupic#build-and-test-nupic.

Last commit is dd9e1f6.

All steps before make tests_run_all completed successfully (including the pip install and tests_everything).

I assume a MySQL login/password/address is required; this should be mentioned in the readme near tests_run_all.

Another option would be to use the SQLite RDBMS, which doesn't require any configuration.
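
For illustration only (a minimal sketch, not how NuPIC is currently wired): the swarming code reaches the database through DBUtils' PooledDB over a DB-API 2 module (pymysql, as the traceback below shows), and the same pooling layer can be pointed at sqlite3 with no server setup at all. The file path and table here are made up.

import sqlite3
from DBUtils.PooledDB import PooledDB

# Any DB-API 2 module can act as the "creator"; extra keyword arguments are
# passed through to its connect(). sqlite3 needs no server or credentials.
pool = PooledDB(sqlite3, maxconnections=5, database="/tmp/nupic_jobs.sqlite")

conn = pool.connection(shareable=False)
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS jobs (id INTEGER PRIMARY KEY, status TEXT)")
cur.execute("SELECT count(*) FROM jobs")
print cur.fetchone()[0]
conn.close()  # returns the connection to the pool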

(EDIT by @rhyolight: Removed complete log output because we know the problem.)

breznak commented 10 years ago

Hi Daneel,

Right, those tests require a DB (with no password for seamless operation, or with a login set somewhere; I'm not sure where). That is why there is tests_run, which skips the DB-dependent tests; that is the default behavior when you run make tests_all.

Actually, we now have an (optional) $NUPIC/.nupic_config file, so that would be the best place to set the DB login/pass too. Can you open an issue for that?
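
For reference, a quick smoke test through NuPIC's own connection layer (the usage is taken from the ConnectionFactory docstring quoted later in this thread) would look something like this; it reads the nupic.cluster.database.* settings, so whatever credentials end up in .nupic_config would be exercised the same way the DB-dependent tests exercise them:

# Acquire a pooled connection per the nupic.cluster.database.* settings and
# run a trivial query; release() is handled by the context manager.
from nupic.database.Connection import ConnectionFactory

with ConnectionFactory.get() as conn:
    conn.cursor.execute("SELECT 1")
    print conn.cursor.fetchall()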

Cheers, Mark

On Sun, Apr 27, 2014 at 1:27 PM, Daneel S. Yaitskov < notifications@github.com> wrote:


Here is the full log output:

Scanning dependencies of target tests_run_all
[100%] Python tests + swarming (requires DB)
============================= test session starts ==============================
platform linux2 -- Python 2.7.5 -- pytest-2.4.2 -- /usr/bin/python
plugins: cov, xdist
collecting ... collected 803 items / 3 skipped

tests/external/py2/asteval_test.py:32: TestCase.testImportAndVersions PASSED
tests/external/py2/testfixture_test.py:48: TestPytest.testSetUpModuleCalled PASSED
tests/integration/py2/nupic/algorithms/tp_likelihood_test.py:414: TPLikelihoodTest.testLikelihood1Long PASSED
tests/integration/py2/nupic/algorithms/tp_likelihood_test.py:410: TPLikelihoodTest.testLikelihood1Short PASSED
tests/integration/py2/nupic/algorithms/tp_likelihood_test.py:423: TPLikelihoodTest.testLikelihood2Long PASSED
tests/integration/py2/nupic/algorithms/tp_likelihood_test.py:419: TPLikelihoodTest.testLikelihood2Short PASSED
tests/integration/py2/nupic/algorithms/tp_overlapping_sequences_test.py:646: TPOverlappingSeqsTest.testFastLearning PASSED
tests/integration/py2/nupic/algorithms/tp_overlapping_sequences_test.py:803: TPOverlappingSeqsTest.testForbesLikeData SKIPPED
tests/integration/py2/nupic/algorithms/tp_overlapping_sequences_test.py:696: TPOverlappingSeqsTest.testSlowLearning PASSED
tests/integration/py2/nupic/algorithms/tp_overlapping_sequences_test.py:747: TPOverlappingSeqsTest.testSlowLearningWithOverlap SKIPPED
tests/integration/py2/nupic/algorithms/knn_classifier_test/categories_test.py:38: KNNCategoriesTest.testCategories PASSED
tests/integration/py2/nupic/algorithms/knn_classifier_test/classifier_test.py:193: KNNClassifierTest.testKNNClassifierMedium PASSED
tests/integration/py2/nupic/algorithms/knn_classifier_test/classifier_test.py:185: KNNClassifierTest.testKNNClassifierShort PASSED
tests/integration/py2/nupic/algorithms/knn_classifier_test/classifier_test.py:197: KNNClassifierTest.testPCAKNNMedium PASSED
tests/integration/py2/nupic/algorithms/knn_classifier_test/classifier_test.py:189: KNNClassifierTest.testPCAKNNShort PASSED
tests/integration/py2/nupic/data/aggregation_test.py:815: AggregationTests.test_AutoSpecialFields PASSED
tests/integration/py2/nupic/data/aggregation_test.py:705: AggregationTests.test_GapsInIrregularData PASSED
tests/integration/py2/nupic/data/aggregation_test.py:644: AggregationTests.test_GenerateDataset PASSED
tests/integration/py2/nupic/data/aggregation_test.py:600: AggregationTests.test_GymAggregate PASSED
tests/integration/py2/nupic/data/aggregation_test.py:548: AggregationTests.test_GymAggregateWithOldData PASSED
tests/integration/py2/nupic/data/aggregation_test.py:875: AggregationTests.test_WeightedMean PASSED
tests/integration/py2/nupic/engine/network_serialization_test.py:41: NetworkSerializationTest.testSerialization PASSED
tests/integration/py2/nupic/engine/network_twonode_test.py:134: NetworkTwoNodeTest.testLinkingDownwardDimensions PASSED
tests/integration/py2/nupic/engine/network_twonode_test.py:46: NetworkTwoNodeTest.testTwoNode PASSED
tests/integration/py2/nupic/engine/vector_file_sensor_test.py:85: VectorFileSensorTest.testAll PASSED
tests/integration/py2/nupic/opf/expgenerator_test.py:763: PositiveExperimentTests.test_Aggregation PASSED
tests/integration/py2/nupic/opf/expgenerator_test.py:1278: PositiveExperimentTests.test_AggregationSwarming PASSED
tests/integration/py2/nupic/opf/expgenerator_test.py:1673: PositiveExperimentTests.test_AnomalyParams PASSED
tests/integration/py2/nupic/opf/expgenerator_test.py:1202: PositiveExperimentTests.test_DeltaEncoders PASSED
tests/integration/py2/nupic/opf/expgenerator_test.py:1615: PositiveExperimentTests.test_FastSwarmModelParams PASSED
tests/integration/py2/nupic/opf/expgenerator_test.py:1559: PositiveExperimentTests.test_FixedFields PASSED
tests/integration/py2/nupic/opf/expgenerator_test.py:578: PositiveExperimentTests.test_IncludedFields PASSED
tests/integration/py2/nupic/opf/expgenerator_test.py:514: PositiveExperimentTests.test_Metrics PASSED
tests/integration/py2/nupic/opf/expgenerator_test.py:976: PositiveExperimentTests.test_MultiStep FAILED

=================================== FAILURES ===================================
____ PositiveExperimentTests.test_MultiStep ____
self =

def test_MultiStep(self):
  """ Test the we correctly generate a multi-step prediction experiment
    """

  # Form the stream definition
  dataPath = os.path.join(g_myEnv.datasetSrcDir, "hotgym", "hotgym.csv")
  streamDef = dict(
    version = 1,
    info = "test_NoProviders",
    streams = [
      dict(source="file://%s" % (dataPath),
           info="hotGym.csv",
           columns=["*"],
           last_record=20),
      ],
    aggregation = {
      'years': 0,
      'months': 0,
      'weeks': 0,
      'days': 0,
      'hours': 1,
      'minutes': 0,
      'seconds': 0,
      'milliseconds': 0,
      'microseconds': 0,
      'fields': [('consumption', 'sum'),
                 ('gym', 'first'),
                 ('timestamp', 'first')]
    }
 )

  # Generate the experiment description
  expDesc = {
    'environment':    OpfEnvironment.Grok,
    "inferenceArgs":{
      "predictedField":"consumption",
      "predictionSteps": [1, 5],
    },
    "inferenceType":  "MultiStep",
    "streamDef":      streamDef,
    "includedFields": [
      { "fieldName": "timestamp",
        "fieldType": "datetime"
      },
      { "fieldName": "consumption",
        "fieldType": "float",
      },
    ],
    "iterationCount": -1,
    "runBaselines": True,
  }

  # --------------------------------------------------------------------
  (base, perms) = self.getModules(expDesc)

  print "base.config['modelParams']:"
  pprint.pprint(base.config['modelParams'])
  print "perms.permutations"
  pprint.pprint(perms.permutations)
  print "perms.minimize"
  pprint.pprint(perms.minimize)
  print "expDesc"
  pprint.pprint(expDesc)

  # Make sure we have the expected info in the base description file
  self.assertEqual(base.control['inferenceArgs']['predictionSteps'],
                   expDesc['inferenceArgs']['predictionSteps'])
  self.assertEqual(base.control['inferenceArgs']['predictedField'],
                   expDesc['inferenceArgs']['predictedField'])
  self.assertEqual(base.config['modelParams']['inferenceType'],
                   "TemporalMultiStep")

  # Make sure there is a '_classifier_input' encoder with classifierOnly
  #  set to True
  self.assertEqual(base.config['modelParams']['sensorParams']['encoders']
                   ['_classifierInput']['classifierOnly'], True)
  self.assertEqual(base.config['modelParams']['sensorParams']['encoders']
                   ['_classifierInput']['fieldname'],
                   expDesc['inferenceArgs']['predictedField'])

  # And in the permutations file
  self.assertIn('inferenceType', perms.permutations['modelParams'])
  self.assertEqual(perms.minimize,
          "multiStepBestPredictions:multiStep:errorMetric='altMAPE':" \
          + "steps=\\[1, 5\\]:window=1000:field=consumption")
  self.assertIn('alpha', perms.permutations['modelParams']['clParams'])

  # Should permute over the _classifier_input encoder params
  self.assertIn('_classifierInput',
                perms.permutations['modelParams']['sensorParams']['encoders'])

  # Should set inputPredictedField to "auto" (the default)
  self.assertEqual(perms.inputPredictedField, "auto")

  # Should have TP parameters being permuted
  self.assertIn('activationThreshold',
                perms.permutations['modelParams']['tpParams'])
  self.assertIn('minThreshold', perms.permutations['modelParams']['tpParams'])

  # Make sure the right metrics were put in
  metrics = base.control['metrics']
  metricTuples = [(metric.metric, metric.inferenceElement, metric.params) \
                 for metric in metrics]

  self.assertIn(('multiStep',
                 'multiStepBestPredictions',
                 {'window': 1000, 'steps': [1, 5], 'errorMetric': 'aae'}),
                metricTuples)

  # Test running it
self.runBaseDescriptionAndPermutations(expDesc, hsVersion='v2')

tests/integration/py2/nupic/opf/expgenerator_test.py:1090:


self = expDesc = {'environment': 'grok', 'includedFields': [{'fieldName': 'timestamp', 'fieldType': 'datetime'}, {'fieldName': 'consump...t'}], 'inferenceArgs': {'predictedField': 'consumption', 'predictionSteps': [1, 5]}, 'inferenceType': 'MultiStep', ...} hsVersion = 'v2', maxModels = 2

def runBaseDescriptionAndPermutations(self, expDesc, hsVersion, maxModels=2):
  """ This does the following:

    1.) Calls ExpGenerator to generate a base description file and permutations
    file from expDescription.

    2.) Verifies that description.py and permutations.py are valid python
    modules that can be loaded

    3.) Runs the base description.py as an experiment using OPF RunExperiment.

    4.) Runs a Hypersearch using the generated permutations.py by passing it
    to HypersearchWorker.

    Parameters:
    -------------------------------------------------------------------
    expDesc:       JSON format experiment description
    hsVersion:     which version of hypersearch to use ('v2'; 'v1' was dropped)
    retval:        list of model results
    """

  # --------------------------------------------------------------------
  # Generate the description.py and permutations.py. These get generated
  # in the g_myEnv.testOutDir directory.
  self.getModules(expDesc, hsVersion=hsVersion)
  permutationsPyPath = os.path.join(g_myEnv.testOutDir, "permutations.py")

  # ----------------------------------------------------------------
  # Try running the base experiment
  args = [g_myEnv.testOutDir]
  from nupic.frameworks.opf.experiment_runner import runExperiment
  LOGGER.info("")
  LOGGER.info("============================================================")
  LOGGER.info("RUNNING EXPERIMENT")
  LOGGER.info("============================================================")
  runExperiment(args)

  # ----------------------------------------------------------------
  # Try running the generated permutations
  jobParams = {'persistentJobGUID' : generatePersistentJobGUID(),
               'permutationsPyFilename': permutationsPyPath,
               'hsVersion': hsVersion,
               }
  if maxModels is not None:
    jobParams['maxModels'] = maxModels
  args = ['ignoreThis', '--params=%s' % (json.dumps(jobParams))]
  self.resetExtraLogItems()
  self.addExtraLogItem({'params':jobParams})

  LOGGER.info("")
  LOGGER.info("============================================================")
  LOGGER.info("RUNNING PERMUTATIONS")
  LOGGER.info("============================================================")
jobID = HypersearchWorker.main(args)

tests/integration/py2/nupic/opf/expgenerator_test.py:257:


argv = ['ignoreThis', '--params={"hsVersion": "v2", "maxModels": 2, "persistentJobGUID": "JOB_UUID1-4306d230-cdfb-11e3-9631-5...", "permutationsPyFilename": "/home/dan/nupic/nupic/tests/integration/py2/nupic/opf/expGeneratorOut/permutations.py"}']

def main(argv):
  """
  The main function of the HypersearchWorker script. This parses the command
  line arguments, instantiates a HypersearchWorker instance, and then
  runs it.

  Parameters:
  ----------------------------------------------------------------------
  retval:     jobID of the job we ran. This is used by unit test code
                when calling this working using the --params command
                line option (which tells this worker to insert the job
                itself).
  """

  parser = OptionParser(helpString)

  parser.add_option("--jobID", action="store", type="int", default=None,
        help="jobID of the job within the dbTable [default: %default].")

  parser.add_option("--modelID", action="store", type="str", default=None,
        help=("Tell worker to re-run this model ID. When specified, jobID "
         "must also be specified [default: %default]."))

  parser.add_option("--workerID", action="store", type="str", default=None,
        help=("workerID of the scheduler's SlotAgent (GenericWorker) that "
          "hosts this SpecializedWorker [default: %default]."))

  parser.add_option("--params", action="store", default=None,
        help="Create and execute a new hypersearch request using this JSON " \
        "format params string. This is helpful for unit tests and debugging. " \
        "When specified jobID must NOT be specified. [default: %default].")

  parser.add_option("--clearModels", action="store_true", default=False,
        help="clear out the models table before starting [default: %default].")

  parser.add_option("--resetJobStatus", action="store_true", default=False,
        help="Reset the job status before starting  [default: %default].")

  parser.add_option("--logLevel", action="store", type="int", default=None,
        help="override default log level. Pass in an integer value that "
        "represents the desired logging level (10=logging.DEBUG, "
        "20=logging.INFO, etc.) [default: %default].")

  # Evaluate command line arguments
  (options, args) = parser.parse_args(argv[1:])
  if len(args) != 0:
    raise RuntimeError("Expected no command line arguments, but got: %s" % \
                        (args))

  if (options.jobID and options.params):
    raise RuntimeError("--jobID and --params can not be used at the same time")

  if (options.jobID is None and options.params is None):
    raise RuntimeError("Either --jobID or --params must be specified.")

  initLogging(verbose=True)

  # Instantiate the HypersearchWorker and run it
  hst = HypersearchWorker(options, argv[1:])

  # Normal use. This is one of among a number of workers. If we encounter
  #  an exception at the outer loop here, we fail the entire job.
  if options.params is None:
    try:
      jobID = hst.run()

    except Exception, e:
      jobID = options.jobID
      msg = StringIO.StringIO()
      print >>msg, "%s: Exception occurred in Hypersearch Worker: %r" % \
         (ErrorCodes.hypersearchLogicErr, e)
      traceback.print_exc(None, msg)

      completionReason = ClientJobsDAO.CMPL_REASON_ERROR
      completionMsg = msg.getvalue()
      hst.logger.error(completionMsg)

      # If no other worker already marked the job as failed, do so now.
      jobsDAO = ClientJobsDAO.get()
      workerCmpReason = jobsDAO.jobGetFields(options.jobID,
          ['workerCompletionReason'])[0]
      if workerCmpReason == ClientJobsDAO.CMPL_REASON_SUCCESS:
        jobsDAO.jobSetFields(options.jobID, fields=dict(
            cancel=True,
            workerCompletionReason = ClientJobsDAO.CMPL_REASON_ERROR,
            workerCompletionMsg = completionMsg),
            useConnectionID=False,
            ignoreUnchanged=True)

  # Run just 1 worker for the entire job. Used for unit tests that run in
  # 1 process
  else:
    jobID = None
    completionReason = ClientJobsDAO.CMPL_REASON_SUCCESS
    completionMsg = "Success"

    try:
    jobID = hst.run()

../build/release/lib/python2.7/site-packages/nupic/swarming/HypersearchWorker.py:592:


self = <nupic.swarming.HypersearchWorker.HypersearchWorker object at 0x35eac50>

def run(self):
  """ Run this worker.

    Parameters:
    ----------------------------------------------------------------------
    retval:     jobID of the job we ran. This is used by unit test code
                  when calling this working using the --params command
                  line option (which tells this worker to insert the job
                  itself).
    """
  # Easier access to options
  options = self._options

  # ---------------------------------------------------------------------
  # Connect to the jobs database
  self.logger.info("Connecting to the jobs database")
cjDAO = ClientJobsDAO.get()

../build/release/lib/python2.7/site-packages/nupic/swarming/HypersearchWorker.py:263:


args = (), kwargs = {}, logger = <logging.Logger object at 0x6c87850>

@functools.wraps(func)
def exceptionLoggingWrap(*args, **kwargs):
  try:
  return func(*args, **kwargs)

../build/release/lib/python2.7/site-packages/nupic/support/decorators.py:59:


@staticmethod
@logExceptions(_getLogger)
def get():
  """ Get the instance of the ClientJobsDAO created for this process (or
    perhaps at some point in the future, for this thread).

    Parameters:
    ----------------------------------------------------------------
    retval:  instance of ClientJobsDAO
    """

  # Instantiate if needed
  if ClientJobsDAO._instance is None:
    cjDAO = ClientJobsDAO()
  cjDAO.connect()

../build/release/lib/python2.7/site-packages/nupic/database/ClientJobsDAO.py:567:


args = (<nupic.database.ClientJobsDAO.ClientJobsDAO object at 0x672ce10>,) kwargs = {}, logger = <logging.Logger object at 0x6c87850>

@functools.wraps(func)
def exceptionLoggingWrap(*args, **kwargs):
  try:
  return func(*args, **kwargs)

../build/release/lib/python2.7/site-packages/nupic/support/decorators.py:59:


args = (<nupic.database.ClientJobsDAO.ClientJobsDAO object at 0x672ce10>,) kwargs = {}, numAttempts = 38, delaySec = 10, startTime = 1398596479.634295 e = OperationalError(2003, "Can't connect to MySQL server on 'localhost' (111)") now = 1398596782.714833

@functools.wraps(func)
def retryWrap(*args, **kwargs):
  numAttempts = 0
  delaySec = initialRetryDelaySec
  startTime = time.time()

  # Make sure it gets called at least once
  while True:
    numAttempts += 1
    try:
    result = func(*args, **kwargs)

../build/release/lib/python2.7/site-packages/nupic/support/decorators.py:214:


self = <nupic.database.ClientJobsDAO.ClientJobsDAO object at 0x672ce10> deleteOldVersions = False, recreate = False

@logExceptions(_getLogger)
@g_retrySQL
def connect(self, deleteOldVersions=False, recreate=False):
  """ Locate the current version of the jobs DB or create a new one, and
    optionally delete old versions laying around. If desired, this method
    can be called at any time to re-create the tables from scratch, delete
    old versions of the database, etc.

    Parameters:
    ----------------------------------------------------------------
    deleteOldVersions:   if true, delete any old versions of the DB left
                          on the server
    recreate:            if true, recreate the database from scratch even
                          if it already exists.
    """

  # Initialize tables, if needed
with ConnectionFactory.get() as conn:
    # Initialize tables
    self._initTables(cursor=conn.cursor, deleteOldVersions=deleteOldVersions,
                     recreate=recreate)

../build/release/lib/python2.7/site-packages/nupic/database/ClientJobsDAO.py:656:


cls = <class 'nupic.database.Connection.ConnectionFactory'>

@classmethod
def get(cls):
  """ Acquire a ConnectionWrapper instance that represents a connection
    to the SQL server per nupic.cluster.database.* configuration settings.

    NOTE: caller is responsible for calling the ConnectionWrapper instance's
    release() method after using the connection in order to release resources.
    Better yet, use the returned ConnectionWrapper instance in a Context Manager
    statement for automatic invocation of release():
    Example:
        # If using Jython 2.5.x, first import with_statement at the very top of
        your script (don't need this import for Jython/Python 2.6.x and later):
        from __future__ import with_statement
        # Then:
        from nupic.database.Connection import ConnectionFactory
        # Then use it like this
        with ConnectionFactory.get() as conn:
          conn.cursor.execute("SELECT ...")
          conn.cursor.fetchall()
          conn.cursor.execute("INSERT ...")

    WARNING: DO NOT close the underlying connection or cursor as it may be
    shared by other modules in your process.  ConnectionWrapper's release()
    method will do the right thing.

    Parameters:
    ----------------------------------------------------------------
    retval:       A ConnectionWrapper instance. NOTE: Caller is responsible
                    for releasing resources as described above.
    """
  if cls._connectionPolicy is None:
    logger = _getLogger(cls)
    logger.info("Creating db connection policy via provider %r",
                cls._connectionPolicyInstanceProvider)
    cls._connectionPolicy = cls._connectionPolicyInstanceProvider()

    logger.debug("Created connection policy: %r", cls._connectionPolicy)
return cls._connectionPolicy.acquireConnection()

../build/release/lib/python2.7/site-packages/nupic/database/Connection.py:172:


self = <nupic.database.Connection.PooledConnectionPolicy object at 0x7cff290>

def acquireConnection(self):
  """ Get a connection from the pool.

    Parameters:
    ----------------------------------------------------------------
    retval:       A ConnectionWrapper instance. NOTE: Caller
                    is responsible for calling the  ConnectionWrapper
                    instance's release() method or use it in a context manager
                    expression (with ... as:) to release resources.
    """
  self._logger.debug("Acquiring connection")
dbConn = self._pool.connection(shareable=False)

../build/release/lib/python2.7/site-packages/nupic/database/Connection.py:558:


self = <DBUtils.PooledDB.PooledDB instance at 0x3b06a28>, shareable = False

def connection(self, shareable=True):
    """Get a steady, cached DB-API 2 connection from the pool.

        If shareable is set and the underlying DB-API 2 allows it,
        then the connection may be shared with other threads.

        """
    if shareable and self._maxshared:
        self._condition.acquire()
        try:
            while (not self._shared_cache and self._maxconnections
                    and self._connections >= self._maxconnections):
                self._condition.wait()
            if len(self._shared_cache) < self._maxshared:
                # shared cache is not full, get a dedicated connection
                try: # first try to get it from the idle cache
                    con = self._idle_cache.pop(0)
                except IndexError: # else get a fresh connection
                    con = self.steady_connection()
                else:
                    con._ping_check() # check this connection
                con = SharedDBConnection(con)
                self._connections += 1
            else: # shared cache full or no more connections allowed
                self._shared_cache.sort() # least shared connection first
                con = self._shared_cache.pop(0) # get it
                while con.con._transaction:
                    # do not share connections which are in a transaction
                    self._shared_cache.insert(0, con)
                    self._condition.wait()
                    self._shared_cache.sort()
                    con = self._shared_cache.pop(0)
                con.con._ping_check() # check the underlying connection
                con.share() # increase share of this connection
            # put the connection (back) into the shared cache
            self._shared_cache.append(con)
            self._condition.notify()
        finally:
            self._condition.release()
        con = PooledSharedDBConnection(self, con)
    else: # try to get a dedicated connection
        self._condition.acquire()
        try:
            while (self._maxconnections
                    and self._connections >= self._maxconnections):
                self._condition.wait()
            # connection limit not reached, get a dedicated connection
            try: # first try to get it from the idle cache
                con = self._idle_cache.pop(0)
            except IndexError: # else get a fresh connection
              con = self.steady_connection()

../../.local/lib/python2.7/site-packages/DBUtils/PooledDB.py:331:


self = <DBUtils.PooledDB.PooledDB instance at 0x3b06a28>

def steady_connection(self):
    """Get a steady, unpooled DB-API 2 connection."""
    return connect(self._creator,
        self._maxusage, self._setsession,
        self._failures, self._ping, True,
      *self._args, **self._kwargs)

../../.local/lib/python2.7/site-packages/DBUtils/PooledDB.py:279:


creator = <module 'pymysql' from '/home/dan/.local/lib/python2.7/site-packages/pymysql/init.pyc'> maxusage = None, setsession = ['SET AUTOCOMMIT = 1'], failures = None, ping = 1 closeable = True, args = () kwargs = {'charset': 'utf8', 'host': 'localhost', 'passwd': '', 'port': 3306, ...}

def connect(creator, maxusage=None, setsession=None,
        failures=None, ping=1, closeable=True, *args, **kwargs):
    """A tough version of the connection constructor of a DB-API 2 module.

    creator: either an arbitrary function returning new DB-API 2 compliant
        connection objects or a DB-API 2 compliant database module
    maxusage: maximum usage limit for the underlying DB-API 2 connection
        (number of database operations, 0 or None means unlimited usage)
        callproc(), execute() and executemany() count as one operation.
        When the limit is reached, the connection is automatically reset.
    setsession: an optional list of SQL commands that may serve to prepare
        the session, e.g. ["set datestyle to german", "set time zone mez"]
    failures: an optional exception class or a tuple of exception classes
        for which the failover mechanism shall be applied, if the default
        (OperationalError, InternalError) is not adequate
    ping: determines when the connection should be checked with ping()
        (0 = None = never, 1 = default = when _ping_check() is called,
        2 = whenever a cursor is created, 4 = when a query is executed,
        7 = always, and all other bit combinations of these values)
    closeable: if this is set to false, then closing the connection will
        be silently ignored, but by default the connection can be closed
    args, kwargs: the parameters that shall be passed to the creator
        function or the connection constructor of the DB-API 2 module

    """
    return SteadyDBConnection(creator, maxusage, setsession,
      failures, ping, closeable, *args, **kwargs)

../../.local/lib/python2.7/site-packages/DBUtils/SteadyDB.py:134:


self = <DBUtils.SteadyDB.SteadyDBConnection instance at 0x7cde908> creator = <module 'pymysql' from '/home/dan/.local/lib/python2.7/site-packages/pymysql/init.pyc'> maxusage = 0, setsession = ['SET AUTOCOMMIT = 1'], failures = None, ping = 1 closeable = True, args = () kwargs = {'charset': 'utf8', 'host': 'localhost', 'passwd': '', 'port': 3306, ...}

def __init__(self, creator, maxusage=None, setsession=None,
        failures=None, ping=1, closeable=True, *args, **kwargs):
    """Create a "tough" DB-API 2 connection."""
    # basic initialization to make finalizer work
    self._con = None
    self._closed = True
    # proper initialization of the connection
    try:
        self._creator = creator.connect
        self._dbapi = creator
    except AttributeError:
        # try finding the DB-API 2 module via the connection creator
        self._creator = creator
        try:
            self._dbapi = creator.dbapi
        except AttributeError:
            try:
                self._dbapi = sys.modules[creator.__module__]
                if self._dbapi.connect != creator:
                    raise AttributeError
            except (AttributeError, KeyError):
                self._dbapi = None
    try:
        self._threadsafety = creator.threadsafety
    except AttributeError:
        try:
            self._threadsafety = self._dbapi.threadsafety
        except AttributeError:
            self._threadsafety = None
    if not callable(self._creator):
        raise TypeError("%r is not a connection provider." % (creator,))
    if maxusage is None:
        maxusage = 0
    if not isinstance(maxusage, (int, long)):
        raise TypeError("'maxusage' must be an integer value.")
    self._maxusage = maxusage
    self._setsession_sql = setsession
    if failures is not None and not isinstance(
            failures, tuple) and not issubclass(failures, Exception):
        raise TypeError("'failures' must be a tuple of exceptions.")
    self._failures = failures
    self._ping = isinstance(ping, int) and ping or 0
    self._closeable = closeable
    self._args, self._kwargs = args, kwargs
  self._store(self._create())

../../.local/lib/python2.7/site-packages/DBUtils/SteadyDB.py:186:


self = <DBUtils.SteadyDB.SteadyDBConnection instance at 0x7cde908>

def _create(self):
    """Create a new connection using the creator function."""
  con = self._creator(*self._args, **self._kwargs)

../../.local/lib/python2.7/site-packages/DBUtils/SteadyDB.py:190:


args = () kwargs = {'charset': 'utf8', 'host': 'localhost', 'passwd': '', 'port': 3306, ...} Connection = <class 'pymysql.connections.Connection'>

def Connect(*args, **kwargs):
    """
    Connect to the database; see connections.Connection.__init__() for
    more information.
    """
    from connections import Connection
  return Connection(*args, **kwargs)

../../.local/lib/python2.7/site-packages/pymysql/init.py:93:


self = <pymysql.connections.Connection object at 0x75c6fd0>, host = 'localhost' user = 'root', passwd = '', db = None, port = 3306, unix_socket = None charset = 'utf8', sql_mode = None, read_default_file = None conv = {0: <function convert_decimal at 0x2bf2668>, 1: <function convert_int at 0x2bf2500>, 2: <function convert_int at 0x2bf2500>, 3: <function convert_long at 0x2bf2578>, ...} use_unicode = True, client_flag = 107013 cursorclass = <class 'pymysql.cursors.Cursor'>, init_command = None connect_timeout = None, ssl = None, read_default_group = None, compress = None named_pipe = None

def __init__(self, host="localhost", user=None, passwd="",
             db=None, port=3306, unix_socket=None,
             charset='', sql_mode=None,
             read_default_file=None, conv=decoders, use_unicode=None,
             client_flag=0, cursorclass=Cursor, init_command=None,
             connect_timeout=None, ssl=None, read_default_group=None,
             compress=None, named_pipe=None):
    """
        Establish a connection to the MySQL database. Accepts several
        arguments:

        host: Host where the database server is located
        user: Username to log in as
        passwd: Password to use.
        db: Database to use, None to not use a particular one.
        port: MySQL port to use, default is usually OK.
        unix_socket: Optionally, you can use a unix socket rather than TCP/IP.
        charset: Charset you want to use.
        sql_mode: Default SQL_MODE to use.
        read_default_file: Specifies  my.cnf file to read these parameters from under the [client] section.
        conv: Decoders dictionary to use instead of the default one. This is used to provide custom marshalling of types. See converters.
        use_unicode: Whether or not to default to unicode strings. This option defaults to true for Py3k.
        client_flag: Custom flags to send to MySQL. Find potential values in constants.CLIENT.
        cursorclass: Custom cursor class to use.
        init_command: Initial SQL statement to run when connection is established.
        connect_timeout: Timeout before throwing an exception when connecting.
        ssl: A dict of arguments similar to mysql_ssl_set()'s parameters. For now the capath and cipher arguments are not supported.
        read_default_group: Group to read from in the configuration file.
        compress; Not supported
        named_pipe: Not supported
        """

    if use_unicode is None and sys.version_info[0] > 2:
        use_unicode = True

    if compress or named_pipe:
        raise NotImplementedError, "compress and named_pipe arguments are not supported"

    if ssl and (ssl.has_key('capath') or ssl.has_key('cipher')):
        raise NotImplementedError, 'ssl options capath and cipher are not supported'

    self.ssl = False
    if ssl:
        if not SSL_ENABLED:
            raise NotImplementedError, "ssl module not found"
        self.ssl = True
        client_flag |= SSL
        for k in ('key', 'cert', 'ca'):
            v = None
            if ssl.has_key(k):
                v = ssl[k]
            setattr(self, k, v)

    if read_default_group and not read_default_file:
        if sys.platform.startswith("win"):
            read_default_file = "c:\\my.ini"
        else:
            read_default_file = "/etc/my.cnf"

    if read_default_file:
        if not read_default_group:
            read_default_group = "client"

        cfg = ConfigParser.RawConfigParser()
        cfg.read(os.path.expanduser(read_default_file))

        def _config(key, default):
            try:
                return cfg.get(read_default_group,key)
            except:
                return default

        user = _config("user",user)
        passwd = _config("password",passwd)
        host = _config("host", host)
        db = _config("db",db)
        unix_socket = _config("socket",unix_socket)
        port = _config("port", port)
        charset = _config("default-character-set", charset)

    self.host = host
    self.port = port
    self.user = user or DEFAULT_USER
    self.password = passwd
    self.db = db
    self.unix_socket = unix_socket
    if charset:
        self.charset = charset
        self.use_unicode = True
    else:
        self.charset = DEFAULT_CHARSET
        self.use_unicode = False

    if use_unicode is not None:
        self.use_unicode = use_unicode

    client_flag |= CAPABILITIES
    client_flag |= MULTI_STATEMENTS
    if self.db:
        client_flag |= CONNECT_WITH_DB
    self.client_flag = client_flag

    self.cursorclass = cursorclass
    self.connect_timeout = connect_timeout
  self._connect()

../../.local/lib/python2.7/site-packages/pymysql/connections.py:510:


self = <pymysql.connections.Connection object at 0x75c6fd0>

def _connect(self):
    try:
        if self.unix_socket and (self.host == 'localhost' or self.host == '127.0.0.1'):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            t = sock.gettimeout()
            sock.settimeout(self.connect_timeout)
            sock.connect(self.unix_socket)
            sock.settimeout(t)
            self.host_info = "Localhost via UNIX socket"
            if DEBUG: print 'connected using unix_socket'
        else:
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            t = sock.gettimeout()
            sock.settimeout(self.connect_timeout)
            sock.connect((self.host, self.port))
            sock.settimeout(t)
            self.host_info = "socket %s:%d" % (self.host, self.port)
            if DEBUG: print 'connected using socket'
        self.socket = sock
        self.rfile = self.socket.makefile("rb")
        self.wfile = self.socket.makefile("wb")
        self._get_server_information()
        self._request_authentication()
    except socket.error, e:
      raise OperationalError(2003, "Can't connect to MySQL server on %r (%s)" % (self.host, e.args[0]))

E OperationalError: (2003, "Can't connect to MySQL server on 'localhost' (111)")

../../.local/lib/python2.7/site-packages/pymysql/connections.py:679: OperationalError ------------------------------- Captured stdout -------------------------------- Generating experiment files in directory: /home/dan/nupic/nupic/tests/integration/py2/nupic/opf/expGeneratorOut... Writing 315 lines... Writing 113 lines... done. None base.config['modelParams']: {'anomalyParams': {u'anomalyCacheRecords': None, u'autoDetectThreshold': None, u'autoDetectWaitRecords': None}, 'clParams': {'alpha': 0.001, 'clVerbosity': 0, 'regionName': 'CLAClassifierRegion', 'steps': '1,5'}, 'inferenceType': 'TemporalMultiStep', 'sensorParams': {'encoders': {'_classifierInput': {'classifierOnly': True, 'clipInput': True, 'fieldname': u'consumption', 'n': 100, 'name': '_classifierInput', 'type': 'AdaptiveScalarEncoder', 'w': 21}, u'consumption': {'clipInput': True, 'fieldname': u'consumption', 'n': 100, 'name': u'consumption', 'type': 'AdaptiveScalarEncoder', 'w': 21}, u'timestamp_dayOfWeek': {'dayOfWeek': (21, 1), 'fieldname': u'timestamp', 'name': u'timestamp_dayOfWeek', 'type': 'DateEncoder'}, u'timestamp_timeOfDay': {'fieldname': u'timestamp', 'name': u'timestamp_timeOfDay', 'timeOfDay': (21, 1), 'type': 'DateEncoder'}, u'timestamp_weekend': {'fieldname': u'timestamp', 'name': u'timestamp_weekend', 'type': 'DateEncoder', 'weekend': 21}}, 'sensorAutoReset': None, 'verbosity': 0}, 'spEnable': True, 'spParams': {'coincInputPoolPct': 0.8, 'columnCount': 2048, 'globalInhibition': 1, 'inputWidth': 0, 'maxBoost': 2.0, 'numActivePerInhArea': 40, 'seed': 1956, 'spVerbosity': 0, 'spatialImp': 'cpp', 'synPermActiveInc': 0.05, 'synPermConnected': 0.1, 'synPermInactiveDec': 0.0005}, 'tpEnable': True, 'tpParams': {'activationThreshold': 16, 'cellsPerColumn': 32, 'columnCount': 2048, 'globalDecay': 0.0, 'initialPerm': 0.21, 'inputWidth': 2048, 'maxAge': 0, 'maxSegmentsPerCell': 128, 'maxSynapsesPerSegment': 32, 'minThreshold': 12, 'newSynapseCount': 20, 'outputType': 'normal', 'pamLength': 1, 'permanenceDec': 0.1, 'permanenceInc': 0.1, 'seed': 1960, 'temporalImp': 'cpp', 'verbosity': 0}, 'trainSPNetOnlyIfRequested': False} perms.permutations {'aggregationInfo': {'days': 0, 'fields': [(u'timestamp', 'first'), (u'gym', 'first'), (u'consumption', 'sum')], 'hours': 1, 'microseconds': 0, 'milliseconds': 0, 'minutes': 0, 'months': 0, 'seconds': 0, 'weeks': 0, 'years': 0}, 'modelParams': {'clParams': {'alpha': PermuteFloat(min=0.000100, max=0.100000, stepSize=None) [position=0.050050(0.050050), velocity=0.019980, _bestPosition=0.05005, _bestResult=None]}, 'inferenceType': PermuteChoices(choices=['NontemporalMultiStep', 'TemporalMultiStep']) [position=NontemporalMultiStep], 'sensorParams': {'encoders': {'_classifierInput': {'classifierOnly': True, 'clipInput': True, 'fieldname': 'consumption', 'n': PermuteInt(min=28, max=521, stepSize=1) [position=275(274.500000), velocity=98.600000, _bestPosition=275, _bestResult=None], 'type': 'AdaptiveScalarEncoder', 'w': 21}, u'consumption': PermuteEncoder(fieldName=consumption, encoderClass=AdaptiveScalarEncoder, name=consumption, clipInput=True, w=21, n=PermuteInt(min=22, max=521, stepSize=1) [position=272(271.500000), velocity=99.800000, _bestPosition=272, _bestResult=None], ), u'timestamp_dayOfWeek': PermuteEncoder(fieldName=timestamp, encoderClass=DateEncoder.dayOfWeek, name=timestamp, radius=PermuteFloat(min=1.000000, max=6.000000, stepSize=None) [position=3.500000(3.500000), velocity=1.000000, _bestPosition=3.5, _bestResult=None], w=21, ), u'timestamp_timeOfDay': 
PermuteEncoder(fieldName=timestamp, encoderClass=DateEncoder.timeOfDay, name=timestamp, radius=PermuteFloat(min=0.500000, max=12.000000, stepSize=None) [position=6.250000(6.250000), velocity=2.300000, _bestPosition=6.25, _bestResult=None], w=21, ), u'timestamp_weekend': PermuteEncoder(fieldName=timestamp, encoderClass=DateEncoder.weekend, name=timestamp, radius=PermuteChoices(choices=[1]) [position=1], w=21, )}}, 'spParams': {'synPermInactiveDec': PermuteFloat(min=0.000300, max=0.100000, stepSize=None) [position=0.050150(0.050150), velocity=0.019940, _bestPosition=0.05015, _bestResult=None]}, 'tpParams': {'activationThreshold': PermuteInt(min=12, max=16, stepSize=1) [position=14(14.000000), velocity=0.800000, _bestPosition=14, _bestResult=None], 'minThreshold': PermuteInt(min=9, max=12, stepSize=1) [position=11(10.500000), velocity=0.600000, _bestPosition=11, _bestResult=None], 'pamLength': PermuteInt(min=1, max=5, stepSize=1) [position=3(3.000000), velocity=0.800000, _bestPosition=3, _bestResult=None]}}} perms.minimize "multiStepBestPredictions:multiStep:errorMetric='altMAPE':steps=[1, 5]:window=1000:field=consumption" expDesc {'environment': 'grok', 'includedFields': [{'fieldName': 'timestamp', 'fieldType': 'datetime'}, {'fieldName': 'consumption', 'fieldType': 'float'}], 'inferenceArgs': {'predictedField': 'consumption', 'predictionSteps': [1, 5]}, 'inferenceType': 'MultiStep', 'iterationCount': -1, 'runBaselines': True, 'streamDef': {'aggregation': {'days': 0, 'fields': [('consumption', 'sum'), ('gym', 'first'), ('timestamp', 'first')], 'hours': 1, 'microseconds': 0, 'milliseconds': 0, 'minutes': 0, 'months': 0, 'seconds': 0, 'weeks': 0, 'years': 0}, 'info': 'test_NoProviders', 'streams': [{'columns': ['*'], 'info': 'hotGym.csv', 'last_record': 20, 'source': 'file:///home/dan/nupic/build/release/share/prediction/data/extra/hotgym/hotgym.csv'}], 'version': 1}} Generating experiment files in directory: /home/dan/nupic/nupic/tests/integration/py2/nupic/opf/expGeneratorOut... Writing 315 lines... Writing 113 lines... done. None DEBUG: Result of PyRegion::executeCommand : 'None' OPENING OUTPUT FOR PREDICTION WRITER AT: '/home/dan/nupic/nupic/tests/integration/py2/nupic/opf/expGeneratorOut/inference/DefaultTask.TemporalMultiStep.predictionLog.csv' Prediction field-meta: [('reset', 'int', 'R'), ('address', 'string', ''), ('consumption', 'string', ''), ('gym', 'string', ''), ('timestamp', 'string', ''), ('multiStepPredictions.actual', 'string', ''), ('multiStepPredictions.1', 'string', ''), ('multiStepPredictions.5', 'string', ''), ('multiStepBestPredictions.actual', 'string', ''), ('multiStepBestPredictions.1', 'string', ''), ('multiStepBestPredictions.5', 'string', ''), (u"prediction:moving_mean:errorMetric='altMAPE':mean_window=200:steps=5:window=1000:field=consumption", 'float', ''), (u"prediction:moving_mean:errorMetric='altMAPE':mean_window=200:steps=1:window=1000:field=consumption", 'float', ''), (u"multiStepBestPredictions:multiStep:errorMetric='altMAPE':steps=[1, 5]:window=1000:field=consumption", 'float', ''), (u"prediction:trivial:errorMetric='altMAPE':steps=5:window=1000:field=consumption", 'float', ''), (u"prediction:trivial:errorMetric='altMAPE':steps=1:window=1000:field=consumption", 'float', ''), (u"multiStepBest Predictions:multiStep:errorMetric='aae':steps=[1, 5]:window=1000:field=consumption", 'float', '')]

{ "prediction:moving_mean:errorMetric='altMAPE':mean_window=200:steps=5:window=1000:field=consumption": null, "prediction:moving_mean:errorMetric='altMAPE':mean_window=200:steps=1:window=1000:field=consumption": 116.61184210526315, "multiStepBestPredictions:multiStep:errorMetric='altMAPE':steps=[1, 5]:window=1000:field=consumption": 77.30592105263158, "prediction:trivial:errorMetric='altMAPE':steps=5:window=1000:field=consumption": null, "prediction:trivial:errorMetric='altMAPE':steps=1:window=1000:field=consumption": 54.60526315789475, "multiStepBestPredictions:multiStep:errorMetric='aae':steps=[1, 5]:window=1000:field=consumption": 5.875249999999999 }

DEBUG: Result of PyRegion::executeCommand : 'None' 2014-04-27 15:01:19,633 - com.numenta.nupic.swarming.HypersearchWorker(10574) - INFO - Launched with command line arguments: ['--params={"hsVersion": "v2", "maxModels": 2, "persistentJobGUID": "JOB_UUID1-4306d230-cdfb-11e3-9631-525400000005", "permutationsPyFilename": "/home/dan/nupic/nupic/tests/integration/py2/nupic/opf/expGeneratorOut/permutations.py"}'] 2014-04-27 15:01:19,634 - com.numenta.nupic.swarming.HypersearchWorker(10574) - INFO - Connecting to the jobs database 2014-04-27 15:01:19,634 - com.numenta.nupic.database.Connection.ConnectionFactory(10574) - INFO - Creating db connection policy via provider <bound method type._createDefaultPolicy of <class 'nupic.database.Connection.ConnectionFactory'>> 2014-04-27 15:01:19,639 - com.numenta.nupic.database.Connection.PooledConnectionPolicy(10574) - INFO - Created PooledConnectionPolicy 2014-04-27 15:01:19,648 - com.numenta.nupic.database.ClientJobsDAO.ClientJobsDAO(10574) - WARNING - [] First failure in <function connect at 0x32efd70>; initial retry in 0.1 sec.; timeoutSec=300. Caller stack: File "/home/dan/nupic/build/release/bin/run_tests.py", line 256, in result = main(parser, sys.argv[1:]) File "/home/dan/nupic/build/release/bin/run_tests.py", line 246, in main exitStatus = pytest.main(args + list(tests)) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/config.py", line 19, in main exitstatus = config.hook.pytest_cmdline_main(config=config) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 368, in call return self._docall(methods, kwargs) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 379, in _docall res = mc.execute() File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 297, in execute res = method(kwargs) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/main.py", line 111, in pytest_cmdline_main return wrap_session(config, _main) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/main.py", line 81, in wrap_session doit(config, session) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/main.py", line 117, in _main config.hook.pytest_runtestloop(session=session) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 368, in call return self._docall(methods, kwargs) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 379, in _docall res = mc.execute() File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 297, in execute res = method(kwargs) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/main.py", line 137, in pytest_runtestloop item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 368, in call return self._docall(methods, kwargs) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 379, in _docall res = mc.execute() File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 297, in execute res = method(*kwargs) File "/home/dan/.local/lib/python2.7/site-packages/xdist/plugin.py", line 85, in pytest_runtest_protocol reports = forked_run_report(item) File "/home/dan/.local/lib/python2.7/site-packages/xdist/plugin.py", line 105, in forked_run_report ff = py.process.ForkedFunc(runforked) File "/home/dan/.local/lib/python2.7/site-packages/py/_process/forkedfunc.py", line 34, in init self._child(nice_level) File "/home/dan/.local/lib/python2.7/site-packages/py/_process/forkedfunc.py", 
line 55, in _child retval = self.fun(_self.args, _self.kwargs) File "/home/dan/.local/lib/python2.7/site-packages/xdist/plugin.py", line 100, in runforked reports = runtestprotocol(item, log=False) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/runner.py", line 72, in runtestprotocol reports.append(call_and_report(item, "call", log)) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/runner.py", line 106, in call_and_report call = call_runtest_hook(item, when, _kwds) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/runner.py", line 124, in call_runtest_hook return CallInfo(lambda: ihook(item=item, _kwds), when=when) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/runner.py", line 137, in init self.result = func() File "/home/dan/.local/lib/python2.7/site-packages/_pytest/runner.py", line 124, in return CallInfo(lambda: ihook(item=item, kwds), when=when) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/main.py", line 161, in call_matching_hooks return hookmethod.pcall(plugins, kwargs) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 372, in pcall return self._docall(methods, kwargs) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 379, in _docall res = mc.execute() File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 297, in execute res = method(kwargs) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/runner.py", line 86, in pytest_runtest_call item.runtest() File "/home/dan/.local/lib/python2.7/site-packages/_pytest/unittest.py", line 139, in runtest self._testcase(result=self) File "/home/dan/.local/lib/python2.7/site-packages/unittest2/case.py", line 398, in call return self.run(_args, _kwds) File "/home/dan/.local/lib/python2.7/site-packages/unittest2/case.py", line 340, in run testMethod() File "/home/dan/nupic/nupic/tests/integration/py2/nupic/opf/expgenerator_test.py", line 1090, in test_MultiStep self.runBaseDescriptionAndPermutations(expDesc, hsVersion='v2') File "/home/dan/nupic/nupic/tests/integration/py2/nupic/opf/expgenerator_test.py", line 257, in runBaseDescriptionAndPermutations jobID = HypersearchWorker.main(args) File "/home/dan/nupic/build/release/lib/python2.7/site-packages/nupic/swarming/HypersearchWorker.py", line 592, in main jobID = hst.run() File "/home/dan/nupic/build/release/lib/python2.7/site-packages/nupic/swarming/HypersearchWorker.py", line 263, in run cjDAO = ClientJobsDAO.get() File "/home/dan/nupic/build/release/lib/python2.7/site-packages/nupic/support/decorators.py", line 59, in exceptionLoggingWrap return func(_args, _kwargs) File "/home/dan/nupic/build/release/lib/python2.7/site-packages/nupic/database/ClientJobsDAO.py", line 567, in get cjDAO.connect() File "/home/dan/nupic/build/release/lib/python2.7/site-packages/nupic/support/decorators.py", line 59, in exceptionLoggingWrap return func(_args, _kwargs) File "/home/dan/nupic/build/release/lib/python2.7/site-packages/nupic/support/decorators.py", line 241, in retryWrap timeoutSec, ''.join(traceback.format_stack()), exc_info=True) Traceback (most recent call last): File "/home/dan/nupic/build/release/lib/python2.7/site-packages/nupic/support/decorators.py", line 214, in retryWrap result = func(_args, _kwargs) File "/home/dan/nupic/build/release/lib/python2.7/site-packages/nupic/database/ClientJobsDAO.py", line 656, in connect with ConnectionFactory.get() as conn: File "/home/dan/nupic/build/release/lib/python2.7/site-packages/nupic/database/Connection.py", line 172, in get 
return cls._connectionPolicy.acquireConnection() File "/home/dan/nupic/build/release/lib/python2.7/site-packages/nupic/database/Connection.py", line 558, in acquireConnection dbConn = self._pool.connection(shareable=False) File "/home/dan/.local/lib/python2.7/site-packages/DBUtils/PooledDB.py", line 331, in connection con = self.steady_connection() File "/home/dan/.local/lib/python2.7/site-packages/DBUtils/PooledDB.py", line 279, in steady_connection _self._args, _self._kwargs) File "/home/dan/.local/lib/python2.7/site-packages/DBUtils/SteadyDB.py", line 134, in connect failures, ping, closeable, _args, _kwargs) File "/home/dan/.local/lib/python2.7/site-packages/DBUtils/SteadyDB.py", line 186, in init self._store(self._create()) File "/home/dan/.local/lib/python2.7/site-packages/DBUtils/SteadyDB.py", line 190, in _create con = self._creator(_self._args, _self._kwargs) File "/home/dan/.local/lib/python2.7/site-packages/pymysql/init.py", line 93, in Connect return Connection(_args, _kwargs) File "/home/dan/.local/lib/python2.7/site-packages/pymysql/connections.py", line 510, in init self._connect() File "/home/dan/.local/lib/python2.7/site-packages/pymysql/connections.py", line 679, in _connect raise OperationalError(2003, "Can't connect to MySQL server on %r (%s)" % (self.host, e.args[0])) OperationalError: (2003, "Can't connect to MySQL server on 'localhost' (111)") 2014-04-27 15:06:22,716 - com.numenta.nupic.database.ClientJobsDAO.ClientJobsDAO(10574) - ERROR - [] Exhausted retry timeout (300 sec.; 38 attempts) for <function connect at 0x32efd70>. Caller stack: File "/home/dan/nupic/build/release/bin/run_tests.py", line 256, in result = main(parser, sys.argv[1:]) File "/home/dan/nupic/build/release/bin/run_tests.py", line 246, in main exitStatus = pytest.main(args + list(tests)) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/config.py", line 19, in main exitstatus = config.hook.pytest_cmdline_main(config=config) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 368, in call return self._docall(methods, kwargs) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 379, in _docall res = mc.execute() File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 297, in execute res = method(kwargs) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/main.py", line 111, in pytest_cmdline_main return wrap_session(config, _main) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/main.py", line 81, in wrap_session doit(config, session) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/main.py", line 117, in _main config.hook.pytest_runtestloop(session=session) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 368, in call return self._docall(methods, kwargs) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 379, in _docall res = mc.execute() File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 297, in execute res = method(**kwargs) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/main.py", line 137, in pytest_runtestloop item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core.py", line 368, in call return self._docall(methods, kwargs) File "/home/dan/.local/lib/python2.7/site-packages/_pytest/core

jefffohl commented 10 years ago

I am getting this error too. However, I have MySQL up and running with the default setup: root user, no password. It would be great to know if there is some other expected configuration.
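
For what it's worth, the failing code path (per the traceback quoted above) connects with pymysql to localhost:3306 as root with an empty password, so a direct check with exactly those parameters shows whether that combination works outside of NuPIC; this is just a sketch using the values from the log:

# Connect the same way the failing code path does: pymysql, localhost:3306,
# user root, empty password (values taken from the traceback above).
import pymysql

conn = pymysql.connect(host="localhost", port=3306, user="root", passwd="")
cur = conn.cursor()
cur.execute("SHOW DATABASES")
print cur.fetchall()
conn.close()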

jefffohl commented 10 years ago

Additionally, this command runs successfully: python $NUPIC/examples/swarm/test_db.py

rhyolight commented 10 years ago

Another report of this error: http://lists.numenta.org/pipermail/nupic_lists.numenta.org/2014-May/003831.html

sbryukov commented 10 years ago

The error occurs at this exact place:

nupic/build/release/lib/python2.7/site-packages/nupic/database/ClientJobsDAO.py
line 1163, in _insertOrGetUniqueJobNoRetries
     numRowsInserted = conn.cursor.execute(query, sqlParams)

DEBUG session:

query:  
INSERT IGNORE INTO client_jobs_v29_surge.jobs (status, client, client_info, client_key,cmd_line, params, job_hash, _eng_last_update_time, minimum_workers, maximum_workers, priority, _eng_job_type)  VALUES (%s, %s, %s, %s, %s, %s, %s,          UTC_TIMESTAMP(), %s, %s, %s, %s) 

sqlParams:

('notStarted', 'GRP', '', '', '$HYPERSEARCH', '{"hsVersion": "v2", "maxModels": null, "persistentJobGUID": "49c2d260-d4d9-11e3-a111-8c705a812ef4", "useTerminators": false, "description": {"includedFields": [{"fieldName": "timestamp", "fieldType": "datetime"}, {"fieldName": "consumption", "fieldType": "float"}], "streamDef": {"info": "test", "version": 1, "streams": [{"info": "hotGym.csv", "source": "file://extra/hotgym/hotgym.csv", "columns": ["*"], "last_record": 100}], "aggregation": {"seconds": 0, "fields": [["consumption", "sum"], ["gym", "first"], ["timestamp", "first"]], "months": 0, "days": 0, "years": 0, "hours": 1, "microseconds": 0, "weeks": 0, "minutes": 0, "milliseconds": 0}}, "inferenceType": "MultiStep", "inferenceArgs": {"predictionSteps": [1], "predictedField": "consumption"}, "iterationCount": -1, "swarmSize": "medium"}}', 'I\xc4\x02z\xd4\xd9\x11\xe3\xa1\x11\x8cpZ\x81.\xf4', 1, 4, 0, 'hypersearch')

Is this unreadable value the source of the issue? Where does this config value come from?

'I\xc4\x02z\xd4\xd9\x11\xe3\xa1\x11\x8cpZ\x81.\xf4'  
sbryukov commented 10 years ago

_insertOrGetUniqueJobNoRetries() takes an unreadable parameter: jobHash

rhyolight commented 10 years ago

Thanks for your debugging and fix, @surge- . I'm having it reviewed. :smiley:

vitaly-krugl commented 10 years ago

@yaitskov, could you please add the entire traceback that you get for this crash?

vitaly-krugl commented 10 years ago

@surge-: regarding

_insertOrGetUniqueJobNoRetries() takes unreadable param - jobHash

The parameter is binary, which is allowed because job_hash column is defined as BINARY in ClientJobsDAO.py: 'job_hash BINARY(%d) DEFAULT NULL' % (self.HASH_MAX_LEN)

CC @rhyolight

sbryukov commented 10 years ago

@vitaly-krugl: with the binary jobHash, PyMySQL stops with a UnicodeDecodeError: http://lists.numenta.org/pipermail/nupic_lists.numenta.org/2014-May/003831.html

  File "/usr/local/lib/python2.7/dist-packages/PyMySQL-0.4-py2.7.egg/pymysql/converters.py", line 27, in escape_item
    val = val.decode(charset)
  File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x93 in position 0: invalid start byte

vitaly-krugl commented 10 years ago

@surge-: Thanks for the follow-up. The failure is a bug in the PyMySQL client. The crash appears to be in PyMySQL version 0.4 (per http://lists.numenta.org/pipermail/nupic_lists.numenta.org/2014-May/003831.html: /usr/local/lib/python2.7/dist-packages/PyMySQL-0.4-py2.7.egg/pymysql/converters.py).

PyMySQL is now at version 0.6.2, and there are fixes for several UnicodeDecodeError issues: https://github.com/PyMySQL/PyMySQL/search?q=UnicodeDecodeError&ref=cmdform&type=Issues.

Is the issue reproducible with PyMySQL version 0.6.2 (https://pypi.python.org/pypi?%3Aaction=search&term=PyMySQL&submit=search)? If it is still reproducible, would you mind filing an issue on the PyMySQL GitHub (https://github.com/PyMySQL/PyMySQL/issues)? If it is not reproducible with 0.6.2, then the correct fix will be to upgrade PyMySQL.
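
One quick check worth doing before and after the upgrade (just a suggestion): confirm which PyMySQL copy the interpreter actually imports, since a stale egg on sys.path can shadow a newer install:

# Show where pymysql is imported from and which version it reports.
import pymysql
print pymysql.__file__
print getattr(pymysql, 'VERSION', 'unknown')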

rhyolight commented 10 years ago

@jefffohl @yaitskov @surge- Does Vitaly's comment above help you guys out at all? Can you try updating to PyMySQL 0.6.2?

jefffohl commented 10 years ago

I resolved the problem, though exactly how, I am not sure. I took the nuclear option and uninstalled Python and all modules, then re-installed Python and used pip to install all dependencies. After that, I was able to get a successful build that passed all tests. My problem may have been due to what Vitaly found. Thanks!

sbryukov commented 10 years ago

Hi, I can confirm the issue is gone after manually removing PyMySQL 0.4.

For Ubuntu, the prerequisite below avoids the issue when an older PyMySQL is installed: sudo pip uninstall PyMySQL

I also updated $NUPIC/external/common/requirements.txt to the latest PyMySQL 0.6.2; tests_all and run_swarm pass.

vitaly-krugl commented 10 years ago

@surge- - thank you

rhyolight commented 10 years ago

@surge- said:

I also updated $NUPIC/external/common/requirements.txt for latest PyMySQL 0.6.2

So do we need to update this in the repo?

rhyolight commented 10 years ago

Need to reopen this because it doesn't work on OS X (see https://github.com/numenta/nupic/pull/935#issuecomment-43365444).

https://github.com/numenta/nupic/pull/938 undoes this and we'll merge it to get the build back to green. Then we need to readdress this issue.

rhyolight commented 10 years ago

Fixed by https://github.com/numenta/nupic/issues/941 & https://github.com/numenta/nupic-darwin64/issues/8.