kizniche / Mycodo

An environmental monitoring and regulation system
http://kylegabriel.com/projects/
GNU General Public License v3.0

Refactor to introduce testing #128

Closed by etiology 6 years ago

etiology commented 7 years ago

Currently we have fewer than 10 tests, and they can only check whether the Flask UI is running and whether some pages redirect unknown users to a login page. In order for this project to grow in a healthy way it's going to need more tests and more test coverage so that refactoring can improve the code quality. Right now it takes me about 2-3 hours to restore a Raspberry Pi from a base image, install the Mycodo app, and start clicking around just to see if new code broke anything.

This issue ("Refactor to introduce testing") is a broad one that will be broken up into smaller steps. There are several smaller refactors that need to take place before tests can even be written. Too many places in the code reimplement connections to other places, which means we'd have to patch every single one of them to run just one unit test.

Here is a table describing an issue and a prescribed solution (more will be added later):

Issue: Too many areas of the code rebuild/reimplement their own setup, preventing a single way of configuring all of them. Examples can be found in the DB setup, where SQLAlchemy URIs are rebuilt in multiple modules, which means each module needs to be modified before it can be tested. [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]

Solution: Implement configuration objects and application factories. Setup should happen once for each independent application. There are basically two apps in this repo: 1) the Flask UI, and 2) the daemon service. Each one should configure its service from a single config object.
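Roughly, the factory pattern proposed here could look like the following minimal sketch (the config class, config keys, and module layout are illustrative, not the actual Mycodo code):

class TestConfig(object):
    # one object holds every setting the app needs (illustrative keys)
    SQL_DATABASE_MYCODO = 'sqlite:///:memory:'
    TESTING = True


from flask import Flask

def create_app(config_object):
    """ build the Flask UI from a single config object """
    app = Flask(__name__)
    app.config.from_object(config_object)
    # blueprints (authentication, methods, ...) would be registered here
    return app


# a test or the production entry point builds its own app instance;
# the daemon would get an analogous factory of its own
app = create_app(TestConfig)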
zsole2 commented 7 years ago

I cannot comment on the programming aspects, but I'd suggest creating a basic Raspi image, upgrading/updating it, and installing all the dependencies that Mycodo will need (before numpy, the time was much less; about an hour was enough for a complete rebuild on a Pi Zero, but now it is almost 3 hours). Then you can git clone Mycodo and do all the remaining setup steps. It would require a modified setup.py, though, or doing those steps manually every time. With this, I believe the rebuild could be a much faster experience...

etiology commented 7 years ago

Yeah, that's a good idea. I committed to that very setup this weekend. I got a pretty good base created and then made an image of it. I was forced to compile pretty much everything from source due to an apt-get issue I was having, so that took forever.

Once we get more software tests built, we can have the code check itself for expected behavior. The functional/unit tests run in minutes and will tell you whether the code has basic functionality. They can also catch compatibility issues with previous releases.

If you are interested in tinkering with code then tests are a good place to start. They are short (maybe a few lines) and they really help the code base. We could create a wiki to show some examples and go through them.
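For instance, a whole endpoint test in this style can be just a few lines. This is only a hedged sketch; it borrows the testapp fixture and the "Mycodo Login" marker that show up later in this thread, and assumes an anonymous request to '/' redirects to the login page:

def test_anonymous_user_sees_login_page(testapp):
    """ an anonymous visit should end up on the login page """
    response = testapp.get('/').maybe_follow()
    assert 'Mycodo Login' in response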

kizniche commented 7 years ago

I added some logged-in endpoint tests with commit a691ed1. I wanted to ask for some advice. Here is the code:

@mock.patch('mycodo.mycodo_flask.authentication.views.login_log')  # the login_log writes to a system protected file
def test_logged_in_pages(_, testapp, user_db):
    page_data = {
        'live': '<!-- Page Name: Live -->',
        'graph': '<!-- Page Name: Graph -->'
    }
    norm_user = create_user_admin(user_db, 'name', 'secret_pass')
    login_user(testapp, norm_user.user_name, 'secret_pass')
    for address, body_msg in page_data.iteritems():
        print("Testing /{add} with logged in admin user...".format(add=address))
        assert body_msg in testapp.get('/{add}'.format(add=address)).maybe_follow()

@mock.patch('mycodo.mycodo_flask.authentication.views.login_log')  # the login_log writes to a system protected file
def test_page_live_for_logged_in_user(_, testapp, user_db):
    """ logged in user sees /live """
    norm_user = create_user_admin(user_db, 'name', 'secret_pass')
    login_user(testapp, norm_user.user_name, 'secret_pass')
    expected_body_msg = "<!-- Page Name: Live -->"
    assert expected_body_msg in testapp.get('/live').maybe_follow()

def create_user_admin(user_db, name, password):
    norm_user = UserFactory()
    norm_user.user_name = name
    norm_user.set_password(password)
    user_db.add(norm_user)
    user_db.commit()
    return norm_user

test_logged_in_pages() will test all key pages in the page_data dictionary for the value string in the body of the page. I added HTML comment strings (e.g. "<!-- Page Name: Live -->") to the appropriate pages in order to detect a known response.

My question is whether one function like this that iterates through the dictionary should be used, or whether a new function should be created for each page/test. My efficient side wants to do the former, but I'm unfamiliar with the advantages/disadvantages that separate functions may have for testing. The same thing could be done for the non-logged-in tests using the same dictionary:


def test_pages_for_non_logged_in_user(testapp):
    """ Verifies behavior of these endpoints for non-logged in users """
    page_data = {
        'live': '<!-- Page Name: Live -->',
        'graph': '<!-- Page Name: Graph -->'
    }
    for address, body_msg in page_data.iteritems():
        print("Testing /{add} with non-logged in user...".format(add=address))
        redirects_to_login_page(app=testapp,
                                endpoint='/{add}'.format(add=address))
etiology commented 7 years ago

First off I love where you're headed with this:

page_data = {
        'live': '<!-- Page Name: Live -->',
        'graph': '<!-- Page Name: Graph -->'
    }

For some reason I would rather have each test as a tuple (endpoint, thing_to_find_there). I've just seen this done with tuples more often. So a set, list, or tuple of tuples:

pages_to_test = (('live', '<!-- Page Name: Live -->'), 
                 ('graph', '<!-- Page Name: Graph -->'),
                 ('etc', '<!-- some other page... -->'))

Either is a good pattern and I don't think it has many disadvantages. It's much more readable and it keeps the extra lines down. The only thing that I've found with bulk tests is that we really want good feedback when the test fails. In this case we've done that by adding a message to the assert:

assert "Mycodo Login" in response, 'Did Not see login page.  Saw "{body}"'.format(body=response.body)

I wouldn't include the print; for tests it's just too verbose. Having a test only speak when it fails is preferred, since later you may have hundreds of tests. The whole screen would be painted green with print statements that don't mean anything.
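As an aside, pytest's built-in parametrize decorator can express the same tuple-of-tuples idea while reporting each page as its own pass/fail. A sketch only; the mock.patch/login setup from the earlier example is omitted for brevity:

import pytest

@pytest.mark.parametrize('address, expected', [
    ('live', '<!-- Page Name: Live -->'),
    ('graph', '<!-- Page Name: Graph -->'),
])
def test_page_shows_expected_marker(testapp, address, expected):
    """ each (address, expected) pair becomes its own test case """
    response = testapp.get('/{add}'.format(add=address)).maybe_follow()
    assert expected in response, 'Did not find marker on /{add}'.format(add=address)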


I do find this part confusing though:

norm_user = create_user_admin(user_db, 'name', 'secret_pass')

If norm_user is an admin then it should be admin_user for clarity.

Really nice choice with the page tests though. Like I said, it's a good pattern and I've certainly seen it used often. It's clean and it's less work.

etiology commented 7 years ago

We'll have to start adding some tests for creating new sensors and verifying different settings. I'll get more time later in the week.

kizniche commented 7 years ago

For some reason I would rather have each test as a tuple (endpoint, thing_to_find_there).

I actually started using tuples with the latest commit. Right now there's a check if it's a tuple, but I think moving them all to tuples would be better for consistency.

I wouldn't include the print; for tests it's just too verbose. Having a test only speak when it fails is preferred, since later you may have hundreds of tests.

Good idea. I'll change that later today to a message from the assert.

If norm_user is an admin then it should be admin_user for clarity.

Good point. I just copied what you had written just to get it working initially. I'll also change this later.

etiology commented 7 years ago

Right now there's a check if it's a tuple

Honestly I'd just let it fail if it has an issue instead of checking. As per the Python documentation:

EAFP

Easier to ask for forgiveness than permission. This common Python coding style assumes the existence of valid keys or attributes and catches exceptions if the assumption proves false. This clean and fast style is characterized by the presence of many try and except statements. The technique contrasts with the LBYL style common to many other languages such as C.
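In test code the difference looks roughly like this (illustrative only; page stands in for one entry of the tuple-of-tuples above):

# LBYL: check the shape before using it
if isinstance(page, tuple) and len(page) == 2:
    address, body_msg = page
else:
    raise ValueError('bad page entry: {page}'.format(page=page))

# EAFP: just unpack it; a malformed entry fails the test immediately
# with a traceback pointing at the bad tuple
address, body_msg = page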

kizniche commented 7 years ago

All cleaned up with the latest commit.

kizniche commented 7 years ago

I'm seeing some unexpected behavior when running the Flask app from the terminal. I often do this to see the web server activity, page loads, etc., and it's quite handy to turn Flask on in debug mode to get this information. Typically, when called with sudo python ~/Mycodo/mycodo/start_flask_ui.py -d it will start up and display all incoming requests and responses; however, now there's some unexpected behavior.

First (and this has been mentioned before), sqlite3.IntegrityError has begun appearing when it starts:

kiz@tango:~/Mycodo$ sudo python ./mycodo/start_flask_ui.py -d
36.0148797723
(sqlite3.IntegrityError) UNIQUE constraint failed: alembic_version.version_num [SQL: u'INSERT INTO alembic_versi
on (version_num) VALUES (?)'] [parameters: ('52aeaa701dfb',)]
(sqlite3.IntegrityError) UNIQUE constraint failed: displayorder.id [SQL: u'INSERT INTO displayorder (id, graph,
lcd, log, pid, relay, remote_host, sensor, timer) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)'] [parameters: ('0', '', No
ne, '', '', '', None, '', None)]
(sqlite3.IntegrityError) UNIQUE constraint failed: cameratimelapse.id [SQL: u'INSERT INTO cameratimelapse (id, r
elay_id, path, prefix, file_timestamp, display_last, cmd_pre_camera, cmd_post_camera, extra_parameters) VALUES (
?, ?, ?, ?, ?, ?, ?, ?, ?)'] [parameters: ('0', '', '/var/www/mycodo/camera-timelapse', 'Timelapse', 1, 1, '', '
', '--nopreview --contrast 20 --sharpness 60 --awb auto --quality 20 --vflip --hflip --width 800 --height 600')]
(sqlite3.IntegrityError) UNIQUE constraint failed: camerastill.id [SQL: u'INSERT INTO camerastill (id, hflip, vf
lip, rotation, relay_id, timestamp, display_last, cmd_pre_camera, cmd_post_camera, extra_parameters) VALUES (?,
?, ?, ?, ?, ?, ?, ?, ?, ?)'] [parameters: ('0', 0, 0, 0, '', 1, 1, '', '', '--vflip --hflip --width 800 --height
 600')]
(sqlite3.IntegrityError) UNIQUE constraint failed: camerastream.id [SQL: u'INSERT INTO camerastream (id, relay_i
d, cmd_pre_camera, cmd_post_camera, extra_parameters) VALUES (?, ?, ?, ?, ?)'] [parameters: ('0', '', '', '', '-
-contrast 20 --sharpness 60 --awb auto --quality 20 --vflip --hflip --nopreview --width 800 --height 600')]
(sqlite3.IntegrityError) UNIQUE constraint failed: misc.id [SQL: u'INSERT INTO misc (id, force_https, dismiss_no
tification, hide_alert_success, hide_alert_info, hide_alert_warning, stats_opt_out, login_message, relay_stats_v
olts, relay_stats_cost, relay_stats_currency, relay_stats_dayofmonth) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?
)'] [parameters: ('0', 1, 0, 0, 0, 0, 0, '', 120, 0.05, '$', 15)]
(sqlite3.IntegrityError) UNIQUE constraint failed: smtp.id [SQL: u'INSERT INTO smtp (id, host, ssl, port, user,
passw, email_from, hourly_max) VALUES (?, ?, ?, ?, ?, ?, ?, ?)'] [parameters: ('0', 'smtp.gmail.com', 1, 465, 'e
mail@gmail.com', 'password', 'email@gmail.com', 2)]
36.0076229565
(sqlite3.IntegrityError) UNIQUE constraint failed: alembic_version.version_num [SQL: u'INSERT INTO alembic_versi
on (version_num) VALUES (?)'] [parameters: ('52aeaa701dfb',)]
(sqlite3.IntegrityError) UNIQUE constraint failed: displayorder.id [SQL: u'INSERT INTO displayorder (id, graph,
lcd, log, pid, relay, remote_host, sensor, timer) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)'] [parameters: ('0', '', No
ne, '', '', '', None, '', None)]
(sqlite3.IntegrityError) UNIQUE constraint failed: cameratimelapse.id [SQL: u'INSERT INTO cameratimelapse (id, r
elay_id, path, prefix, file_timestamp, display_last, cmd_pre_camera, cmd_post_camera, extra_parameters) VALUES (
?, ?, ?, ?, ?, ?, ?, ?, ?)'] [parameters: ('0', '', '/var/www/mycodo/camera-timelapse', 'Timelapse', 1, 1, '', '
', '--nopreview --contrast 20 --sharpness 60 --awb auto --quality 20 --vflip --hflip --width 800 --height 600')]
(sqlite3.IntegrityError) UNIQUE constraint failed: camerastill.id [SQL: u'INSERT INTO camerastill (id, hflip, vf
lip, rotation, relay_id, timestamp, display_last, cmd_pre_camera, cmd_post_camera, extra_parameters) VALUES (?,
?, ?, ?, ?, ?, ?, ?, ?, ?)'] [parameters: ('0', 0, 0, 0, '', 1, 1, '', '', '--vflip --hflip --width 800 --height
 600')]
(sqlite3.IntegrityError) UNIQUE constraint failed: camerastream.id [SQL: u'INSERT INTO camerastream (id, relay_i
d, cmd_pre_camera, cmd_post_camera, extra_parameters) VALUES (?, ?, ?, ?, ?)'] [parameters: ('0', '', '', '', '-
-contrast 20 --sharpness 60 --awb auto --quality 20 --vflip --hflip --nopreview --width 800 --height 600')]
(sqlite3.IntegrityError) UNIQUE constraint failed: misc.id [SQL: u'INSERT INTO misc (id, force_https, dismiss_no
tification, hide_alert_success, hide_alert_info, hide_alert_warning, stats_opt_out, login_message, relay_stats_v
olts, relay_stats_cost, relay_stats_currency, relay_stats_dayofmonth) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?
)'] [parameters: ('0', 1, 0, 0, 0, 0, 0, '', 120, 0.05, '$', 15)]
(sqlite3.IntegrityError) UNIQUE constraint failed: smtp.id [SQL: u'INSERT INTO smtp (id, host, ssl, port, user,
passw, email_from, hourly_max) VALUES (?, ?, ?, ?, ?, ?, ?, ?)'] [parameters: ('0', 'smtp.gmail.com', 1, 465, 'e
mail@gmail.com', 'password', 'email@gmail.com', 2)]
WARNING:werkzeug: * Debugger is active!

Next, when I load a page from the web browser, no output appears as it once did. Stranger still, if I kill the process (ctrl+c) it will exit just fine, but if I first load the Method page in the web browser and then close it (ctrl+c), it will exit only after displaying this error:

^CERROR:sqlalchemy.pool.NullPool:Exception during reset or similar
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 636, in _finalize_fairy
    fairy._reset(pool)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 776, in _reset
    pool._dialect.do_rollback(self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 420, in do_rollback
    dbapi_connection.rollback()
ProgrammingError: SQLite objects created in a thread can only be used in that same thread.The object was created
 in thread id 1960256608 and this is thread id 1993785344
ERROR:sqlalchemy.pool.NullPool:Exception closing connection <sqlite3.Connection object at 0x743cec80>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 290, in _close_connection
    self._dialect.do_close(connection)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 426, in do_close
    dbapi_connection.close()
ProgrammingError: SQLite objects created in a thread can only be used in that same thread.The object was created
 in thread id 1960256608 and this is thread id 1993785344
ERROR:sqlalchemy.pool.NullPool:Exception during reset or similar
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 636, in _finalize_fairy
    fairy._reset(pool)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 776, in _reset
    pool._dialect.do_rollback(self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 420, in do_rollback [33/1863]
    dbapi_connection.rollback()
ProgrammingError: SQLite objects created in a thread can only be used in that same thread.The object was created
 in thread id 1960256608 and this is thread id 1993785344
ERROR:sqlalchemy.pool.NullPool:Exception closing connection <sqlite3.Connection object at 0x7454a020>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 290, in _close_connection
    self._dialect.do_close(connection)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 426, in do_close
    dbapi_connection.close()
ProgrammingError: SQLite objects created in a thread can only be used in that same thread.The object was created
 in thread id 1960256608 and this is thread id 1993785344
ERROR:sqlalchemy.pool.NullPool:Exception during reset or similar
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 636, in _finalize_fairy       [19/1863]
    fairy._reset(pool)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 776, in _reset
    pool._dialect.do_rollback(self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 420, in do_rollback
    dbapi_connection.rollback()
ProgrammingError: SQLite objects created in a thread can only be used in that same thread.The object was created
 in thread id 1960256608 and this is thread id 1993785344
ERROR:sqlalchemy.pool.NullPool:Exception closing connection <sqlite3.Connection object at 0x74545a40>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 290, in _close_connection
    self._dialect.do_close(connection)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 426, in do_close
    dbapi_connection.close()
ProgrammingError: SQLite objects created in a thread can only be used in that same thread.The object was created
 in thread id 1960256608 and this is thread id 1993785344
ERROR:sqlalchemy.pool.NullPool:Exception during reset or similar
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 636, in _finalize_fairy
    fairy._reset(pool)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 776, in _reset
    pool._dialect.do_rollback(self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 420, in do_rollback
    dbapi_connection.rollback()
ProgrammingError: SQLite objects created in a thread can only be used in that same thread.The object was created
 in thread id 1960256608 and this is thread id 1993785344
ERROR:sqlalchemy.pool.NullPool:Exception closing connection <sqlite3.Connection object at 0x74399380>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 290, in _close_connection
    self._dialect.do_close(connection)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 426, in do_close
    dbapi_connection.close()
ProgrammingError: SQLite objects created in a thread can only be used in that same thread.The object was created
 in thread id 1960256608 and this is thread id 1993785344

I haven't tested all pages, but most of them, and it appears only the Method page causes this to occur. I've had this error before from using the same SQLite object in different threads, but this error is perplexing now, both because I don't know where it's originating and because the expected Flask debug console is no longer providing output to the terminal.

I just wanted to document what I'm seeing and ask if you may know what's going on.

etiology commented 7 years ago

I think I need to run this to see what is happening. It will take me a bit of time before I can do this

kizniche commented 7 years ago

It seems there's an issue with the method-data route, in Mycodo/mycodo/mycodo_flask/methods/views.py. I noticed this because no method graphs were loading; they were just blank. Here's the Flask debug response when trying to access /method-data/Date/eqLCRAwW (one of my Date methods), which should return a list of data.

    File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 2000, in __call__
      return self.wsgi_app(environ, start_response)
    File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1991, in wsgi_app
      response = self.make_response(self.handle_exception(e))
    File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1567, in handle_exception
      reraise(exc_type, exc_value, tb)
    File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1988, in wsgi_app
      response = self.full_dispatch_request()
    File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1641, in full_dispatch_request
      rv = self.handle_user_exception(e)
    File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1544, in handle_user_exception
      reraise(exc_type, exc_value, tb)
    File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1639, in full_dispatch_request
      rv = self.dispatch_request()
    File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1625, in dispatch_request
      return self.view_functions[rule.endpoint](**req.view_args)
    File "/home/kiz/Mycodo/mycodo/mycodo_flask/methods/views.py", line 51, in method_data
      with session_scope(current_app) as new_session:
    File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
      return self.gen.next()
    File "/home/kiz/Mycodo/mycodo/databases/utils.py", line 16, in session_scope
      engine = create_engine(db_path)
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/__init__.py", line 386, in create_engine
      return strategy.create(*args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py", line 51, in create
      entrypoint = u._get_entrypoint()
    File "/usr/local/lib/python2.7/dist-packages/werkzeug/local.py", line 343, in __getattr__
      return getattr(self._get_current_object(), name)

    AttributeError: 'Flask' object has no attribute '_get_entrypoint'
etiology commented 7 years ago

My money is on this part:

    File "/home/kiz/Mycodo/mycodo/mycodo_flask/methods/views.py", line 51, in method_data
      with session_scope(current_app) as new_session:
kizniche commented 7 years ago

Yeah, I was going to just paste that, but figured I'd put the whole shebang here.

etiology commented 7 years ago

There is no database path; instead it's just the current app reference.
This Method feature is new to me. How does it work and what is it doing?

etiology commented 7 years ago

The part that causes the failure is here

Is there an example that we can write a test for? What does the call look like? I find the keyword portion of this route to be confusing. I'm sure we just need to change current_app to current_app.SQL_DATABASE_MYCODO since SQL_DATABASE_MYCODO is where the Methods model is located

kizniche commented 7 years ago

Methods allow you to change a PID setpoint over time (reflow oven, thermal cycler, etc.). You can set a setpoint at specific times/dates, after specific durations, along a sine wave every day, or along a Bezier curve every day. The /method-data route will pull the data from the database (for instance, [(time1, setpoint1), (time2, setpoint2), ...]) and present it for Highcharts to render a nice graph of the changing setpoint method.

kizniche commented 7 years ago

I found the erroneous code; I was just wondering where current_app came from and the best approach to fixing it.

kizniche commented 7 years ago

Ah, it needs to be current_app.config['MYCODO_DB_PATH']

etiology commented 7 years ago

current_app is the Flask app. It has the Flask config inside of it. A blueprint is neutral until the app is constructed. After the application factory builds the Flask app, the blueprints have access to current_app.config['SQL_DATABASE_MYCODO'] (which is the correct way to access this).

I'm sure we just need to change current_app to current_app.SQL_DATABASE_MYCODO since SQL_DATABASE_MYCODO is where the Methods model is located

This is incorrect because we need to get to the config attribute of current_app.
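Concretely, the fix in methods/views.py would be along these lines (a sketch; the import path is taken from the traceback above, and this thread mentions both MYCODO_DB_PATH and SQL_DATABASE_MYCODO as the config key, so use whichever holds the methods database URI):

from flask import current_app
from mycodo.databases.utils import session_scope

# before (fails: create_engine() receives the Flask app object itself):
#     with session_scope(current_app) as new_session:

# after: pass the database URI stored in the app config
with session_scope(current_app.config['MYCODO_DB_PATH']) as new_session:
    pass  # query the methods table here as before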

etiology commented 7 years ago

I just wish we had caught this in a test.

kizniche commented 7 years ago

I had been thinking about how to test that specific route (/method-data) yesterday while I was trying to cover as many endpoints as I could, and realized the best approach would be to add data to the database before the test, but I didn't know enough about the system to jump into that yet.
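For illustration, such a test might look something like this rough sketch (the method_db fixture, the Method model and its fields, and the id are all hypothetical, and the Method import is omitted):

def test_method_data_returns_points(testapp, method_db):
    """ seed one method, then ask the /method-data route for its data """
    method = Method(method_id='testid12', method_type='Date')  # hypothetical fields
    method_db.add(method)
    method_db.commit()
    response = testapp.get('/method-data/Date/testid12').maybe_follow()
    assert response.status_int == 200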

etiology commented 7 years ago

No worries. I was considering the same thing. I'm sure there are several ways to approach it.

Ultimately this all changes when the database tables start creating relationships. The method table should have a relationship to a _methodtype table. So method has-a method_type.

At that point I'm assuming the different databases are resolved into a single database, which removes the need to pass around the database URI. Currently we have to store and pass around the URI to create new sessions to the different databases. Things will streamline when we can hash out the other relationships the database needs.
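The has-a relationship described above might be modeled along these lines (illustrative SQLAlchemy, not the actual Mycodo models):

from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class MethodType(Base):
    __tablename__ = 'method_type'
    id = Column(Integer, primary_key=True)
    name = Column(String)  # e.g. 'Date', 'Duration', 'Sine Wave', 'Bezier'

class Method(Base):
    __tablename__ = 'method'
    id = Column(Integer, primary_key=True)
    method_type_id = Column(Integer, ForeignKey('method_type.id'))
    method_type = relationship('MethodType')  # method has-a method_type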

kizniche commented 7 years ago

With commit 8172f62 to the mycodo_dev branch, I added specific tests for the PT1000 sensor.

Now that we have specific tests for the AM2315, PT1000, and Raspberry Pi sensors, I wanted to see if I could create one test function that tests all sensors, instead of repeating a function for each sensor. I added an info() function to the PT1000 and AM2315 classes that returns information about each sensor. For instance, this is for the AM2315 sensor:

    @staticmethod
    def info():
        conditions_measured = [
            ("Dew Point", "dew_point", "float", "0.00"),
            ("Humidity", "humidity", "float", "0.00"),
            ("Temperature", "temperature", "float", "0.00")
        ]
        return conditions_measured

Then I created a test function that iterates through each sensor class and tests it based on this information:

def test_sensor_class_iterates_using_in():
    """ Verify that a class object can use the 'in' operator """
    for each_class in sensor_classes:
        with mock.patch('{mod}.{name}.get_measurement'.format(mod=each_class.__module__, name=each_class.__class__.__name__)) as mock_measure:
            sensor_conditions = each_class.info()

            # Create mock_measure.side_effect
            list_cond = []
            number = 20
            number_mod = 5
            for _ in range(4):
                tuple_conditions = []
                for _ in sensor_conditions:
                    tuple_conditions.append(number)
                    number += number_mod
                if len(tuple_conditions) > 1:
                    tuple_conditions = tuple(tuple_conditions)
                else:
                    tuple_conditions = tuple_conditions[0]
                list_cond.append(tuple_conditions)
            mock_measure.side_effect = list_cond

            # Build expected results list
            expected_result_list = []
            for index in range(4):
                dict_build = {}
                index_cond = 0
                if len(sensor_conditions) == 1:
                    if sensor_conditions[0][2] == 'float':
                        dict_build[sensor_conditions[0][1]] = float(list_cond[index])
                    else:
                        dict_build[sensor_conditions[0][1]] = list_cond[index]
                else:
                    for each_cond in sensor_conditions:
                        if each_cond[2] == 'float':
                            dict_build[each_cond[1]] = float(list_cond[index][index_cond])
                        else:
                            dict_build[each_cond[1]] = list_cond[index][index_cond]
                        index_cond += 1
                expected_result_list.append(dict_build)

            assert expected_result_list == [cond for cond in each_class]

I haven't tried to make this more efficient. It was merely a proof of concept. I did get it to work, however.

Let me know what you think and if this is a good direction to go.

kizniche commented 7 years ago

Now that Flask-SQLAlchemy is in the dev-5.0 branch, the tests are broken. I'm going to attempt to get them working again, but I may need your expertise if I'm unable to figure it out. I'll work on it and return with an update soon.

etiology commented 7 years ago

This weekend may be a good time for it. Keep me posted

kizniche commented 7 years ago

I just pushed my edits thus far. I got the database tests (and creation/verification during flask startup) working, but I haven't been able to figure out how to get the endpoint tests to work. It would seem I'm not understanding some crucial piece of this puzzle. I'll come back to it a little later after I've eaten and found some coffee.

Edit: It appears it's not finding the user table.

kizniche commented 7 years ago

It seems the issue is with finding the users table. I'm going in circles and scratching my head trying to figure out what exactly is wrong. Here's the error from the endpoint tests:

testapp_no_admin_user = <webtest.app.TestApp object at 0x74bd3eb0>

    def test_sees_admin_creation_form(testapp_no_admin_user):
        """ No Admin user exists: user sees the admin creation page """
        expected_body_msg = "<!-- Route: /create_admin -->"
>       assert expected_body_msg in testapp_no_admin_user.get('/').maybe_follow()

test_endpoints.py:76: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/local/lib/python2.7/dist-packages/webtest/app.py:327: in get
    expect_errors=expect_errors)
/usr/local/lib/python2.7/dist-packages/webtest/app.py:610: in do_request
    res = req.get_response(app, catch_exc_info=True)
/usr/local/lib/python2.7/dist-packages/webob/request.py:1295: in send
    application, catch_exc_info=True)
/usr/local/lib/python2.7/dist-packages/webob/request.py:1263: in call_application
    app_iter = application(self.environ, start_response)
/usr/local/lib/python2.7/dist-packages/webtest/lint.py:198: in lint_app
    iterator = application(environ, start_response_wrapper)
/usr/local/lib/python2.7/dist-packages/flask/app.py:1994: in __call__
    return self.wsgi_app(environ, start_response)
/usr/local/lib/python2.7/dist-packages/flask/app.py:1985: in wsgi_app
    response = self.handle_exception(e)
/usr/local/lib/python2.7/dist-packages/flask/app.py:1540: in handle_exception
    reraise(exc_type, exc_value, tb)
/usr/local/lib/python2.7/dist-packages/flask/app.py:1982: in wsgi_app
    response = self.full_dispatch_request()
/usr/local/lib/python2.7/dist-packages/flask/app.py:1614: in full_dispatch_request
    rv = self.handle_user_exception(e)
/usr/local/lib/python2.7/dist-packages/flask/app.py:1517: in handle_user_exception
    reraise(exc_type, exc_value, tb)
/usr/local/lib/python2.7/dist-packages/flask/app.py:1610: in full_dispatch_request
    rv = self.preprocess_request()
/usr/local/lib/python2.7/dist-packages/flask/app.py:1831: in preprocess_request
    rv = func()
../../../mycodo_flask/general_routes.py:87: in before_blueprint_request
    if not admin_exists():
../../../mycodo_flask/authentication_routes.py:218: in admin_exists
    return Users.query.filter(Users.user_restriction == 'admin').count()
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py:2980: in count
    return self.from_self(col).scalar()
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py:2749: in scalar
    ret = self.one()
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py:2718: in one
    ret = list(self)
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py:2761: in __iter__
    return self._execute_and_instances(context)
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py:2776: in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py:914: in execute
    return meth(self, multiparams, params)
/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py:323: in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py:1010: in _execute_clauseelement
    compiled_sql, distilled_params
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py:1146: in _execute_context
    context)
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py:1341: in _handle_dbapi_exception
    exc_info
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py:202: in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py:1139: in _execute_context
    context)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <sqlalchemy.dialects.sqlite.pysqlite.SQLiteDialect_pysqlite object at 0x74b2dd50>
cursor = <sqlite3.Cursor object at 0x745b5ba0>
statement = 'SELECT count(*) AS count_1 
FROM (SELECT users.user_id AS users_user_id, users.user_name AS users_user_name, users.us... users_user_restriction, users.user_theme AS users_user_theme 
FROM users 
WHERE users.user_restriction = ?) AS anon_1'
parameters = ('admin',)
context = <sqlalchemy.dialects.sqlite.base.SQLiteExecutionContext object at 0x74b2dc70>

    def do_execute(self, cursor, statement, parameters, context=None):
>       cursor.execute(statement, parameters)
E       OperationalError: (sqlite3.OperationalError) no such table: users [SQL: u'SELECT count(*) AS count_1 \nFROM (SELECT users.user_id AS users_user_id, users.user_name AS users_user_name, users.user_password_hash AS users_user_password_hash, users.user_email AS users_user_email, users.user_restriction AS users_user_restriction, users.user_theme AS users_user_theme \nFROM users \nWHERE users.user_restriction = ?) AS anon_1'] [parameters: ('admin',)]

/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py:450: OperationalError
kizniche commented 7 years ago

I deleted my last message... I realized it wasn't a database issue, but a permissions issue. I changed ownership of the database to the wsgi user and all is good on the apache2 front.

kizniche commented 6 years ago

It appears that the testing environment is in a bit of disarray. It's been broken for a few months now, and I've made several attempts to fix it, without much luck. There are a few issues that came up that I could pinpoint but could not resolve.

  1. The Travis-CI environment changed
  2. The GPIO library made it impossible to compile on a non-Raspberry Pi
  3. PyCharm stopped being able to run the pytest suite for me (it returns "Empty test suite")
  4. Running the test commands from the wiki on the Pi returns nothing, even with the verbose arg (-v)
  5. I'm not utilizing mock appropriately

It does appear that Travis-CI runs part of the tests successfully (the Flask portion), but I had to disable the sensor tests to get everything to complete successfully.

@etiology, If you have any spare time to take a look, I'd appreciate the help. I'm just not understanding the issue as well as I need to in order to fix this at the moment.

etiology commented 6 years ago

I'll take a look at it. I think we should delay the import of hardware-dependent libraries until the methods that use them are called.

This then makes it easy to test by deriving a subclass from the sensor class that overrides its parent's IO-dependent method. This effectively means you can run tests on your hardware classes while simulating their IO behavior.

"""" old style sensor with import at module level """
from special_hardware import some_io_lib

class PlatformDependentSensor(object):
    """ Example sensor """
    def read_sensor(self):
        """ take a reading from a device """
        reading = some_io_lib.read()
        return reading

New suggestion:

"""" new style sensor with delayed import """

class PlatformInDependentSensor(object):
    """ Example sensor """
    def read_sensor(self):
        """ take a reading from a device """
        from special_hardware import some_io_lib

        reading = some_io_lib.read()
        return reading

Now a mock test sensor used in testing:


class MockPlatformInDependentSensor(PlatformInDependentSensor):
    """ Example sensor """
    def read_sensor(self):
        """ simulate a reading from a device """
        fake_reading = "0.746"
        return fake_reading
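A test can then exercise the class on any machine by using that subclass, e.g. (a minimal sketch building on the classes above):

def test_read_sensor_returns_fake_reading():
    """ uses the mock subclass, so no hardware library is ever imported """
    sensor = MockPlatformInDependentSensor()
    assert sensor.read_sensor() == "0.746"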
etiology commented 6 years ago

Right now we have to mock-patch some really low-level stuff to keep the interpreter from raising import errors at runtime. If the interpreter never sees the imports, then tests can run on any platform or in a Docker container. Actually, I think you could spin up the whole app with mock sensors and have a live but fake demo of the app.

kizniche commented 6 years ago

Thanks for the quick response. That all sounds great.

If the interpreter never sees the imports then tests can run on any platform or in a docker container.

Will this be more involved than just moving imports? That is, will things like sensor or logger initializations be affected, or other changes to the sensor class be required? I'm afraid my eagerness to jump in and start testing this is outweighed by my lack of experience with testing.

etiology commented 6 years ago

If I get some time between work I'll take a look and see what it might take. It depends mostly on how well these platform-dependent classes have been encapsulated. If they aren't encapsulated, then this would require some broad changes, which would be better anyway.

Ideally, one would want the majority of the application code to deal with your custom classes instead of continually having to import platform-dependent code like GPIO libs or I2C interfaces everywhere. If you have one class, or family of classes (like sensors), that takes full responsibility for interacting with the hardware, and everything else just talks to those classes, then this shouldn't be a huge change.

An example: a web interface shouldn't know how the sensors are implemented. That's not something web interfaces should know; they deal with displaying information or making changes to configurations. The Flask endpoints should be able to talk to your sensor classes directly without knowing that sensor 'A' uses I2C and sensor 'B' uses GPIO pins.

It's like SQLAlchemy: we only talk to the models, not to the database implementation directly. That's why we can switch between databases without changing our code. Our code is 'decoupled' from the implementation of SQLite, MySQL, PostgreSQL, or Oracle, and changing the database doesn't affect how we use the data stored in the models.
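As a sketch of that decoupling (the route, the registry, and the sensor wiring here are illustrative; the mock class is the one from the earlier comment):

from flask import Blueprint, jsonify

blueprint = Blueprint('sensors', __name__)

# hypothetical registry of already-constructed sensor objects
SENSORS = {'example': MockPlatformInDependentSensor()}

@blueprint.route('/sensor/<sensor_id>')
def sensor_reading(sensor_id):
    # the view only calls read_sensor(); whether the class behind it
    # uses I2C, GPIO, or a mock is invisible from here
    return jsonify(value=SENSORS[sensor_id].read_sensor())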

It might take me a bit of time to review. If it's needed we can take this exploration offline once I look into the tests again. That way we won't spam everyone

kizniche commented 6 years ago

Thanks to your suggestion, I was able to get all previous tests to work with every Input. Over 2,000 additions later my head is spinning, but it's done and everything appears to be working. All sensor/input imports are now behind a test:

if not testing:
    import x, y, z
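In context, that guard sits inside an input class's constructor, something like this sketch (the class name and the hardware library chosen here are illustrative):

class ExampleInput(object):
    """ illustrative input; the hardware import is skipped under test """
    def __init__(self, testing=False):
        self.testing = testing
        if not testing:
            # only reached on real hardware, so the test suite can run anywhere
            import RPi.GPIO as GPIO
            self.gpio = GPIO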