e-mission / e-mission-docs

Repository for docs and issues. If you need help, please file an issue here. Public conversations are better for open source projects than private email.
https://e-mission.readthedocs.io/en/latest
BSD 3-Clause "New" or "Revised" License

Fleet - public dashboard design #1051

Open Abby-Wheelis opened 9 months ago

Abby-Wheelis commented 9 months ago

In the fleet version of OpenPATH, the metrics we would like to display on the dashboard are different from what is displayed when we have simple mode and purpose labels:

  1. There are no purpose labels; we only have survey data
  2. There are no mode labels, but the detected mode is the "confirmed mode" thanks to the bluetooth integration
  3. We need to show survey data on the dashboard - this is the new feature that will need the most design; focus on making this a general feature, NOT a one-off for this application

To access survey data in the public dashboard we will need to:

Challenges:

In my mind, the trick here is going to be a data-driven approach, where we are able to show charts based solely on the data we have (and probably the survey part of the config) and allow for as many different survey questions as we can. Can we set it up so that all of the survey questions that are multiple choice in nature are able to be displayed on the dashboard?

I see the fleet adaptation taking two main steps:

  1. adapt existing "confirmed mode" visualizations to rely on the bluetooth-provided modes and eliminate purpose visualizations
  2. add visualizations of survey data to the dashboard
Abby-Wheelis commented 9 months ago

should we limit what is shown on the dashboard to only include questions with at least k responses? The data is not spatial, but there could be cases in which the surveys collect detailed information...

deferring this concern since we will be reviewing survey questions before they are merged - process TBD

we can't know what questions are in the survey entries

pandas JSON normalize (what we use in the admin dashboard) can help us!
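A minimal sketch of how that could surface the question columns directly from nested responses (the field names below are made up, not the actual OpenPATH keys):

```python
import pandas as pd

# Hypothetical nested survey responses; the real keys in the user_input
# documents may differ.
responses = [
    {"data": {"jsonDocResponse": {"commute_mode": "bike", "parking": "street"}}},
    {"data": {"jsonDocResponse": {"commute_mode": "car"}}},
]

# json_normalize flattens the nested dicts into one column per question,
# so the set of questions can be discovered directly from the data.
survey_df = pd.json_normalize(responses)
print(survey_df.columns.tolist())
# ['data.jsonDocResponse.commute_mode', 'data.jsonDocResponse.parking']
```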

Abby-Wheelis commented 8 months ago

Starting work on this!

I will integrate with the conversion to bar charts when that finishes the review cycle, but for now am relying on pie chart code. Progress so far:

Example from washingtoncommons study: image

Abby-Wheelis commented 8 months ago

for "select multiple" questions, this is more difficult, as individual values are made up of multiple options, still need to address this still need to pull in file from config and github, right now I have the file locally

I have resolved both of these issues now: I loop over all of the labels that will go in the chart to translate them, accounting for multiples (long lists are still awkward for display), and I am pulling the files off of GitHub instead of using the local file, based on the survey section of the config.

Still need to see if this will work with multiple trip-level surveys. It does currently handle surveys with "missing" responses fine, so I think this covers the case of some questions not always being required.

Still need to figure out how this will work from the frontend - if we don't have a defined list of chart names to pull from, how can we display? Here's what I'm thinking right now:

will also need "quality text" here, just remembered that part!! Something like "Based on 50 trips with survey responses from 5 users, of 450 trips from 25 users" -- this could be tricky, since we probably want to count "num of trips with responses" and "total trips" as the number of trips where this question was answered out of the trips where this question was prompted ... may start with "Based on 50 responses from 5 users" and build up from there

shankari commented 8 months ago

Two things, for which we have examples in various other pieces of code:

  1. Group by the type of answered survey. Each survey has an ID, and the ID shows up in the survey response. So you can basically do something conceptually like:
survey_id = df.user_input.apply(lambda sr: sr.substring...)
df.groupby(survey_id)

So this will allow you to have, say, a chart of the challenges of using Fermata equipment by pulling in only the survey responses for EV car usage parked at a Fermata location, because that question won't even exist in the other surveys.

Asmita does something similar with the demographic surveys in the admin dashboard because we have had multiple iterations of the survey so far, at least on staging.

  2. Once you have grouped everything, you want to extract the relevant responses. You can do that in the way that we extract certain fields from the surveys for trip confirm and time use to display in the button. So the config for the survey will include something like
'public_dashboard_chart': "...fermata_response_col_name"

We also do this in the admin dashboard to filter out excluded columns

So we can start with just displaying all columns, but if there are some questions that we don't want to make public, we can filter them this way.
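As a rough sketch (the column names and exclusion list here are hypothetical placeholders, not the real config keys), the filtering could look something like:

```python
import pandas as pd

# Hypothetical flattened survey responses and a hypothetical exclusion list;
# in practice the list could live in the survey section of the config.
survey_df = pd.DataFrame({
    "commute_mode": ["bike", "car"],
    "free_text_comments": ["...", "..."],
})
EXCLUDED_COLS = ["free_text_comments", "home_address"]

public_df = survey_df.drop(columns=[c for c in EXCLUDED_COLS if c in survey_df.columns])
print(public_df.columns.tolist())  # ['commute_mode']
```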

shankari commented 8 months ago

Let's say that we have 100 trips: 40 are gas cars, 30 are EVs not at Fermata, and 30 are EVs at Fermata. And they are all labeled. Without grouping, the "fermata challenges" chart will have 70% N/A. With grouping, you will only consider the 30% of the trips that have a relevant response and then display the breakdown.

Abby-Wheelis commented 8 months ago

Ok, I think I might still be confused. I have introduced a survey_name column to my dataframe of trips, so I would be able to count how many responses we have for each survey type, but I still don't know how many trips were presented with that survey. So I have enough information to say "based on 30 responses from 10 users" but not the "out of 50 total trips from 12 users" part. Am I missing something?

This is the code I added:

survey_trips = survey_trips.reset_index()
survey_trips['survey_name'] = survey_trips.user_input.apply(lambda sr: sr['trip_user_input']['data']['name'])

I could now easily break the dataframe into one for each survey_name, but I still wouldn't know how many trips would have that survey assigned to them, just how many responses there were
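For reference, a sketch of the per-survey counting this enables (the frame below is a hypothetical stand-in for survey_trips, and the user id column name is assumed):

```python
import pandas as pd

# Hypothetical stand-in for the survey_trips frame above.
survey_trips = pd.DataFrame({
    "survey_name": ["ev_return", "ev_return", "ev_roaming"],
    "user_id": ["u1", "u2", "u1"],
})

# Count responses and distinct users per survey type.
per_survey = survey_trips.groupby("survey_name").agg(
    responses=("survey_name", "size"),
    users=("user_id", "nunique"),
)
print(per_survey)
```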

shankari commented 8 months ago

that is an excellent point. I guess we will need to determine how many trips would have displayed each type of survey. If the eval syntax that we use in the config is sufficiently general, we can use it as an input to the df.query method and find the number of trips that the survey was applicable to. If it is not general enough, maybe we have two strings (one for javascript and one for python) or we have one string that we can convert using the tools in #1058
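As a tiny sketch (hypothetical dataframe and condition string), the df.query approach would look something like:

```python
import pandas as pd

# Hypothetical trips dataframe; the real expanded_ct has many more columns.
trips = pd.DataFrame({
    "primary_mode": ["CAR", "BICYCLING", "CAR"],
    "distance_miles": [5.0, 2.0, 0.4],
})

# If the survey's display condition can be written in pandas query syntax,
# the denominator is simply the size of the filtered frame.
condition = "primary_mode == 'CAR' and distance_miles > 1"  # hypothetical condition
print(len(trips.query(condition)))  # 1
```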

Abby-Wheelis commented 8 months ago

I think I will stick with some placeholder text for now then, and figure out a more thorough solution as we know more about the syntax that will end up in the config.

Starting a draft PR so the code is public. I think my next goal is connecting the notebook to the frontend - then it will be a roughly working prototype that can be refined and tested with more survey configurations from there.

Abby-Wheelis commented 8 months ago

We're going to need plugins in order to read the spreadsheet files - at least for the notebook to translate the data, but if we name the files and create headers based on the survey questions in the frontend, we will need to do something similar there.

I was testing with openpyxl in the notebook; this worked well, but I had to install it with conda, so we'll need to add it to the production environment as well. Based on the release notes, openpyxl has been around since 2013 and was last updated in March of 2023. The test coverage icon on the docs indicates 95% test coverage.
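For reference, this is roughly the kind of read I was testing (the file name and column layout are assumptions based on the usual XLSForm "survey" sheet):

```python
import openpyxl

# Open the survey spreadsheet and walk the "survey" sheet, which in an
# XLSForm holds one row per question with type/name/label columns.
wb = openpyxl.load_workbook("dfc-ev-return-trip.xlsx", read_only=True)
ws = wb["survey"]

for row in ws.iter_rows(min_row=2, values_only=True):  # skip the header row
    q_type, name, label = row[0], row[1], row[2]
    print(q_type, name, label)
```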

For the frontend, xlsx was the first thing I found, but it sounds like it has migrated... so I am still looking for a good option for reading the file on the frontend. Once we can read the file, we can extract the survey questions in both the raw_data_format and the Human Readable Format.

shankari commented 8 months ago

Why do we have to read the spreadsheet? The spreadsheet is the input from our partners, but then we convert it to xml and to json. The spec version that we add in the dynamic config is the json (or maybe it can now be xml, per @JGreenlee). json and/or xml should both be much better supported than xls on both python and javascript.

Abby-Wheelis commented 8 months ago

We won't necessarily have json anymore, but we will have xml. I did not realize that xml would be easier to read than the spreadsheet; I will look into using that to get the raw/readable pairs instead!
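A first sketch of what reading the raw/readable pairs out of the xml might look like - this only covers inline labels (real XForms resolve labels through itext translations), and the file path is just an example:

```python
import xml.etree.ElementTree as ET

ns = {"xf": "http://www.w3.org/2002/xforms"}
tree = ET.parse("dfc-ev-return-trip-v0.xml")

# For every choice item, pair the raw stored value with its readable label.
for item in tree.iter("{http://www.w3.org/2002/xforms}item"):
    value = item.find("xf:value", ns)
    label = item.find("xf:label", ns)
    if value is not None and label is not None:
        print(value.text, "->", label.text)
```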

Abby-Wheelis commented 8 months ago

I have now been able to read the questions in from the xml version of the survey in the notebook - still working on the responses.

Encountering a few things to keep in mind:

JGreenlee commented 8 months ago

I have now been able to read the questions in from the xml version of the survey in the notebook - still working on the responses.

Encountering a few things to keep in mind:

  • some questions come and go from the surveys; the only questions we can display "translated" (and not with raw questions and responses) are those still in the surveys, so I think we should only display those that are still in the survey
  • for the likert scale questions, the "-" space-saving display means that these would probably show up on the chart as, for example, "12% -"; I can come up with a workaround for this, but am trying to keep the code general, so still thinking of the best solution

On the 5-point scale Likert questions, the labels will be "Disagree", "-", "Neutral", "-", "Agree", but the underlying values corresponding to those options will simply be 1, 2, 3, 4, 5.

Abby-Wheelis commented 8 months ago

On the 5-point scale Likert questions, the labels will be "Disagree", "-", "Neutral", "-", "Agree", but the underlying values corresponding to those options will simply be 1, 2, 3, 4, 5.

Good to know! I'm trying to keep this handling as generic as possible, so my default would be to display the labels for every answer, to avoid displaying things like 'pick_up_drop_off_accompany_someone' (which is what would happen if I showed the values for every answer) - but maybe I can detect when a label is "nonsensical" and show the value instead?

JGreenlee commented 8 months ago

Maybe use labels most of the time, but have an exception for questions where appearance="likert" https://github.com/JGreenlee/nrel-openpath-deploy-configs/blob/a5eaa17a649c4ecfc2ce336c3c1543643c6435da/survey_resources/dfc-fermata/dfc-ev-return-trip-v0.xml#L58
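Something like this sketch could pick those questions out of the parsed xml (namespace handling simplified; the likert set could then fall back to raw values):

```python
import xml.etree.ElementTree as ET

tree = ET.parse("dfc-ev-return-trip-v0.xml")

# Collect the refs of questions whose body element declares appearance="likert".
likert_questions = {
    elem.get("ref")              # e.g. "/data/group_xyz/ev_rating"
    for elem in tree.iter()
    if elem.get("appearance") == "likert"
}
```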

Abby-Wheelis commented 8 months ago

I have now connected the notebook to the frontend, so the survey question charts can now be displayed when the page loads 🎉

Screenshot 2024-03-22 at 11 43 07 AM

There is still lots of cleanup and polishing to go before this is ready, however:

Abby-Wheelis commented 8 months ago

handle "Other please specify" questions ... omit them? Get them paired to their parent question?

Currently, these questions will be shown right after their predecessor in the list, which semi-implies their definition. However, it could be clearer - I could check for the name "Other - please specify" and then prepend the previous question to the name, or I could just check for "Other" to be more general - but that seems less foolproof. Maybe we check for the whole string and let survey designers know about this feature? Screenshot 2024-03-25 at 4 55 12 PM
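A sketch of the "prepend the previous question" option (the question list is hypothetical):

```python
# When a question's label is exactly "Other - please specify", show it as
# "<previous question>: Other - please specify".
questions = ["How did you travel?", "Other - please specify"]

display_names = []
for i, q in enumerate(questions):
    if q == "Other - please specify" and i > 0:
        display_names.append(f"{questions[i - 1]}: {q}")
    else:
        display_names.append(q)
print(display_names)  # ['How did you travel?', 'How did you travel?: Other - please specify']
```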

Abby-Wheelis commented 8 months ago

I now have the "missing plots" and alt text working for each of the charts. A slight pivot I needed to make this morning was switching from generating a chart for each column in the dataset to generating a chart for each question in the survey file -- this means older survey questions will not be addressed, but also that all questions sought by the frontend (the ones in the file) will have a chart or debug df and alt text. Examples of washingtoncommons and dfc-fermata as they stand now: Screenshot 2024-03-26 at 11 33 30 AM Screenshot 2024-03-26 at 11 33 46 AM

Note that some of the longer questions and labels are causing squashing - I'm not too worried about that right now because we are going to be moving to the stacked bar charts when that PR gets merged.

Abby-Wheelis commented 7 months ago

I have made some progress on the "quality text" for these charts thanks to some conversations in our meeting yesterday:

@JGreenlee pointed out that I need the composite and not just the confirmed trip in order to get the sections, so I added a snippet to scaffolding to pull that trip type. I also leveraged the eval that @shankari suggested yesterday, which allowed me to define the variables accessed in that scope. The eval strings that I used for this test are different from the ones in the config in two ways:

  1. I had to replace dictionary.access notation with dictionary['access'] notation to work well with Python's syntax
  2. I replaced the operator && with "and" and the operator ! with "not", but left != alone

I think it's possible to convert the dictionary into a format compatible with dot notation, but before continuing down that road I think we should decide if we want to continue with retrofitting the strings written for javascript, or just write one for python. Dot notation and replacing && should be fairly foolproof, but replacing ! while leaving != might be a little trickier.

Remaining tasks:

JGreenlee commented 7 months ago

I think we should decide if we want to continue with retrofitting the strings written for javascript, or just write one for python.

What about supporting both? The Python scripts could check for a showsIfPy condition first, then fall back to showsIf if showsIfPy doesn't exist. That would give us flexibility to do it with one condition when possible, but define 2 conditions if we have to.
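Roughly, on the Python side (the config dict below is a hypothetical slice of the dynamic config):

```python
# Prefer a python-specific condition when the config defines one, otherwise
# fall back to the javascript showsIf string (converted as discussed below).
survey_info = {
    "showsIf": "pointIsWithinBounds(end_loc['coordinates'], bounds)",
    # "showsIfPy": "point_is_within_bounds(end_loc['coordinates'], bounds)",
}

condition = survey_info.get("showsIfPy", survey_info.get("showsIf"))
```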

JGreenlee commented 7 months ago

I had to replace dictionary.access notation with dictionary['access'] notation to work well with Python's syntax

We can update the config to use bracket notation since it's the common denominator between JS objects and Python dicts. Let me create a PR for that real quick.

replacing ! and leaving != might be a little tricker.

It should just be a matter of finding the right regex expression

JGreenlee commented 7 months ago

I think this would do it:

import re

expression = "sections[0]['sensed_mode_str'] != 'CAR' && !pointIsWithinBounds(end_loc['coordinates'], [[-105.153, 39.745], [-105.150, 39.743]])"
expression = expression.replace('&&', 'and')
expression = expression.replace('||', 'or')
expression = re.sub(r"!(?!=)", "not ", expression)

results in:

sections[0]['sensed_mode_str'] != 'CAR' and not pointIsWithinBounds(end_loc['coordinates'], [[-105.153, 39.745], [-105.150, 39.743]])

the regex expression !(?!=) matches any ! that isn't followed by =

Abby-Wheelis commented 7 months ago

I think this would do it:

That worked perfectly! Thank you!

I now have more accurate quality_text for each of the charts. For the current conditions with the data snapshot I have, it is showing 0 trips for ev return, 73 for ev roaming, and 169 for gas car - this seems about right for the snapshot. So the denominator of the quality text is now accurate on a per-survey level, but won't be 100% accurate on a per-question level. This is because some questions could show conditionally, so there could be cases where there were 15 instances that met the survey conditions, but only 5 that met the conditions for a specific question based on previous answers.

Since the conditions will be in the xml, it would be possible to determine the denominator dataframe for quality text on a per-question basis. For example, if the condition is that question 3 had to be answered "yes", I could pull all rows from the trip dataframe where the answer to question 3 is "yes". The technique for this would probably be similar to what I'm doing now for the other eval strings, but I'd have to piece these together dynamically based on the xml. I'm going to set this aside and make sure the base functionality is polished and working with both datasets, then I can come back to ironing out more details in the quality text.
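For reference, the kind of per-question filtering I mean, with hypothetical column names:

```python
import pandas as pd

# Hypothetical flattened survey responses; if a question is only shown when
# question 3 was answered "yes", its denominator is the set of trips with
# that answer.
survey_responses = pd.DataFrame({
    "question_3": ["yes", "no", "yes"],
    "question_4": ["fast", None, "slow"],
})

denominator_df = survey_responses.query("question_3 == 'yes'")
print(len(denominator_df))  # 2
```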

Abby-Wheelis commented 7 months ago

So the denominator of the quality text is now accurate on a per-survey level, but won't be 100% accurate to a per-question level.

I have made a first attempt at this, and was somewhat successful in using the conditions scraped from the xml. However, when I talked with @JGreenlee this morning to ask about some of the other xml conditions, he pointed out that since all questions are usually required, if a conditionally shown question was shown, it was answered. I mapped this out a little bit and it would mean that the "denominator" of the quality text would work something like:

  1. if a question has no conditions, its denominator is all trips that meet the condition to show the survey it belongs to
  2. if a question has a condition, its denominator would be all the times that question was answered (if it was shown, it had to be answered!), meaning that all conditional questions would be 100% responded to

While it is true that a required conditional question would be answered 100% of the times that it is shown, I'm not convinced that this is useful information. If we were in a place to predict survey responses, then maybe we'd know how many trips most likely should have seen this survey question, but that's not something we can do right now.

So, question for the group, is it better to have:

A. "too broad" of a denominator, just showing how many trips the question could have been relevant to, knowing that some of those trips may not have seen the question even if they responded to the survey given the conditional nature

--OR--

B. "narrow" denominator, where conditional questions seem to be answered 100% of the time, but only because they are only shown once someone is already answering the survey and will be required to if it is shown

@iantei @shankari - what do you think?

Abby-Wheelis commented 7 months ago

I just talked to @iantei and we're both leaning toward option A from above, since B is not really useful information. We also discussed that finding a way to note that a question is conditional (or the precise condition) in the quality text would be useful. If this can be done easily, I could do it now; otherwise it might be a good candidate for future work.

Abby-Wheelis commented 7 months ago

Did a bit of initial testing with the prod .yml configuration today, and things look promising! Sample output below:

Washington Commons default charts - only one survey, so consistent denominator: Screenshot 2024-04-09 at 11 33 18 AM

DFC-Fermata default charts - multiple surveys, so multiple denominators: Screenshot 2024-04-09 at 11 33 37 AM

Both datasets are also working with all of the "sensed charts", but most of these are subject to change once I merge in the stacked bar chart changes. There was one error that I encountered with the "sensed trips under 80%" chart, since there was no data (all data is test data, and include_test_users is configured to False) - I will keep an eye out for this error once the stacked bar chart changes are merged.

shankari commented 7 months ago

@Abby-Wheelis I agree that we should go with A. We can clarify this in the text to indicate that it is the number of surveys that were displayed instead of the number of questions. Effectively, it's not that conditional questions go unanswered; they are unanswered only because they are not displayed.

Abby-Wheelis commented 7 months ago

Today I merged from upstream, picking up the unified color scheme changes! This is going to apply a little differently to the survey situation, though. We know we don't want the same color to repeat within a single chart, but there are probably too many different possible responses to give every single one a different color, and it would be nice if the questions that DO have the same responses - i.e. likert-scale questions - had the same color mapping.

To achieve this:

My next step is to try merging all of the answers into a single list and mapping colors based on that, to achieve "same answers have the same color" and prevent repeated colors in one chart!
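Roughly what I have in mind (the answer set and palette here are placeholders):

```python
import matplotlib.pyplot as plt

# Collect every distinct answer across all of the survey questions, then give
# each one a fixed color, so the same answer gets the same color in every
# chart and colors never repeat within one chart.
all_answers = sorted({"Strongly disagree", "Disagree", "Neutral", "Agree",
                      "Strongly agree", "Yes", "No"})

palette = plt.cm.tab20.colors  # 20 visually distinct colors
color_map = {answer: palette[i % len(palette)] for i, answer in enumerate(all_answers)}
```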

Abby-Wheelis commented 7 months ago

Merging all of the answers into a single list seems to have worked!

This does not work for questions where users were able to select multiple responses; we'll need a workaround for that next

Abby-Wheelis commented 7 months ago

Initial solution for select multiples: averaging the colors together. I feel this is likely to result in brown every time, so I think further experimentation is needed to see whether this works well in all cases.

image
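For reference, the averaging is just a mean over the RGB values already assigned to the individual options - a sketch with hypothetical colors:

```python
import numpy as np

color_map = {"walk": (0.2, 0.6, 0.2), "bike": (0.1, 0.3, 0.8)}

def blend_colors(options):
    # A combined response like "walk bike" gets the mean of its options' colors.
    rgbs = np.array([color_map[o] for o in options])
    return tuple(rgbs.mean(axis=0))

color_map["walk bike"] = blend_colors(["walk", "bike"])
print(color_map["walk bike"])  # roughly (0.15, 0.45, 0.5)
```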

Abby-Wheelis commented 7 months ago

Adjusted some colors today and tested with the datasets. When running the notebooks at the command line, I encountered an error that I have not been able to resolve:

I have not been able to replicate it when running the notebooks interactively, so I'm not sure what's going on. Wrapping up work for the day, but documenting this here for when I start back tomorrow.

error message below:

```
Running at 2024-04-22T23:00:48.893846+00:00 with params [Parameter('program', str, value='default'), Parameter('study_type', str, value='study'), Parameter('include_test_users', bool, value=False), Parameter('use_imperial', bool, value=True), Parameter('sensed_algo_prefix', str, value='cleaned')]
Traceback (most recent call last):
  File "/usr/src/app/saved-notebooks/bin/generate_plots.py", line 105, in <module>
    compute_for_date(None, None)
  File "/usr/src/app/saved-notebooks/bin/generate_plots.py", line 102, in compute_for_date
    nbclient.execute(new_nb)
  ...
nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
------------------
file_name ='ntrips_under10miles_sensed_mode%s' % file_suffix
try:
    #determine 80th percentile
    cutoff = expanded_ct.distance.quantile(0.8)
    print(cutoff)
    dist_threshold = expanded_ct[distance_col].quantile(0.8).round(1)
    print(dist_threshold)
    dist_threshold = str(dist_threshold)
    plot_title_no_quality="Number of trips under " + dist_threshold + " " + label_units_lower + " for each primary mode"
    plot_title_no_quality=plot_title_no_quality + "\n(inferred by OpenPATH from phone sensors)"
    plot_title_no_quality=plot_title_no_quality + "\n["+dist_threshold + " " + label_units_lower+" represents 80th percentile of trip length]"
    labels_d10 = expanded_ct.loc[(expanded_ct['distance'] <= cutoff)].primary_mode.value_counts(dropna=True).keys().tolist()
    values_d10 = expanded_ct.loc[(expanded_ct['distance'] <= cutoff)].primary_mode.value_counts(dropna=True).tolist()
    d10_quality_text = scaffolding.get_quality_text(expanded_ct, expanded_ct[expanded_ct['distance'] <= cutoff], "< " + dist_threshold + " " + label_units_lower, include_test_users)
    plot_title= plot_title_no_quality+"\n"+d10_quality_text
    pie_chart_sensed_mode(plot_title,labels_d10,values_d10,file_name)
    alt_text = store_alt_text_pie(pd.DataFrame(values_d10, labels_d10), file_name, plot_title)
    print(expanded_ct.loc[(expanded_ct['distance'] <= cutoff)].primary_mode.value_counts(dropna=True))
except:
    d10_df = expanded_ct.query("distance <= " + str(cutoff)) if "distance" in expanded_ct.columns else expanded_ct
    debug_df.loc["Trips_less_than_80th_pct"] = scaffolding.trip_label_count("Mode_confirm", d10_df)
    generate_missing_plot(plot_title_no_quality,debug_df,file_name)
    alt_text = store_alt_text_missing(debug_df, file_name, plot_title_no_quality)
------------------
----- stdout -----
nan
nan
------------------
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[5], line 15
     13 plot_title_no_quality=plot_title_no_quality + "\n["+dist_threshold + " " + label_units_lower+" represents 80th percentile of trip length]"
---> 15 labels_d10 = expanded_ct.loc[(expanded_ct['distance'] <= cutoff)].primary_mode.value_counts(dropna=True).keys().tolist()
     16 values_d10 = expanded_ct.loc[(expanded_ct['distance'] <= cutoff)].primary_mode.value_counts(dropna=True).tolist()
  ... (pandas internals elided)
AttributeError: 'DataFrame' object has no attribute 'primary_mode'

During handling of the above exception, another exception occurred:

KeyError: 'nan'

The above exception was the direct cause of the following exception:

UndefinedVariableError                    Traceback (most recent call last)
Cell In[5], line 23
     21 print(expanded_ct.loc[(expanded_ct['distance'] <= cutoff)].primary_mode.value_counts(dropna=True))
     22 except:
---> 23 d10_df = expanded_ct.query("distance <= " + str(cutoff)) if "distance" in expanded_ct.columns else expanded_ct
  ... (pandas internals elided)
UndefinedVariableError: name 'nan' is not defined
```
Abby-Wheelis commented 7 months ago

Picking the error back up this morning, I was able to replicate in the notebook:

Abby-Wheelis commented 6 months ago

Documenting current to-dos from previous PRs and things we've noticed on staging, to be addressed the week of May 27th:

shankari commented 6 months ago

Couple of others:

Abby-Wheelis commented 6 months ago

for likert questions, use the related text ("neutral" or "strongly agree") instead of "3" or "5"

In order to do this, I will have to find a workaround for "2" and "4": if I just switched to using the text, then it would be "Strongly disagree", "-", "Neutral", "-", "Strongly agree", and we can't tell "agree" from "disagree". I had circumvented this issue by using the value instead of the label, but I can definitely explore other options.
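One option could be a full label mapping instead of the on-screen labels - a hypothetical mapping, not what is currently implemented:

```python
# Translate the raw 1-5 values to full labels for display, so "2" and "4"
# get meaningful names instead of the "-" placeholder used in the survey UI.
LIKERT_LABELS = {
    "1": "Strongly disagree",
    "2": "Disagree",
    "3": "Neutral",
    "4": "Agree",
    "5": "Strongly agree",
}

raw_value = "4"
display_label = LIKERT_LABELS.get(raw_value, raw_value)  # "Agree"
```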

shankari commented 6 months ago
Abby-Wheelis commented 6 months ago

if there are no labeled trips, but there are sensed trips, we get a blank chart because the exception is generated while pre-processing for the first bar (e.g. DDOT before we got the 4 labeled trips)

Mocked this up today by setting expanded_ct to blank, then adding the lambda function for aggregation, etc. to the list of parameters, to be called on the df in the plot function - this seems to work well.

image

Next to carry this change through the rest of the code!
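For reference, roughly the shape of that change (make_bar_chart is a stand-in name for the real plotting helper, and generate_missing_plot is the existing missing-data helper, simplified here):

```python
def plot_or_missing(df, aggregate, plot_title, file_name):
    # The aggregation is passed in as a callable and only applied inside the
    # plot function, so an empty dataframe falls through to the missing-data
    # plot instead of erroring before the chart is drawn.
    try:
        values = aggregate(df)  # e.g. lambda df: df.mode_confirm.value_counts()
        make_bar_chart(values, plot_title, file_name)
    except Exception:
        generate_missing_plot(plot_title, df, file_name)  # real helper takes a debug df

# usage:
# plot_or_missing(expanded_ct, lambda df: df.mode_confirm.value_counts(), title, file_name)
```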