Closed: jchodera closed this issue 8 years ago.
Look here for a first example of what it would be like to use the `autoprotocol-python` API in its most verbose form.
A plate layout is constructed, with some basic additional data attached to each well:

- `volume` - the volume in the well (defined by `set_volume()`)
- `area` - the average area of the well

The contents of each well are noted as properties:

- `concentrations` - a `dict` of concentrations of each component in the well
- `concentration_uncertainties` - a `dict` containing corresponding entries for the uncertainties in each component of the well

Experimental data is attached to each well as properties:

- `fluorescence_[geometry]_ex[excitation_wavelength]nm_em[emission_wavelength]nm` - fluorescence data, with `geometry` one of {top,bottom} and the excitation and emission wavelengths specified in nm
- `absorbance_[wavelength]nm` - the absorbance at the specified wavelength

The set of wells to be analyzed (which may contain multiple replicates of the same experiment or several experiments to be analyzed simultaneously) would be compiled into a `WellGroup` and passed to the new Bayesian model creation class, which would interrogate the wells and create the model appropriately:
```python
from assaytools.analysis import CompetitiveBindingAnalysis
model = CompetitiveBindingAnalysis(wells=well_group, receptor=receptor_name, ligands=[ligand_name])
# fit the maximum a posteriori (MAP) estimate
map = model.map_fit()
# run some MCMC sampling and return the MCMC object
mcmc = model.run_mcmc()
```
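As plain data, one well of the layout described above might look like the following sketch. All values here are illustrative placeholders, not real measurements; the real code would attach these via autoprotocol `Well` properties rather than a bare `dict`:

```python
# Stand-in sketch of one well's attached data (illustrative values only).
well = {
    'volume': '100.0:microliters',   # the well volume, as set by set_volume()
    'area': '31.18:millimeter**2',   # average area of the well (made-up value)
    'properties': {
        # contents of the well
        'concentrations': {'receptor': 1.0e-6, 'gefitinib': 5.0e-7},              # molar
        'concentration_uncertainties': {'receptor': 1.0e-7, 'gefitinib': 5.0e-8},
        # experimental data
        'fluorescence_top_ex280nm_em480nm': 12345.0,  # made-up fluorescence reading
        'absorbance_280nm': 0.05,                     # made-up absorbance
    },
}
print(sorted(well['properties'].keys()))
```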
We would write a convenience class that would automatically format existing datasets into this plate format, attaching the XML plate reader data, such as:

```python
from assaytools.experiments import QuanzolineInhibitorSpectra
container = QuanzolineInhibitorSpectra(receptor='Src', ligands=['gefitinib', 'bosutinib', 'erlotinib', 'bosutinib-isomer'], filename='filename.xml')
well_group = container.all_wells() # or specify only the wells you want to analyze
```
We may want to specify future, more complex experiments entirely in the `autoprotocol` API, since it is very straightforward to specify plate contents this way.
The only technical hurdle to really getting this working is making this code numerically robust enough to work well with concentrations and affinities spanning many orders of magnitude. I've been thinking of how I can do this by representing both concentrations and affinities with logarithms for numerical stability (since both quantities have to be positive). Might still take a few days to work this out.
woohoo!
Any feedback/comment on the example?
Are there any things you would like to be able to do that you can't do now?
There's tons of things I would like to be able to do that we can't do now. I will get back to you over the next few days.
OK, I'll hold off on implementing the `autoprotocol-python` API until the weekend. It would be great if you could give me your wish list before then!
I think I've figured out how to handle general, robust binding model solutions with the new `GeneralBindingModel`. This should allow us to express arbitrary competition experiments or association models involving more complex binding stoichiometries.
The syntax is a bit weird right now, but here's an example of simple 1:1 binding where the log of the dissociation constant Kd is -10 and the initial concentrations of receptor `R` and ligand `L` are both 1 uM:

```python
from assaytools.bindingmodels import GeneralBindingModel

reactions = [ (-10, {'RL': -1, 'R' : +1, 'L' : +1}) ]
conservation_equations = [ (-6, {'RL' : +1, 'R' : +1}), (-6, {'RL' : +1, 'L' : +1}) ]
log_concentrations = GeneralBindingModel.equilibrium_concentrations(reactions, conservation_equations)
```
The syntax to specify the binding reactions is weird, but can be generated automatically based on however many competitive species we have in the assay.
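To make the encoding concrete, here is a hypothetical standalone solver (not the actual `GeneralBindingModel` code) that interprets the same `(log10 K, stoichiometry dict)` tuples and solves for equilibrium concentrations entirely in log space using `scipy.optimize.fsolve`:

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical solver illustrating the (log10 K, stoichiometry dict) encoding;
# NOT the actual GeneralBindingModel implementation.
def equilibrium_log10_concentrations(reactions, conservation_equations, species):
    index = {s: i for i, s in enumerate(species)}
    ln10 = np.log(10.0)

    def residuals(x):  # x holds ln(concentration) for each species
        res = []
        # Mass action becomes LINEAR in log space: sum_i nu_i * ln(c_i) = ln(K)
        for log10_K, stoich in reactions:
            res.append(sum(nu * x[index[s]] for s, nu in stoich.items()) - log10_K * ln10)
        # Conservation of mass: sum_i nu_i * c_i = total quantity
        for log10_total, stoich in conservation_equations:
            total = sum(nu * np.exp(x[index[s]]) for s, nu in stoich.items())
            res.append(np.log(total) - log10_total * ln10)
        return res

    x0 = np.full(len(species), np.log(1.0e-6))  # initial guess: all species near 1 uM
    x = fsolve(residuals, x0)
    return {s: x[index[s]] / ln10 for s in species}  # return log10 concentrations

# The 1:1 example from above: log10 Kd = -10, 1 uM receptor and ligand totals.
reactions = [(-10, {'RL': -1, 'R': +1, 'L': +1})]
conservation_equations = [(-6, {'RL': +1, 'R': +1}), (-6, {'RL': +1, 'L': +1})]
log_c = equilibrium_log10_concentrations(reactions, conservation_equations, ['R', 'L', 'RL'])
# With Kd far below the totals, nearly all receptor ends up bound:
# 10**log_c['RL'] is just under 1e-6 M.
```

Working in log concentrations keeps every species positive by construction, which is exactly what makes this robust across many orders of magnitude of concentrations and affinities.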
Here's a more complex example of simple competitive binding with receptor `R` and ligands `L` and `P`:

```python
from assaytools.bindingmodels import GeneralBindingModel

# R + L <-> RL (log10 Kd = -10) and R + P <-> RP (log10 Kd = -6)
reactions = [ (-10, {'RL': -1, 'R' : +1, 'L' : +1}), (-6, {'RP': -1, 'R' : +1, 'P' : +1}) ]
# conservation of receptor (both complexes count toward total R), ligand L, and ligand P
conservation_equations = [ (-6, {'RL' : +1, 'RP' : +1, 'R' : +1}), (-6, {'RL' : +1, 'L' : +1}), (-5, {'RP' : +1, 'P' : +1}) ]
log_concentrations = GeneralBindingModel.equilibrium_concentrations(reactions, conservation_equations)
```
The simple `test_bindingmodel.py` script was just some manual experiments and not real tests, but it indicates this strategy seems to work well from a numerical standpoint.
I'll add tests soon, but this clears the major hurdle to implementing a much more general model construction scheme that can deal with both standard and competition assays, as well as more complex stuff in the future (like helping design competition assays before we start them).
I'll hold off on actually implementing a more general strategy until at least the weekend to give @sonyahanson and others time to comment on what you would want from a more general API that can work for competition and standard experiments.
I'll add a description of how the solver works to the `theory.rst` docs at some point, but I wanted to drop in my whiteboard notes for now.
So, there are a lot of things here I don't particularly get right off the bat, but here are a few thoughts for now:
`autoprotocol` here is for sure the way to go for future extensibility and to avoid doing more work than we have to.

What I think we need to think about is how to work with the data we have: singlet data, competition assay data, spectra data, and DLS data. With the exception of the DLS data, these are all in `.xml` file format and have all the metadata we could want. Each `.xml` file is related to (minimally) a plate, a protein, and multiple ligands (other things we should maybe already be including, because we may vary them in the future: pH, temperature, time, mixing). With that information and some assumptions about fluorescence, we can perform Bayesian analysis that gives us an output of uncertainties in ligand concentration, protein concentration, extinction coefficient of the ligand, and the free energy of binding of the ligand to that protein in those conditions.
With this in mind, some questions I have:

- Do we want to have an intermediate data storage item besides the raw data `.xml` file, that also has our experimental inputs?
- Do we want another object that stores the final results of the Bayesian analysis?

> With the exception of the DLS data these are all in .xml file format that have all the metadata we could want

Which XML file type are you talking about, and can you point me to an example file in the repo?
To analyze a competition experiment, we need the following pieces of information (also described in #43):
I wasn't aware how much additional data beyond the absorbance and fluorescence readings the Infinite XML file contains. Are you really storing all of the compound stock and protein info in there too?
Another thing is that I think we don't want to bite off more than we can chew at the moment, since we are looking to get a paper out on this relatively soon. I think having a more robust general framework than we already have is important, but I think we should keep it minimal and think about the types of data we'll need to store. I think separating this out into three stages is a useful way to think about this: experimental setup -> raw data -> analysis. I think the experimental setup portion is the lowest priority right now (mostly because it is the hardest, and least scientifically interesting), meaning that maybe we can keep some things hardcoded for now in an external file or something, with the vision that in the future they will be more programmatic, and I think that integrating with autoprotocol here is for sure the way to go for future extensibility and to avoid doing more work than we have to.
Just to be super clear here: I am not talking about using `autoprotocol` for experimental setup at all. It's just a way to attach some quantities and properties we need for the analysis of competition experiments to a `WellSet` to allow a Bayesian model to be constructed. The alternative to this is to just extend what we already have by adding many more arguments to this function, which already has a docstring 50 lines long documenting the current set of arguments.
I just don't think we will be well served by continuing to try to prop up this way of doing things, since it's just going to fall over when we try to add a bunch of stuff to analyze the competition experiments. Once we collect an example of all the data we need to analyze a competition experiment in one place, it will take a day or two at most to get this working.
> Are you really storing all of the compound stock and protein info in there too?
No. I meant all the data from the Infinite read, like wavelength, type of read, gain, bandwidth, temperature.
I had assumed you had already looked at these XML files; we have tons of them in this repo. Here is the latest one from the experiment you did with Lucelenie: https://github.com/choderalab/assaytools/pull/42
> No. I meant all the data file from the infinite read like, wavelength, type of read, gain, bandwidth, temperature.
Ah, OK! We need more information than that, though.
Maybe we can briefly chat and see how we can collect a complete set of the necessary data in a directory? I can then implement the new API and demonstrate how it would work!
> I just don't think we will be well served by continuing to try to prop up this way of doing things, since it's just going to fall over when we try to add a bunch of stuff to analyze the competition experiments. Once we collect an example of all the data we need to analyze a competition experiment in one place, it will take a day or two at most to get this working.
I am agreeing with you here: the current way things are organized is horrid. I just want to make sure we don't get sidetracked with shiny new things.
> I am agreeing with you here the current way things are organized is horrid, but just want to make sure we don't get sidetracked with shiny new things.
I've thought about this a lot, and I think this is just the easiest way to move forward. We're just using the `autoprotocol` classes as container classes instead of rolling our own.

We could alternatively roll our own, or use the container class from `klaatu`, but either of these would require a lot more testing (or a lot more work on `klaatu`) to get running.
I am 100% in favor of not getting sidetracked here. I think we just need to collect up the pieces of information above for one example and I can dive in and finish this.
Again, I am not disagreeing with you; I think this is the right way to go. Rather than discussing something we both agree on, I am interested in your response to these questions:
- Do we want to have an intermediate data storage item besides the raw data .xml file, that also has our experimental inputs?
- Do we want another object that stores the final results of the Bayesian analysis? how should this be connected to our experimental inputs and whatever parameters/models we used to get these results?
> Do we want to have an intermediate data storage item besides the raw data .xml file, that also has our experimental inputs?
What specifically do you mean by "experimental inputs"?
My thinking was to use:

- a `dict` for the compound stock information, which can come from a CSV file (exported from a Google spreadsheet) or, in the future, JSON from a database
- a `dict` for the error assumptions (which might come from a YAML file)

I don't think we need to store these bundled into an intermediate format, though if we would like to, we can just pickle the objects to disk for now.
> Do we want another object that stores the final results of the Bayesian analysis? how should this be connected to our experimental inputs and whatever parameters/models we used to get these results?
`pymc` generates traces that can be stored to disk in a variety of formats (text, SQLite, or using @ChayaSt's NetCDF form). How about we just store the results in SQLite form for now?
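pymc's own `sqlite` database backend handles this directly, but as a stdlib-only sketch of the idea, a sampled trace can be stored and queried like this (table layout and sample values are illustrative):

```python
import sqlite3

# Illustrative trace of a sampled quantity (e.g. a binding free energy).
trace = [-9.8, -10.1, -10.0]

# Store each named trace as (name, iteration, value) rows.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE traces (name TEXT, iteration INTEGER, value REAL)')
conn.executemany('INSERT INTO traces VALUES (?, ?, ?)',
                 [('DeltaG', i, v) for i, v in enumerate(trace)])

# Retrieve the trace in sampling order.
rows = conn.execute(
    "SELECT value FROM traces WHERE name='DeltaG' ORDER BY iteration").fetchall()
print([r[0] for r in rows])  # [-9.8, -10.1, -10.0]
```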
A good way to organize data for the paper would be to:

- `pymc` data

Sounds good!
Great! Can you ping me once you've got the necessary data collected in one place? I would minimally need the following info:
I can generate the error assumption YAML file.
@Lucelenie has now created a compound stocks spreadsheet!
@sonyahanson: I've updated the `README.md` with a description of the current API. Can you take a quick look and see if this would work for you before I finish this off?
I should note that none of these API choices need to be final (we can continue to refine them). I just wanted to make sure I didn't forget something critical in the short term that would prevent this from being immediately useful!
I realized I already forgot something important: the fluorescence gain! @sonyahanson @Lucelenie @MehtapIsik: I see the gain is reported in the `<Parameters>` block of the Infinite XML file, like this:

```xml
<Section Name="em280" Time_Start="2016-03-11T20:23:40.876905Z" Time_End="2016-03-11T20:29:30.426719Z">
  <Warnings />
  <Errors />
  <Parameters>
    <Parameter Name="Mode" Value="Fluorescence Top Reading" />
    <Parameter Name="Emission Wavelength Start" Value="280" Unit="nm" />
    <Parameter Name="Emission Wavelength End" Value="600" Unit="nm" />
    <Parameter Name="Emission Wavelength Step Size" Value="5" Unit="nm" />
    <Parameter Name="Emission Scan Number" Value="65" />
    <Parameter Name="Excitation Wavelength" Value="280" Unit="nm" />
    <Parameter Name="Bandwidth (Em)" Value="280...850: 20 nm" />
    <Parameter Name="Bandwidth (Ex) (Range 1)" Value="230...300: 10 nm" />
    <Parameter Name="Bandwidth (Ex) (Range 2)" Value="301...850: 20 nm" />
    <Parameter Name="Gain" Value="100" Unit="Manual" />
    <Parameter Name="Number of Flashes" Value="50" />
    <Parameter Name="Flash Frequency" Value="400" Unit="Hz" />
    <Parameter Name="Integration Time" Value="20" Unit="µs" />
    <Parameter Name="Lag Time" Value="0" Unit="µs" />
    <Parameter Name="Settle Time" Value="0" Unit="ms" />
    <Parameter Name="Z-Position (Manual)" Value="20000" Unit="µm" />
    <Parameter Name="Part of Plate" Value="G1-H12" />
  </Parameters>
</Section>
```
Do you also name the sections of the file differently depending on the gain? What naming convention do you use? And how is this represented in the `data` block after calling `data = platereader.read_icontrol_xml(filename)`?
I wonder if it might be better to not fuss with section names and instead have the XML file pull all the relevant info directly from the well readings, which look like this:
```xml
<MeasurementFluoInt readingMode="Top" id="18" mode="Normal" type="" name="FluoInt" longname="" description="">
  <Well id="19" auto="true">
    <MeasurementReading id="20" name="" beamDiameter="0" beamGridType="Single" beamGridSize="0" beamEdgeDistance="">
      <ReadingLabel id="21" name="em280_Copy2" scanType="ScanEM" refID="0">
        <ReadingSettings number="50" rate="2500" />
        <ReadingGain type="" gain="120" optimalGainPercentage="0" automaticGain="False" mode="Manual" />
        <ReadingTime integrationTime="20" lagTime="0" readDelay="0" flash="0" dark="0" excitationTime="0" />
        <ReadingFilter id="22" type="Ex" wavelength="2800" bandwidth="100" attenuation="0" usage="FI" />
        <ReadingFilter id="23" type="Em" wavelength="2800~6000:50" bandwidth="200" attenuation="0" usage="FI" />
        <ReadingZPosition mode="Manual" zPosition="20000" />
      </ReadingLabel>
    </MeasurementReading>
  </Well>
</MeasurementFluoInt>
```
@sonyahanson : This might require we coordinate with your work on #48.
Thoughts anyone? I think how to pull this information (and represent it internally) is the last remaining hurdle to finishing this up.
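A minimal sketch of that alternative, using Python's standard-library `ElementTree` to pull the gain and excitation wavelength directly from a `ReadingLabel` block like the one above. Element and attribute names are taken from the snippet; the guess that wavelengths are stored in 0.1 nm units (so 2800 means 280 nm) is my assumption:

```python
import xml.etree.ElementTree as ET

# Structure copied from the <MeasurementFluoInt> snippet above (trimmed).
xml_text = """
<MeasurementFluoInt readingMode="Top" id="18" mode="Normal" name="FluoInt">
  <Well id="19" auto="true">
    <MeasurementReading id="20" name="">
      <ReadingLabel id="21" name="em280_Copy2" scanType="ScanEM" refID="0">
        <ReadingGain type="" gain="120" optimalGainPercentage="0" automaticGain="False" mode="Manual" />
        <ReadingFilter id="22" type="Ex" wavelength="2800" bandwidth="100" attenuation="0" usage="FI" />
        <ReadingFilter id="23" type="Em" wavelength="2800~6000:50" bandwidth="200" attenuation="0" usage="FI" />
      </ReadingLabel>
    </MeasurementReading>
  </Well>
</MeasurementFluoInt>
"""

root = ET.fromstring(xml_text)

# Pull the gain straight from the per-well reading, ignoring section names.
gain = int(root.find('.//ReadingGain').attrib['gain'])

# Excitation wavelength appears to be stored in 0.1 nm units (2800 -> 280 nm);
# that unit convention is an assumption, not documented here.
ex_filter = [f for f in root.iter('ReadingFilter') if f.attrib['type'] == 'Ex'][0]
excitation_nm = int(ex_filter.attrib['wavelength']) / 10.0

print(gain, excitation_nm)  # 120 280.0
```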
@sonyahanson : I've whittled the example we discussed today for a four-compound non-competition assay plate into the kind of API I was thinking of that makes use of helper functions to create the data structures. How does this look?
```python
from assaytools.experiments import DMSOStockSolutions, BufferSolution, ProteinSolution, AssayPlate
from assaytools.analysis import CompetitiveBindingAnalysis

# Read DMSO stock solutions from inventory CSV file
dmso_stocks_csv_filename = 'DMSOstocks-Sheet1.csv'
solutions = DMSOStockSolutions(dmso_stocks_csv_filename)
ligand_solutions = (solutions['BOS001'], solutions['BSI001'], solutions['GEF001'], solutions['GEF001'])

# Define receptor and ligand species names.
receptor_species = 'Abl(D382N)'
ligand_species = [ solution.species for solution in ligand_solutions ]

# Add buffer and protein stock solutions
solutions['buffer'] = BufferSolution(name='20 mM Tris buffer')
solutions[receptor_species] = ProteinSolution(name='1 uM Abl D382N', species=receptor_species, buffer=solutions['buffer'], absorbance=4.24, extinction_coefficient=49850, molecular_weight=41293.2, ul_protein_stock=165.8, ml_buffer=14.0)

# Populate the Container data structure with well contents and measurements
d300_xml_filename = 'LRL_Src_Bos_2rows_1_2 2015-09-11 1048.DATA.xml'
infinite_xml_filename = 'Abl_D382N_Bos_20160311_132205.xml'
plate = AssayPlate(protein_solution=solutions[receptor_species], buffer_solution=solutions['buffer'], ligand_solutions=ligand_solutions,
                   d300_xml_filename=d300_xml_filename, infinite_xml_filename=infinite_xml_filename)

# Create a model
experiment = CompetitiveBindingAnalysis(solutions=solutions, wells=plate.all_wells(), receptor_species=receptor_species, ligand_species=ligand_species)
# Fit the maximum a posteriori (MAP) estimate
map_fit = experiment.map_fit()
# Run some MCMC sampling and return the MCMC object
mcmc = experiment.run_mcmc()
# Show summary
experiment.show_summary(mcmc, map_fit)
# Generate plots
plots_filename = 'plots.pdf'
experiment.generate_plots(mcmc, pdf_filename=plots_filename)
```
Is this different than before?
Yes. I showed you an example of ~283 lines yesterday. I've trimmed that to ~40 by encapsulating all the helper functions in `assaytools.experiments`.
Model is fully implemented. Just debugging and speeding things up now.
It's running! Just a bit slow now.
Remaining TODOs are all minor tweaks:

- Generalize `platereader.read_icontrol_xml()` so we can handle singlet and spectral assays without relying on `Section` names
- Add `CompetitionAssay` and `EmissionSpectraAssay` helper classes (though all internals are already implemented)

FYI, here's what the current user-facing API is. Very simple!
"""
Analyze Abl:Gefinitinib singlet (single fluorescent inhibitor) assay.
"""
from autoprotocol.unit import Unit
from assaytools.experiments import SingletAssay
#
# This information is different for each experiment.
# We use a 'dict' so that we can later store this information in a JSON database or something.
#
params = {
'd300_xml_filename' : 'Src_Bos_Ima_96well_Mar2015 2015-03-07 1736.DATA.xml', # HP D300 dispense simulated DATA file
'infinite_xml_filename' : 'Abl Gef gain 120 bw1020 2016-01-19 15-59-53_plate_1.xml', # Tecan Infinite plate reader output data
'dmso_stocks_csv_filename' : 'DMSOstocks-Sheet1.csv', # CSV file of DMSO stock inventory
'hpd300_fluids' : ['GEF001', 'IMA001', 'DMSO'], # uuid of DMSO stocks from dmso_stocks_csv_filename (or 'DMSO' for pure DMSO) used to define HP D300 XML <Fluids> block
'receptor_species' : 'Abl(D382N)', # receptor name (just used for convenience)
'protein_absorbance' : 4.24, # absorbance reading of concentrated protein stock before dilution
'protein_extinction_coefficient' : Unit(49850, '1/(moles/liter)/centimeter'), # 1/M/cm extinction coefficient for protein
'protein_molecular_weight' : Unit(41293.2, 'daltons'), # g/mol protein molecular weight
'protein_stock_volume' : Unit(165.8, 'microliters'), # uL protein stock solution used to make 1 uM protein stock
'buffer_volume' : Unit(14.0, 'milliliters'), # mL buffer used to make 1 uM protein stock
'rows_to_analyze' : ['A', 'B'], # rows to analyze
'assay_volume' : Unit(100.0, 'microliters'), # quantity of protein or buffer dispensed into plate
'measurements_to_analyze' : ['fluorescence top'], # which measurements to analyze (if specified -- this is optional)
'wavelengths_to_analyze' : ['280:nanometers', '480:nanometers'], # which wavelengths to analyze (if specified -- this is optional)
}
# Create a single-point (singlet) assay.
assay = SingletAssay(**params)
# Fit the maximum a posteriori (MAP) estimate
map_fit = assay.experiment.map_fit()
# Run some MCMC sampling and return the MCMC object
mcmc = assay.experiment.run_mcmc()
# Show summary
assay.experiment.show_summary(mcmc, map_fit)
# Generate plots
plots_filename = 'plots.pdf'
assay.experiment.generate_plots(mcmc, pdf_filename=plots_filename)
All you have to do to analyze different experiments is swap out parts of that `params` dict.
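One hedged sketch of the "JSON database" idea mentioned in the comments above: encode each quantity as a `'value:unit'` string (the convention `wavelengths_to_analyze` already uses), which makes the whole `params` dict JSON round-trippable. Keys and values below are illustrative, not the actual assaytools serialization:

```python
import json

# Illustrative params dict with quantities as "value:unit" strings.
params = {
    'receptor_species': 'Abl(D382N)',
    'protein_absorbance': 4.24,
    'assay_volume': '100.0:microliters',
    'rows_to_analyze': ['A', 'B'],
}

def to_quantity(s):
    """Split a 'value:unit' string back into (float, unit)."""
    value, unit = s.split(':')
    return float(value), unit

# Round-trip through JSON and recover the quantity.
restored = json.loads(json.dumps(params))
value, unit = to_quantity(restored['assay_volume'])
print(value, unit)  # 100.0 microliters
```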
Profiling data indicates the use of units in `Deterministic` objects is slowing things down:

```
Tue May  3 12:40:41 2016    profile.out

1327693688 function calls (1306155821 primitive calls) in 1164.010 seconds

Ordered by: internal time
List reduced from 6018 to 10 due to restriction <10>

   ncalls              tottime  percall  cumtime   percall  filename:lineno(function)
 46175450              323.549  0.000    323.549   0.000    {_locale.setlocale}
204857235/204857233    45.687   0.000    72.609    0.000    {isinstance}
 15391815              35.296   0.000    100.248   0.000    formatting.py:180(format_unit)
  7254367              34.725   0.000    490.153   0.000    quantity.py:660(_mul_div)
   290756              34.581   0.000    1109.846  0.004    analysis.py:640(fluorescence_model)
 15391816              33.039   0.000    121.514   0.000    unit.py:71(__new__)
 18026054              31.858   0.000    50.334    0.000    util.py:251(__init__)
 15682572              31.515   0.000    79.995    0.000    quantity.py:61(__new__)
 15391815              25.545   0.000    585.426   0.000    unit.py:90(__init__)
  6963562              22.150   0.000    59.065    0.000    util.py:333(__mul__)
```
I will remove unit conversions within the `Deterministic`s to speed this up.
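A sketch of the intended fix, using a toy stand-in for a units-bearing quantity (not pint or assaytools code): convert to base-unit floats once, outside the per-sample hot path, so the function evaluated at every MCMC step touches only bare floats:

```python
# Toy stand-in for a units-bearing quantity (NOT pint or assaytools code).
class Quantity:
    def __init__(self, magnitude, unit):
        self.magnitude, self.unit = magnitude, unit
    def to_molar(self):
        # Unit conversion: the expensive step we want out of the hot path.
        scale = {'molar': 1.0, 'micromolar': 1.0e-6}[self.unit]
        return self.magnitude * scale

concentration = Quantity(1.0, 'micromolar')

# Slow pattern: the conversion runs inside the function the sampler calls at
# every step (analogous to converting units inside a Deterministic).
def fluorescence_slow(q):
    return 1.0e6 * q.to_molar()

# Fast pattern: convert once up front and close over the bare float.
c_molar = concentration.to_molar()
def fluorescence_fast(c=c_molar):
    return 1.0e6 * c

# Both give the same answer; only the per-call cost differs.
```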
Cool!
Still working on speeding things up to a tolerable speed. Currently focusing on improving initial guesses so MAP estimation is not so horribly slow.
I've managed to take it from "way too slow" to "just a bit slow", but there is still room to improve the initial guesses to speed things up further.
@sonyahanson: I'm working with the Abl:Gefitinib/Imatinib singlets competition assay right now.
From your `README.md`, I think I want to work with these files:

- `Abl Gef Ima gain 120 bw1020 2016-01-19 16-22-45_plate_1.xml` - Abl Gefitinib Imatinib Singlets (96-well)
- `Abl Gef gain 120 bw1020 2016-01-19 15-59-53_plate_1.xml` - Abl Gefitinib Singlets (96-well)
- `Src_Bos_Ima_96well_Mar2015 2015-03-07 1736.DATA.xml` - D300 script used to dispense Gefitinib and Imatinib for the Singlet Fluorescent and Singlet Imatinib Experiment

Do you have a pointer to a protocol that illustrates the plate layout for this experiment? That D300 script appears to generate two plates, `Ima Plate` and `Control Plate`. The `Control Plate` is just the compound and protein/buffer, while the `Ima Plate` is the competition assay. Is that right?
P.S. By "right now" I mean "tomorrow". :)
Just a quick update: In the final stretches of debugging. Just fixing a few last issues. Here's an example fit for Abl:Gefitinib. Some details of the model are still not working properly.
I am trying to follow an example of competition assay analysis to get familiar with the latest state we are at: .../data/full_example/Exploratory data analysis.ipynb. @jchodera, is this the analysis you have been working on and mentioned in this issue?
See the `examples/autoprotocol/` directory for the latest examples. In particular, there is an example of the competitive binding model you can start from.
Actually, I just realized that is not the most up to date example. I don't seem to have an up-to-date competitive data analysis example yet. Let me add that now.
@jchodera If you have an updated example that would be great; @MehtapIsik and I are trying to work our way through this and are both getting different errors when trying to run the 'Exploratory data analysis' ipynb: `IOError: [Errno 2] No such file or directory: 'output.pickle'` and `Neither Quantity object nor its magnitude has attribute m` (somewhat paraphrased).
I don't know if that exploratory data analysis is something that is supposed to work.
Can you try running the `example.py` in `data/full_example/` and see if that runs for you?
On further study, it looks like I've only implemented the `SingletAssay` experiment setup helper. Can you remind me which competition assay format you're using? I'll make sure that helper gets implemented ASAP.
My experimental data is a `SingletAssay`, but I was actually trying to run your previous examples with the data they were written with, before trying with my data.
I couldn't make the following work:

- `data/full_example/example.py`
- `data/full_example/Exploratory data analysis.ipynb` (I get an error about LogNormalWrapper from the `assay = SingletAssay(**params)` line)
- `examples/autoprotocol/example.py`

The last one works until creation of `well_group` (I fixed an indexing error on my branch), but the `CompetitiveBindingAnalysis` class gives an error. It looks to me like the arguments of this class may have changed over time.
> Last one works until creation of well_group (I fixed an indexing error on my branch), but the CompetitiveBindingAnalysis class gives an error. It looks like arguments of this class may have changed over time to me.
Yeah, I realized that example is out of date and needs to be updated. I haven't created a `CompetitionAssay` helper class that goes along with `SingletAssay` yet.
I need to know the format of your competition assay on the plate. Do you have an example dataset you can post, along with a description of what is in each of the wells?
I don't know what exactly you mean by "format of competition assay". It was a SingletAssay competition assay performed in a 96-well plate. Do you mean the sample layout on the plate?
I am trying to put together all experiment-related data, similar to Sonya's full_example. Will that be useful? It will take me a while to bring everything to the same format.
The error message I am getting from `data/full_example/example.py`:

```
lski1946:full_example isikm$ python example.py
There are 24 wells to analyze in the provided WellGroup
Solutions in use: set(['buffer', 'protein', 'DMSO', 'GEF001'])
Traceback (most recent call last):
  File "example.py", line 32, in <module>
    assay = SingletAssay(**params)
  File "/Users/isikm/opt/anaconda/lib/python2.7/site-packages/assaytools-0.1.0-py2.7.egg/assaytools/experiments.py", line 477, in __init__
    self.experiment = CompetitiveBindingAnalysis(solutions=solutions, wells=well_group, receptor_name=receptor_species)
  File "/Users/isikm/opt/anaconda/lib/python2.7/site-packages/assaytools-0.1.0-py2.7.egg/assaytools/analysis.py", line 113, in __init__
    self._create_solutions_model()
  File "/Users/isikm/opt/anaconda/lib/python2.7/site-packages/assaytools-0.1.0-py2.7.egg/assaytools/analysis.py", line 153, in _create_solutions_model
    self.model[name] = LogNormalWrapper(name, mean=solution.concentration.to_base_units().m, stddev=solution.uncertainty.to_base_units().m)
  File "/Users/isikm/opt/anaconda/lib/python2.7/site-packages/pint/quantity.py", line 1003, in __getattr__
    "has attribute '{1}'".format(self._magnitude, item))
AttributeError: Neither Quantity object nor its magnitude (0.00100729617424) has attribute 'm'
```
Can you try this?

```
>>> import pint
>>> pint.__version__
'0.7.2'
```
Also, how did you install `assaytools` on your system? Did you remember to uninstall your previous version before installing from source?
I tried uninstalling the previous assaytools and installing from source again, but it doesn't solve this error.
This is odd. I updated pint to 0.7.2 with conda, but I still get version 0.6 in the Python environment:

```
isikm$ conda list | grep pint
pint                      0.7.2                    py27_0    omnia
>>> import pint
>>> pint.__version__
'0.6'
```

Do you have any idea what is causing this?
I solved my pint problem. Somehow there were two pint versions installed at the same time, one through conda (pint 0.7.2) and one through pip (pint 0.6), and Python was using the old version. Now I can run `data/full_example/example.py` without a problem.
Great! I was worried that dueling pip and conda versions may have been the problem.
How merge-able is this? Any idea?
This is just a start at using the `autoprotocol-python` API.