hi @albert-kevin, what error are you seeing when specifying the port and public_ip in ErrorAnalysisDashboard? The parameters port and public_ip are defined on the ErrorAnalysisDashboard init function: https://github.com/microsoft/responsible-ai-widgets/blob/main/raiwidgets/raiwidgets/error_analysis_dashboard.py#L48 So the above should just work; I'm not sure why it is failing for you.
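If it helps, you can double-check that the raiwidgets version you have installed actually exposes those parameters with a quick sketch like this (just a debugging aid, not a documented step):
import inspect
from raiwidgets import ErrorAnalysisDashboard

# Print the init signature of the installed version; port and public_ip should appear in the list.
print(inspect.signature(ErrorAnalysisDashboard.__init__))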
@albert-kevin I tried your notebook; I think you just need to open a port in your VM's networking properties, based on the error message:
RuntimeError: Port 5000 is not available. Please specify another port for use via the 'port' parameter
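Side note: if the message instead means the local port is already in use, one way to pick a free one before constructing the dashboard is a sketch like this, using only the standard library (not a raiwidgets feature):
import socket

# Ask the OS for an unused port by binding to port 0, then pass the number via the 'port' parameter.
# Note there is a small race: the port could be taken again between this check and starting the dashboard.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.bind(("", 0))
    free_port = sock.getsockname()[1]
print("free port to use:", free_port)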
Hi, thank you for your reply, here is a screenshot of the open ports for this VM
I also don't understand why it is blank if I use it without the port and IP.
ok, we have a little bit of progress: it is able to host, but I am not yet able to browse to the webpage. Normally it should now be possible to connect to the public IP of this VM: http://40.113.65.18:5700
ErrorAnalysisDashboard(global_explanation, dashboard_pipeline,
                       dataset=X_test_original_full,
                       true_y=y_test,
                       categorical_features=categorical_features,
                       true_y_dataset=y_test_full,
                       port=5700,
                       public_ip="0.0.0.0")
hmm, that is strange... I'm trying to set up my own DSVM right now to have a full repro. I would recommend removing yours since I'm worried anyone could access it. If you need to send credentials, just send them to my email directly (ilmat at microsoft dot com)
hi, how did it go with your DSVM? Should I destroy my VM and test again later, or is there any other way I can lend a hand with testing? :-)
@albert-kevin yes please remove the VM. I am currently testing on my own DSVM. It's not going great, honestly. Basically any POST calls are failing, so even the ExplanationDashboard's WhatIf and ICE plots feature isn't working (not just ErrorAnalysis, which relies on POST calls much more since it's more dynamic). POST calls were working several months ago on DSVMs, so I'm not sure what changed, or perhaps I haven't set up the configuration correctly somewhere on the new DSVM. There were a few obvious issues that I fixed before that (like the socket error I was seeing at first). I'm still investigating the latest one, where I am hitting some connection error (maybe due to VM middleware that I don't have control over), but it's quite slow.
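For anyone debugging something similar, a rough way to check whether POSTs reach the Flask server at all, independent of the widget UI, is a sketch like this (requests is not part of raiwidgets; the IP/port are the ones from this thread and the empty JSON body is just a placeholder):
import requests

# Probe whether POST requests reach the dashboard's Flask server on the VM.
# Any HTTP status code back means the network path works; a connection error means it does not.
url = "http://40.113.65.18:5700/"
try:
    response = requests.post(url, json={}, timeout=10)
    print("reached server, status:", response.status_code)
except requests.exceptions.RequestException as exc:
    print("POST failed:", exc)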
ah interesting, thank you for sharing; failures are good, room for improvement :-) May I share some thoughts about the ExplanationDashboard? Until now I have only used it with the Microsoft azureml-interpret SDK wrapper:
from interpret_community.widget import ExplanationDashboard
conda : 4.9.2
pip : 21.0.1
python : 3.8.8
pandas : 1.2.3
numpy : 1.20.1
sklearn : 0.24.1
lime : 0.2.0.1
lightgbm: 3.1.1
interpretML : 0.2.1
azureml-core : 1.23.0
azureml-interpret : 1.23.0
interpret-community: 0.16.0
raiwidgets : 0.2.2
# then I set up the connection details
server = "40.113.65.18"  # virtual machine IP
port = 5000              # blackbox dashboard port

# I build my model and pipeline and create an explainer
# dataset must have fewer than 10000 rows and 1000 columns
# global_mimic_explanation can be replaced with downloaded_global_mimic_explanation, downloaded_global_tabular_explanation
ExplanationDashboard(explanation=global_mimic_explanation,
                     model=model,
                     datasetX=x_test,
                     true_y=y_test,
                     port=port,
                     public_ip=server)
I have made a notebook to help myself understand how this all works; do you see the same failures? At the very bottom you should be able to click the link to the explainability dashboard... My goal is to expand this notebook with the error analysis tools in the future.
I will send you the notebook by email
I had tested the notebook successfully on March 5th; today it fails. Unfortunately I have no detailed version list, and I am unable to reinstall the environment, something is strange. I will try a brand-new VM tomorrow.
@albert-kevin I was able to get it to work when opening the widget in a new tab, see PR here:
https://github.com/microsoft/responsible-ai-widgets/pull/377
this was a very tricky issue. Within the notebook I'm currently getting a "mixed-content" error on requests, and to get it to work there I would need to change this to HTTPS:
https://github.com/microsoft/responsible-ai-widgets/blob/main/wrapped-flask/rai_core_flask/environments/public_vm_environment.py#L30
However, then I need to figure out how to set up WSGIServer with TLS/SSL; it can be done like this:
import wsgiserver  # the WSGIserver package from PyPI

server = wsgiserver.WSGIServer(my_app, certfile='cert.pem', keyfile='privkey.pem')
according to:
https://pypi.org/project/WSGIserver/
we call WSGIServer here:
https://github.com/microsoft/responsible-ai-widgets/blob/main/wrapped-flask/rai_core_flask/flask_helper.py#L94
however, having the user set up these certs on a DSVM (or any public VM) seems like too much to me, and I'm not sure if it is already provided somehow (I mean, the notebook is somehow served under an https url, but I have no idea where those cert/key files are coming from).
also, I should note that the old interpret-community dashboard wasn't working for me either yesterday, but somehow today it magically started working. I think that when I configured my DSVM to open my ports, the settings didn't apply right away; I needed to wait some time before the configuration took effect.
note that I need to open it in a new tab for it to work on the DSVM in that PR (https://github.com/microsoft/responsible-ai-widgets/pull/377):
within the notebook itself it's still not working (I get a "mixed content" error):
oh yes, it seems mixed content means a mix of http and https content on the same site.
You either fix the web content, or you allow mixed content if you trust the source.
There is a test weblink for this: if you use the Chrome browser, you open the site settings for that page, set insecure content to "allow", then reload the site and you will see:
Waiting for insecure script to run... INSECURE SCRIPT LOADED AND RUN!!!
This insight comes from: https://www.howtogeek.com/443032/what-is-mixed-content-and-why-is-chrome-blocking-it/
but indeed, I do not know how to run the ErrorAnalysis dashboard with HTTPS instead of HTTP.
now, about opening ports: I open the ports after I create the VM, and at least 15-20 minutes pass before I use the VM.
This delay is remarkable and could perhaps be tested with a PuTTY CLI command in the future.
I don't think the delay is due to Azure; the Ubuntu OS network driver could need a bit more time to adapt.
We could perhaps scan for open ports and only list port 8082 with this command:
sudo netstat -tnlp | grep :8082
This netstat tool is installed with:
sudo apt install net-tools
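A pure-Python alternative, in case net-tools is not installed (just a sketch; it only checks whether something accepts connections on that port locally, not which process owns it):
import socket

# Try a local TCP connection to port 8082; connect_ex returns 0 when something is listening.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(2)
    listening = sock.connect_ex(("127.0.0.1", 8082)) == 0
print("port 8082 is", "open" if listening else "closed")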
I don't know, but would you need to change http into https on line 30 here? https://github.com/microsoft/responsible-ai-widgets/blob/main/wrapped-flask/rai_core_flask/environments/public_vm_environment.py#L30
About that WSGIServer with TLS/SSL issue: could you look at the codebase behind from interpret_community.widget import ExplanationDashboard and take inspiration from how they did it there, for use in your from raiwidgets import ExplanationDashboard and ErrorAnalysisDashboard?
Maybe you can get it working using this: https://blog.miguelgrinberg.com/post/running-your-flask-application-over-https
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run(ssl_context='adhoc')
The idea is to use on-the-fly certificates, which are useful to quickly serve an application over HTTPS without having to mess with certificates. All you need to do is add ssl_context='adhoc' to your app.run() call.
You had an amazing find there, fantastic, not easy
I created a new VM, and I confirm the notebook I sent by mail works fine. I also tried the breast_cancer example notebook from this git repo, and it is able to load the interpretability visualization on port 5000 within the notebook properly. I then reset and cleared the notebook kernel, reran everything (skipping interpret), and loaded the Error Analysis dashboard, and I get the same as you: a blank page. I tried allowing all Chrome settings for this webpage; I can click on the Explanation button at the top right and see visualizations, but not the Tree.
ssl_context='adhoc' doesn't work with WSGIServer: although the parameter exists, I can only pass an SSLContext object. I tried that, and changed the base URL to use HTTPS for the public server, but then I got "no shared cipher" errors. Maybe there is some way to get this path working with dummy SSL auth; the alternative is to have the user set up an actual cert/key on their public VM and pass those as parameters, but I'm not sure that can be done automatically in a safe way for the user.
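For context, building an SSLContext itself is straightforward with the standard library (a sketch; the cert/key paths are placeholders and nothing here is wired into rai_core_flask):
import ssl

# Server-side TLS context loading an existing certificate/private key pair.
# Whether the WSGIServer we use accepts this context directly is exactly the open question above.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="cert.pem", keyfile="privkey.pem")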
@albert-kevin I just released a new rai-core-flask package; you can pip install it to get the fixes from this PR: https://github.com/microsoft/responsible-ai-widgets/pull/377 (https://pypi.org/project/rai-core-flask/#history). Just run pip install --upgrade rai-core-flask. I am working on releasing a new raiwidgets that pins to that version, but as long as you only install that package and do not reinstall raiwidgets (which would downgrade to the older pinned package), it should work. Then you can view error analysis in a separate tab on the VM (but you will still get the mixed content error within the notebook).
ok, thank you. I have done the following, and the results suggest that everything works.
It still works whether I have that setting enabled or disabled:
What have I done exactly (using Python 3.7 this time): log in to my DSVM as user ubuntu
/anaconda/bin/conda create -y --name py37_errorML01 python=3.7
conda activate py37_errorML01
pip install psutil
pip install --upgrade ipykernel
git clone https://github.com/microsoft/responsible-ai-widgets.git
open the notebook with kernel py37_errorML01
%pip install --upgrade interpret-community
%pip install --upgrade raiwidgets
%pip install --upgrade lime
%pip install --upgrade rai-core-flask
Package Version
------------------- -------------------
backcall 0.2.0
certifi 2020.12.5
click 7.1.2
cycler 0.10.0
decorator 4.4.2
Flask 1.1.2
Flask-Cors 3.0.9
gevent 20.6.2
greenlet 0.4.16
imageio 2.9.0
interpret-community 0.17.1
interpret-core 0.2.4
ipykernel 5.5.0
ipython 7.16.1
ipython-genutils 0.2.0
itsdangerous 1.1.0
jedi 0.18.0
Jinja2 2.11.1
joblib 1.0.1
jupyter-client 6.1.11
jupyter-core 4.7.1
kiwisolver 1.3.1
lightgbm 3.1.1
lime 0.2.0.1
MarkupSafe 1.1.1
matplotlib 3.3.4
networkx 2.5
numpy 1.20.1
packaging 20.9
pandas 1.2.3
parso 0.8.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 8.1.2
pip 21.0.1
prompt-toolkit 3.0.17
psutil 5.8.0
ptyprocess 0.7.0
Pygments 2.8.1
pyparsing 2.4.7
python-dateutil 2.8.1
pytz 2021.1
PyWavelets 1.1.1
pyzmq 22.0.3
rai-core-flask 0.2.1
raiwidgets 0.2.2
scikit-image 0.18.1
scikit-learn 0.24.1
scipy 1.6.1
setuptools 52.0.0.post20210125
shap 0.34.0
six 1.15.0
threadpoolctl 2.1.0
tifffile 2021.3.5
tornado 6.1
tqdm 4.59.0
traitlets 5.0.5
wcwidth 0.2.5
Werkzeug 1.0.1
wheel 0.36.2
zope.event 4.5.0
zope.interface 5.2.0
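As a side note, the key versions above can also be printed from inside the notebook with a small sketch like this, using pkg_resources (the package names are just the ones upgraded earlier):
import pkg_resources

# Print the installed versions of the packages that were just upgraded.
for pkg in ("raiwidgets", "rai-core-flask", "interpret-community", "lime"):
    print(pkg, pkg_resources.get_distribution(pkg).version)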
I do notice that the webpage does not load properly within the notebook. Here is a screenshot:
To give more context on this issue: I am leaving this issue open for now, because the user still can't view the error analysis and what-if parts of the widgets in the notebook, although with the fixes mentioned the user can now at least view it in a new tab or see the static view of the widget. However the original bug is quite different from the last part that is blocking this from being closed.
Closing this issue, as the customer was unblocked and there haven't been comments here for a while. If someone is seeing something similar, please open a new issue on GitHub.
Hi, I just found out about the error analysis widget tool via an Azure AI video on YouTube. I hope to include this in my development toolkit, but so far I am failing. Maybe I can be of assistance by reporting my struggle to get this dashboard working.
I have set up an Azure data science VM with Ubuntu. Cores: 2 (2 GHz), Memory: 7.78 GB (21.8%), Swap: 8 GB, Disk: 145 GB (54.1%, ext4), System: 18.04.1-Ubuntu
I have currently installed the following on my notebook. Kernel name: py36_erroranalysis01, built like this:
and then I ran the whole notebook example from your git repo.
Below is the full pip list:
My VM has inbound and outbound port 5000 open.
I have this test server running for you to look at yourself if you like (I will leave it up for a week): https://40.113.65.18:8000/user/ubuntu/notebooks/responsible-ai-widgets/notebooks/erroranalysis-interpretability-dashboard-census.ipynb
I am getting an empty dashboard!
I would also like to be able to load the dashboard on the VM IP and access it remotely, like with ExplanationDashboard.
I would love to be able to do the same for error analysis, for example:
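Something along these lines (just a sketch mirroring the ExplanationDashboard call earlier in this thread; I do not know whether these parameters are supported here, which is exactly my question):
from raiwidgets import ErrorAnalysisDashboard

# Hoped-for usage: host the error analysis dashboard on a chosen port and the VM's public IP,
# reusing the explanation/model/test-set objects from the ExplanationDashboard example above
ErrorAnalysisDashboard(global_mimic_explanation, model,
                       dataset=x_test, true_y=y_test,
                       port=5000, public_ip="40.113.65.18")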
For some reason, maybe you have not implemented it yet, we cannot set this port and public_ip; it returns an error. Nor am I able to connect by replacing http://localhost:5000 with http://40.113.65.18:5000.
I appreciate your time and effort and this nice new tool, it looks amazing :-)