Closed TristanWright closed 7 years ago
Merging #549 into master will increase coverage by 0.47%. The diff coverage is 67.66%.
@@ Coverage Diff @@
## master #549 +/- ##
=========================================
+ Coverage 61.92% 62.4% +0.47%
=========================================
Files 66 68 +2
Lines 3278 3474 +196
=========================================
+ Hits 2030 2168 +138
- Misses 1248 1306 +58
Impacted Files | Coverage Δ | |
---|---|---|
src/utils/AccessHelper.js | 35.86% <ø> (ø) | :arrow_up: |
src/redux/reducers/preferences.js | 100% <ø> (ø) | :arrow_up: |
src/panels/run/index.js | 95.23% <ø> (+7.23%) | :arrow_up: |
src/network/remote/tasks.js | 50% <ø> (ø) | :arrow_up: |
src/panels/ImageIcon/index.js | 54.54% <ø> (ø) | |
src/redux/actions/taskflows.js | 56.29% <0%> (-1.25%) | :arrow_down: |
src/redux/actions/aws.js | 87.83% <100%> (-1.06%) | :arrow_down: |
src/network/index.js | 97.77% <100%> (+0.1%) | :arrow_up: |
src/network/remote/volumes.js | 30% <25%> (ø) | |
src/panels/run/RunEC2.js | 47.05% <38.46%> (-4.56%) | :arrow_down: |
... and 13 more | | |
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 920a366...83875da. Read the comment docs.
@cjh1 this is ready for you to try out.
I am trying to test this PR out and am getting the following error:
HPCCloud.js:58762 Uncaught (in promise) TypeError: Cannot read property '_id' of undefined
at Object.updateItem (HPCCloud.js:58762)
at projectsReducer (HPCCloud.js:58605)
at combination (HPCCloud.js:30074)
at dispatch (HPCCloud.js:29621)
at Object.dispatch (HPCCloud.js:54836)
at dispatch (HPCCloud.js:54882)
at HPCCloud.js:57182
What page is this error happening on?
When I try to create a project
So this is from a mistake I made when doing #583. 💀
I can run fully through a pyfr and a visualization job.
Still having an issue with the final step of actually visualizing, though; something with the visualizer itself, I think.
@TristanWright Awesome! Thanks for continuing to push on this :medal_sports:
I just tried to test this and am running into a problem with the PyFR workflow, I am seeing the following celery error:
[2017-02-24 21:40:11,052: ERROR/MainProcess] Received unregistered task of type u'hpccloud.taskflow.pyfr.setup_input'.
The message has been ignored and discarded.
It looks like the PyFR taskflow is not being picked up. I know @jourdain did some restructuring of the taskflow directory; we may need to rebase on master to see if it's fixed there.
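The "Received unregistered task" error means the worker never imported the module that defines `hpccloud.taskflow.pyfr.setup_input`, so the task name isn't in its registry. A toy Python sketch of that failure mode (the registry and names here are illustrative, not celery's actual internals):

```python
# Toy illustration of celery's "Received unregistered task" error:
# a worker only knows tasks whose defining modules it has imported.
REGISTRY = {}

def task(fn):
    # Registration happens at import time; if the defining module is
    # never imported, the name never lands in the registry.
    REGISTRY[fn.__name__] = fn
    return fn

@task
def setup_input(**kwargs):
    return 'configured'

def dispatch(name, **kwargs):
    if name not in REGISTRY:
        raise KeyError('Received unregistered task of type %r' % name)
    return REGISTRY[name](**kwargs)
```

In real celery the fix is making sure the worker imports the taskflow package (e.g. via its `include`/imports configuration), which is presumably what the cumulus restructuring affected.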
Looks like it's up to date, in which case this is probably broken in master. Which workflow were you using to test this branch, ParaViewWeb?
@jourdain Wasn't there a fix for this?
I tested this on a PyFR workflow; I've never seen that error though.
I've tested all workflows on a traditional cluster and everything was fine.
I think there was a fix I pushed in cumulus to pick up modules as well as files. But that's just what I remember.
Ok, I figured it out: I needed to rebase the cumulus branch on master; the fix was in cumulus.
Now I am seeing:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "cumulus/taskflow/__init__.py", line 117, in wrapped
return func(celery_task, *args, **kwargs)
File "/opt/hpccloud/hpccloud/server/taskflows/hpccloud/taskflow/pyfr/__init__.py", line 185, in setup_input
mesh_file_id = kwargs['input']['meshFile']['id']
KeyError: 'id'
Looks like the mesh file id is not being passed to the taskflow.
These are the SHAs I am using: cumulus 49cda2f2, hpccloud a82e4781.
I will do some more digging next week.
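One way to make this failure mode clearer would be a defensive lookup for the mesh file id in `setup_input`. This is a hypothetical sketch; the `_id` fallback is an assumption based on Girder-style documents, not code from this PR:

```python
# Hypothetical defensive check for setup_input: fail fast with a clear
# message when the mesh file id is missing from the taskflow input.
def get_mesh_file_id(kwargs):
    mesh_file = kwargs.get('input', {}).get('meshFile', {})
    # Girder-style documents sometimes carry '_id' instead of 'id'
    file_id = mesh_file.get('id') or mesh_file.get('_id')
    if file_id is None:
        raise ValueError(
            'meshFile id missing from taskflow input: %r' % (mesh_file,))
    return file_id
```

That would turn the bare `KeyError: 'id'` into an error that points directly at the missing input wiring.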
I was able to run a PyFR workflow to completion. However, no volume was created or mounted?
@TristanWright There seem to be two volume size fields?
I'm not seeing any log messages show up for the attach_volume(...) task.
OK, so I am now seeing the volume being attached and mounted on the head node at /data :smile: The final step is to get the simulation result written to this volume; for this we need to do two things:
Export /data to the other nodes in the cluster using NFS.
Update the job directory base to point to /data.
@TristanWright If you would like I can take a look at adding these last two parts?
@TristanWright There seem to be two volume size fields?
The first one is definitely redundant, I'll remove that.
we need to do two things: export /data to the other nodes in the cluster using NFS, and update the job directory base to point to /data. @TristanWright If you would like I can take a look at adding these last two parts?
I thought it was being uploaded to /data (or wait, are we just mounting it at /data)? Yeah, that would wrap up this branch while I'm working on sharing.
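A minimal sketch of what those two remaining steps amount to, assuming standard NFS export options and an illustrative job id (none of these values come from the actual taskflow):

```python
# Hypothetical sketch of the two remaining pieces: the /etc/exports
# entry the head node would need, and pointing the job directory base
# at the mounted volume. CIDR, options, and job id are assumptions.
def nfs_export_line(path='/data', cidr='10.0.0.0/24'):
    # e.g. "/data 10.0.0.0/24(rw,sync,no_subtree_check)"
    return '%s %s(rw,sync,no_subtree_check)' % (path, cidr)

def job_directory(base='/data', job_id='run42'):
    # Job output lands on the attached volume instead of local disk.
    return '%s/%s' % (base, job_id)
```

The real implementation would run the export/mount commands in a provisioning task; this only illustrates the shape of the configuration.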
Definitely work in progress
We can save and delete but not attach right now.