Closed rossbar closed 8 years ago

I got the following error when running `make analysis`. I included the top portion of the output as well because the run time on my machine was almost 16 hours.

Please push changes to correct the error. Also, are your results from the previous analysis saved or cached anywhere? In other words, after the fix can I run `nn.py` individually, or does it depend on upstream analysis? I would like to avoid running `make analysis` again if I can, given the huge runtime.
Hi, I'll have the fix out soon! `nn.py` is independent and can be run individually with `$ python nn.py`.
Would you like the hidden layer size reduced to make it run faster too?
No, you don't have to modify `nn.py`; that's not what was taking so long to run. As long as `nn.py` can run independently, that's fine; I'll do that. Let me know when you push your changes.
Scratch that: `nn.py` does take a really long time to run, but that's still not my concern. I would just like something that runs without errors, so let me know when you've fixed the issues.
@rossbar "Elapsed time is: 951 min, 58 sec" This is the run time for the regression analysis, which only takes ~20mins on my computer (Mac book Air) every time I ran it. It seems weird to me that it take so long on your machine.
Does your laptop have more than 16 GB of RAM? That's the only thing that would really slow it down.
I have 8 GB of RAM.
Hmm... have you run it from scratch? It's very strange to get an order-of-magnitude difference in compute time on similar systems. That's okay though; you are not penalized for the runtime or anything. Let me know when the corrections to `nn.py` have been made.
Hi Ross, it is merged!
Alright, I will pull and run again this afternoon. Thanks for taking a look.
Still getting errors:

```
cd code && python nn.py
5
937
649
4.41807739764e-09 1.26477264637e-10
4.41807739764e-09 1.0000010595e-09
(1890, 7984) (200, 7984) (1890, 943) (200, 943)
(7984, 5000) (5000, 10)
relu on: False
(73, 0.08669833729216152) 5.35623409669
(23, 0.027315914489311165) 7.85447761194
(0, 0.0) 10.0
(4, 0.004750593824228029) 9.54648526077
(0, 0.0) 10.0
(171, 0.20308788598574823) 3.29937304075
(0, 0.0) 10.0
(36, 0.04275534441805225) 7.00499168053
(53, 0.06294536817102138) 6.13702623907
(171, 0.20308788598574823) 3.29937304075
(0, 0.0) 10.0
(36, 0.04275534441805225) 7.00499168053
(22, 0.026128266033254157) 7.92843691149
(8, 0.009501187648456057) 9.13232104121
(0, 0.0) 10.0
(12, 0.014251781472684086) 8.7525987526
(36, 0.04275534441805225) 7.00499168053
(5, 0.0059382422802850355) 9.43946188341
(28, 0.0332541567695962) 7.50445632799
(22, 0.026128266033254157) 7.92843691149
(171, 0.20308788598574823) 3.29937304075
(0, 0.0) 10.0
(8, 0.009501187648456057) 9.13232104121
(73, 0.08669833729216152) 5.35623409669
(171, 0.20308788598574823) 3.29937304075
(36, 0.04275534441805225) 7.00499168053
(171, 0.20308788598574823) 3.29937304075
(171, 0.20308788598574823) 3.29937304075
(22, 0.026128266033254157) 7.92843691149
(0, 0.0) 10.0
(0, 0.0) 10.0
(9, 0.010688836104513063) 9.03433476395
(77, 0.09144893111638955) 5.22332506203
(77, 0.09144893111638955) 5.22332506203
(5, 0.0059382422802850355) 9.43946188341
(9, 0.010688836104513063) 9.03433476395
(5, 0.0059382422802850355) 9.43946188341
(171, 0.20308788598574823) 3.29937304075
FINAL ACC mostcommon: (18, 0.1978021978021978)
60.4990811339 0.0
ztest: None
Traceback (most recent call last):
  File "nn.py", line 333, in <module>
    pred = np.load("../data/nnpred_mostcommon.npy").astype(np.int)
  File "/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 369, in load
    fid = open(file, "rb")
IOError: [Errno 2] No such file or directory: '../data/nnpred_mostcommon.npy'
```
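The crash itself is just `np.load` being pointed at a file that was never written. For reference, a minimal guard along these lines would at least fail with an actionable message instead of a bare IOError (the path is copied from the traceback; whether rerunning the prediction step in `nn.py` regenerates the file is an assumption on my part):

```python
import os
import numpy as np

pred_path = "../data/nnpred_mostcommon.npy"  # path from the traceback above

if os.path.exists(pred_path):
    # Load the cached most-common-baseline predictions as integers
    pred = np.load(pred_path).astype(int)
else:
    # Fail with a clear message instead of a bare IOError from open()
    raise IOError("missing %s; rerun the prediction step in nn.py "
                  "(or make analysis) to regenerate it" % pred_path)
```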
Yikes, ok. I thought I had rerun it to make sure it worked. I'm really sorry about that; working on it now.
Hi Ross, we just merged the commit a bit ago. I really apologize for that; I had a final this morning and was mostly stressing over it. I fixed and checked the file, then accidentally misclicked something just before pushing.
I checked the L1 regression as well, so I hope there shouldn't be any further problems. If you want to run that one separately too, it can be run with `$ python regression_l1.py` from the `code` directory.
Sorry for the inconvenience.
No problem, thanks for taking a look! I'll close the issue when I can verify everything has run successfully.
I was able to run the entire build/analysis chain after a reclone. Thanks for taking care of this. :+1: