ConPaaS-team / conpaas

ConPaaS: integrated runtime environment for elastic cloud applications
http://www.conpaas.eu
BSD 3-Clause "New" or "Revised" License

xtreemfs request time out error #72

Closed gtato closed 9 years ago

gtato commented 9 years ago

Hi, I created a PHP service and an XtreemFS one using rc5 on OpenNebula. When I tried to mount the XtreemFS volume on the PHP agent, I got the following error:

[ E |  9/26 14:26:51.349 | 0x1c81430      ] Got no response from server 10.100.43.2:32638, retrying (infinite attempts left)

I then went on the XtreemFS agent and tried to list the volumes, and got this:

Listing all volumes of the MRC: localhost
[ E |  9/26 16:13:13.776 | 0x26d3cc0      ] The client encountered a communication error sending a request to the server: localhost:32636. Error: Request timed out (call id = 1, interface id = 20001, proc id = 36, server = localhost:32636).
Failed to list the volumes, error:
    Request timed out (call id = 1, interface id = 20001, proc id = 36, server = localhost:32636).

@noma, @tschuett any idea what is going wrong?

noma commented 9 years ago

Please check the manager and agent logs to see whether the XtreemFS services were started correctly.
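
For reference, a quick way to check this on the XtreemFS agent could look like the lines below. The service names and log path are assumptions based on the standard XtreemFS Debian packages; the ports are the ones from the error messages above.

# Check that the XtreemFS daemons are running (init script names assumed from the Debian packages)
service xtreemfs-dir status
service xtreemfs-mrc status
service xtreemfs-osd status

# Check that the DIR (32638) and MRC (32636) ports from the errors are actually listening
ss -tlnp | grep -E ':(32636|32638)'

# Look for startup errors in the XtreemFS logs (default log directory assumed)
tail -n 50 /var/log/xtreemfs/*.log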

tcrivat commented 9 years ago

Hi @gtato,

Your issue is caused by the fact that XtreemFS was changed to use SSL, so certificates need to be generated and used when accessing the service. This is discussed in the last messages from #71. Generating a certificate and using it when mounting the XtreemFS volume solved the problem for me.
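
For anyone hitting the same timeout, a rough sketch of mounting with a client certificate is below. It assumes the certificate has already been obtained from ConPaaS as a PKCS#12 bundle (client.p12 and the passphrase are placeholders), and that the installed XtreemFS client accepts the --pkcs12-file-path / --pkcs12-passphrase options; the address and volume name are just examples taken from this thread.

# Mount the volume over SSL with the generated client certificate
mount.xtreemfs --pkcs12-file-path /path/to/client.p12 --pkcs12-passphrase 'secret' 10.100.43.2/data /mnt/xtreemfs

# Listing the volumes on the MRC should need the same certificate once SSL is enforced
lsfs.xtreemfs --pkcs12-file-path /path/to/client.p12 --pkcs12-passphrase 'secret' localhost:32636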

gtato commented 9 years ago

Teodor, could you please check whether the Hadoop service is running correctly? I tried to run it, but there were some issues with the users. Sorry for not giving many details, but I am really busy right now. Cheers.


tschuett commented 9 years ago

A few weeks ago I tried to get Hadoop running in the Nutshell image. I spent a few days on it, but with no success. I don't know whether this only applies to the Nutshell.

tcrivat commented 9 years ago

I just tried Hadoop on the Amazon EC2 installation and it seems to start successfully; however, I didn't run any other tests.

tcrivat commented 9 years ago

This timeout issue happens because XtreemFS only allows SSL connections authenticated using certificates. Support for generating certificates has now been added to both the Frontend (web interface) and cps-tools (in cpsclient it was already present from the beginning).
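
As a side note, if the generated certificate is delivered as separate PEM files instead of a PKCS#12 bundle, a standard openssl conversion should produce the file the XtreemFS client tools expect (file names here are placeholders):

# Bundle the client certificate and key into a PKCS#12 file for the XtreemFS client
openssl pkcs12 -export -in user-cert.pem -inkey user-key.pem -out client.p12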

We can close this issue now, or leave it open until XtreemFS is changed to print a more relevant error in this case.

gpierre42 commented 9 years ago

It would be worthwhile to open an issue on this in the XtreemFS git repository, and link to it from here: https://github.com/xtreemfs/xtreemfs/issues

tcrivat commented 9 years ago

This has now been fixed in XtreemFS 1.5.1 by adding the hint "(Possible reason: The server is using SSL, and the client is not.)", so after the timeout the following message is printed:

root@conpaas:~# mount.xtreemfs 192.168.122.25/data /var/data
[ E | 3/24 12:27:25.514 | 0x236fff0 ] Got no response from server 192.168.122.25:32638, retrying (infinite attempts left) (Possible reason: The server is using SSL, and the client is not.)

This can now be closed.