Closed: bergtwvd closed this issue 8 years ago
LOGFILE FOR CONTAINER B:
WARN [main] portico.lrc: MOM support is currently unsupported in IEEE-1516e federations.
DEBUG [main] org.jgroups.conf.ClassConfigurator: Using jg-magic-map.xml as magic number file and jg-protocol-ids.xml for protocol IDs
DEBUG [main] org.jgroups.stack.Configurator: set property UDP.bind_addr to default value /10.10.0.2
DEBUG [main] org.jgroups.stack.Configurator: set property UDP.diagnostics_addr to default value /224.0.75.75
DEBUG [main] org.jgroups.protocols.FRAG2: received CONFIG event: {bind_addr=/10.10.0.2}
TRACE [main] org.jgroups.blocks.MessageDispatcher$ProtocolAdapter: setting local_addr (null) to e96f542b7216-17605
DEBUG [main] org.jgroups.protocols.FRAG2: received CONFIG event: {flush_supported=true}
TRACE [main] org.jgroups.protocols.pbcast.STABLE: stable task started
TRACE [main] org.jgroups.protocols.UNICAST2: e96f542b7216-17605: stable task started
DEBUG [main] org.jgroups.protocols.UDP: sockets will use interface 10.10.0.2
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the send buffer of socket DatagramSocket was set to 640KB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the receive buffer of socket DatagramSocket was set to 8MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the send buffer of socket MulticastSocket was set to 640KB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the receive buffer of socket MulticastSocket was set to 8MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
DEBUG [main] org.jgroups.protocols.UDP: socket information:
, mcast_addr=239.255.20.16:20913, bind_addr=/10.10.0.2, ttl=8
sock: bound to 10.10.0.2:56777, receive buffer size=212992, send buffer size=212992
mcast_sock: bound to 10.10.0.2:20913, send buffer size=212992, receive buffer size=212992
TRACE [main] org.jgroups.protocols.UDP: joined /224.0.75.75:7500 on ethwe
TRACE [main] org.jgroups.protocols.UDP: joined /224.0.75.75:7500 on eth0
TRACE [main] org.jgroups.protocols.UDP: joined /224.0.75.75:7500 on lo
TRACE [main] org.jgroups.protocols.UDP: sending msg to null, src=e96f542b7216-17605, headers are PING: [PING: type=GET_MBRS_REQ, cluster=ExampleFederation, arg=e96f542b7216-17605, view_id=, is_server=false, is_coord=false, logical_name=e96f542b7216-17605, physical_addrs=10.10.0.2:56777], UDP: [channel_name=ExampleFederation]
TRACE [main] org.jgroups.protocols.UDP: looping back message [dst:
TRACE [Incoming,ExampleFederation,e96f542b7216-17605] org.jgroups.protocols.pbcast.STABLE: e96f542b7216-17605: sending stability msg (in 294 ms) e96f542b7216-17605: [0]
TRACE [Incoming] org.jgroups.protocols.pbcast.STABLE: e96f542b7216-17605: sending stability msg e96f542b7216-17605: [0]
TRACE [Incoming] org.jgroups.protocols.UDP: sending msg to null, src=e96f542b7216-17605, headers are STABLE: [STABILITY]: digest is e96f542b7216-17605: [0 (5)], UDP: [channel_name=ExampleFederation]
TRACE [Incoming] org.jgroups.protocols.UDP: looping back message [dst:
TRACE [ViewHandler,ExampleFederation,e96f542b7216-17605] org.jgroups.protocols.pbcast.NAKACK2: e96f542b7216-17605 sending e96f542b7216-17605#7
TRACE [ViewHandler,ExampleFederation,e96f542b7216-17605] org.jgroups.protocols.UDP: sending msg to null, src=e96f542b7216-17605, headers are GMS: GmsHeader[VIEW]: view=[e96f542b7216-17605|1] [e96f542b7216-17605, a627c2c18b9a-60763], NAKACK2: [MSG, seqno=7], UDP: [channel_name=ExampleFederation]
TRACE [ViewHandler,ExampleFederation,e96f542b7216-17605] org.jgroups.protocols.UDP: looping back message [dst:
TRACE [Incoming,ExampleFederation,e96f542b7216-17605] org.jgroups.protocols.UDP: received [dst:
TRACE [ViewHandler,ExampleFederation,e96f542b7216-17605] org.jgroups.protocols.pbcast.NAKACK2: e96f542b7216-17605 sending e96f542b7216-17605#9
TRACE [ViewHandler,ExampleFederation,e96f542b7216-17605] org.jgroups.protocols.UDP: sending msg to null, src=e96f542b7216-17605, headers are GMS: GmsHeader[VIEW]: view=[e96f542b7216-17605|2] [e96f542b7216-17605], NAKACK2: [MSG, seqno=9], UDP: [channel_name=ExampleFederation]
TRACE [ViewHandler,ExampleFederation,e96f542b7216-17605] org.jgroups.protocols.UDP: looping back message [dst:
LOGFILE FOR CONTAINER A:
WARN [main] portico.lrc: MOM support is currently unsupported in IEEE-1516e federations.
DEBUG [main] org.jgroups.conf.ClassConfigurator: Using jg-magic-map.xml as magic number file and jg-protocol-ids.xml for protocol IDs
DEBUG [main] org.jgroups.stack.Configurator: set property UDP.bind_addr to default value /10.10.0.1
DEBUG [main] org.jgroups.stack.Configurator: set property UDP.diagnostics_addr to default value /224.0.75.75
DEBUG [main] org.jgroups.protocols.FRAG2: received CONFIG event: {bind_addr=/10.10.0.1}
TRACE [main] org.jgroups.blocks.MessageDispatcher$ProtocolAdapter: setting local_addr (null) to a627c2c18b9a-60763
DEBUG [main] org.jgroups.protocols.FRAG2: received CONFIG event: {flush_supported=true}
TRACE [main] org.jgroups.protocols.pbcast.STABLE: stable task started
TRACE [main] org.jgroups.protocols.UNICAST2: a627c2c18b9a-60763: stable task started
DEBUG [main] org.jgroups.protocols.UDP: sockets will use interface 10.10.0.1
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the send buffer of socket DatagramSocket was set to 640KB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the receive buffer of socket DatagramSocket was set to 8MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the send buffer of socket MulticastSocket was set to 640KB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the receive buffer of socket MulticastSocket was set to 8MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
DEBUG [main] org.jgroups.protocols.UDP: socket information:
, mcast_addr=239.255.20.16:20913, bind_addr=/10.10.0.1, ttl=8
sock: bound to 10.10.0.1:48841, receive buffer size=212992, send buffer size=212992
mcast_sock: bound to 10.10.0.1:20913, send buffer size=212992, receive buffer size=212992
TRACE [main] org.jgroups.protocols.UDP: joined /224.0.75.75:7500 on ethwe
TRACE [main] org.jgroups.protocols.UDP: joined /224.0.75.75:7500 on eth0
TRACE [main] org.jgroups.protocols.UDP: joined /224.0.75.75:7500 on lo
TRACE [main] org.jgroups.protocols.UDP: sending msg to null, src=a627c2c18b9a-60763, headers are PING: [PING: type=GET_MBRS_REQ, cluster=ExampleFederation, arg=a627c2c18b9a-60763, view_id=, is_server=false, is_coord=false, logical_name=a627c2c18b9a-60763, physical_addrs=10.10.0.1:48841], UDP: [channel_name=ExampleFederation]
TRACE [main] org.jgroups.protocols.UDP: looping back message [dst:
DEBUG [main] org.jgroups.protocols.pbcast.NAKACK2:
[a627c2c18b9a-60763 setDigest()]
existing digest: []
new digest: e96f542b7216-17605: [6 (6)], a627c2c18b9a-60763: [0 (0)]
resulting digest: a627c2c18b9a-60763: [0 (0)], e96f542b7216-17605: [6 (6)]
DEBUG [main] org.jgroups.protocols.pbcast.GMS: a627c2c18b9a-60763: installing view [e96f542b7216-17605|1] [e96f542b7216-17605, a627c2c18b9a-60763]
TRACE [main] org.jgroups.protocols.pbcast.STABLE: a627c2c18b9a-60763: resetting digest from NAKACK: a627c2c18b9a-60763: [0], e96f542b7216-17605: [6]
TRACE [Incoming,ExampleFederation,a627c2c18b9a-60763] org.jgroups.protocols.UDP: received [dst:
STACK TRACE OF CONTAINER A:
ERROR [main] portico.lrc: org.portico.lrc.compat.JRTIinternalError: Waited 5 seconds for RoleCall from federate [1], none received, connection error
hla.rti1516e.exceptions.RTIinternalError: org.portico.lrc.compat.JRTIinternalError: Waited 5 seconds for RoleCall from federate [1], none received, connection error
    at org.portico.impl.hla1516e.Rti1516eAmbassador.joinFederationExecution(Rti1516eAmbassador.java:668)
    at ieee1516e.ExampleFederate.runFederate(Unknown Source)
    at ieee1516e.ExampleFederate.main(Unknown Source)
Caused by: org.portico.lrc.compat.JRTIinternalError: Waited 5 seconds for RoleCall from federate [1], none received, connection error
    at org.portico.lrc.services.federation.handlers.outgoing.JoinFederationHandler.process(JoinFederationHandler.java:146)
    at org.portico.utils.messaging.MessageSink.process(MessageSink.java:187)
    at org.portico.impl.hla1516e.Impl1516eHelper.processMessage(Impl1516eHelper.java:99)
    at org.portico.impl.hla1516e.Rti1516eAmbassador.processMessage(Rti1516eAmbassador.java:5554)
    at org.portico.impl.hla1516e.Rti1516eAmbassador.joinFederationExecution(Rti1516eAmbassador.java:647)
    ... 2 more
Dockerfile #1
FROM ubuntu:14.04
MAINTAINER Tom van den Berg tom.vandenberg@tno.nl

RUN apt-get update && apt-get install -y supervisor

COPY ./port /usr/local/portico/
WORKDIR /usr/local/portico
RUN tar -xvf portico-2.0.1-linux64.tar
RUN rm portico-2.0.1-linux64.tar

ENV RTI_HOME=/usr/local/portico/portico-2.0.1
ENV JAVA_HOME=$RTI_HOME/jre
ENV CLASSPATH=$CLASSPATH:$RTI_HOME/lib/portico.jar
ENV PATH=$PATH:$JAVA_HOME/bin

CMD ["/bin/bash"]
Dockerfile #2
FROM bergtwvd/po-rti-base:2.0.1
COPY ./sample /tmp/sample
WORKDIR /tmp/sample
ENTRYPOINT ["/bin/sh", "./start.sh"]
start.sh starts the sample program:
java -cp ./sample.jar:$RTI_HOME/lib/portico.jar ieee1516e.ExampleFederate $*
Additional tests show:
Who is at fault here: Weave, JGroups, or Portico?
Hi there bergtwvd, thanks for your report. Just a quick note to advise that it has been received.
I'll have a chat with Tim about what might be at fault here. UDP can be a bit of a fickle beast, especially when VMs are involved.
Some suggestions below:
Hope that helps,
Michael
Michael, thanks for your input.
I have different setups for the VB networking, depending on which network my laptop is connected to (work or home).
At home I set up both VB instances as follows:
At work I use Host Only for adapter 2, with some additional network configuration inside docker for IP routing.
Weave creates an overlay network, using the IP address of adapter 2 to communicate between hosts. Weave does not create additional adapters. When I start containers and attach them to Weave I can ping them on their Weave overlay IP address. When using the Pitch RTI (configured to use TCP/IP), the Weave network works fine. I can for example run part of the federation in AWS and part on my laptop, using Weave. So I am flexible about where I move my containers.
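For reference, the setup looks roughly like this (a sketch from memory; the exact Weave syntax may differ between versions, and the names in angle brackets are placeholders, not the real values):

# On VB#1:
weave launch
# On VB#2, peering with VB#1 via its adapter-2 address:
weave launch <VB1-adapter2-address>
# Start a container and attach it to the overlay with a fixed address:
docker run -ti --name fed1 <sample-image>
weave attach 11.11.11.1/24 fed1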
The following JGroups multicast test appears to succeed:
Where:
The 11.x.x.x address is a Weave overlay address.
The 10.1.x.x address is for VB#1.
The 10.2.x.x address is for VB#2.
On a VB#2 container:
root@475d52fd2266:/tmp/sample# java -cp $RTI_HOME/lib/portico.jar org.jgroups.tests.McastReceiverTest -mcast_addr 228.10.10.10 -port 20913
Socket=0.0.0.0/0.0.0.0:20913, bind interface=/fe80:0:0:0:b460:cdff:fec3:1c8d%ethwe
Socket=0.0.0.0/0.0.0.0:20913, bind interface=/11.11.11.3
Socket=0.0.0.0/0.0.0.0:20913, bind interface=/fe80:0:0:0:42:aff:fe02:104%eth0
Socket=0.0.0.0/0.0.0.0:20913, bind interface=/10.2.1.4
Socket=0.0.0.0/0.0.0.0:20913, bind interface=/0:0:0:0:0:0:0:1%lo
Socket=0.0.0.0/0.0.0.0:20913, bind interface=/127.0.0.1
HELLO [sender=11.11.11.1:20913]
HELLO [sender=11.11.11.1:20913]
HELLO [sender=11.11.11.1:20913]
HELLO [sender=11.11.11.1:20913]
HELLO [sender=11.11.11.1:20913]
HELLO [sender=11.11.11.1:20913]
HELLO [sender=11.11.11.1:20913]
HELLO [sender=11.11.11.1:20913]
On a VB#1 container:
root@00f7305172ed:/tmp/sample# java -cp $RTI_HOME/lib/portico.jar org.jgroups.tests.McastSenderTest -mcast_addr 228.10.10.10 -port 20913
Socket #1=0.0.0.0/0.0.0.0:20913, ttl=32, bind interface=/fe80:0:0:0:f07e:4eff:fe8e:a744%ethwe
Socket #2=0.0.0.0/0.0.0.0:20913, ttl=32, bind interface=/11.11.11.1
Socket #3=0.0.0.0/0.0.0.0:20913, ttl=32, bind interface=/fe80:0:0:0:42:aff:fe01:104%eth0
Socket #4=0.0.0.0/0.0.0.0:20913, ttl=32, bind interface=/10.1.1.4
Socket #5=0.0.0.0/0.0.0.0:20913, ttl=32, bind interface=/0:0:0:0:0:0:0:1%lo
Socket #6=0.0.0.0/0.0.0.0:20913, ttl=32, bind interface=/127.0.0.1
HELLO
As a fallback I would like to configure JGroups to use TCP and see if that works. Did you ever run the Portico RTI with such a JGroups configuration?
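I have not tried it yet, but as a quick sanity check of plain JGroups over TCP between the two containers, something like the console Chat demo with the stock tcp.xml stack might work. This is only a sketch, not a Portico configuration; it assumes the standard JGroups tcp.xml and its jgroups.* system properties are available on the classpath, which may require a separate JGroups distribution if they are not bundled in portico.jar.

On the VB#1 container (11.11.11.1):
java -cp $RTI_HOME/lib/portico.jar -Djgroups.bind_addr=11.11.11.1 -Djgroups.tcpping.initial_hosts="11.11.11.1[7800],11.11.11.3[7800]" org.jgroups.demos.Chat -props tcp.xml

On the VB#2 container (11.11.11.3): the same command with -Djgroups.bind_addr=11.11.11.3.

If messages typed on one side appear on the other, TCP across the Weave overlay is at least viable at the JGroups level.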
-- Tom
From: Michael Fraser [mailto:notifications@github.com]
Sent: Tuesday, 2 June 2015 13:49
To: openlvc/portico
Cc: Berg, T.W. (Tom) van den
Subject: Re: [portico] Running Portico example across Docker Weave does not work (#140)
Hi there bergtwvd, thanks for your report. Just a quick note to advise that it has been received.
I'll have a chat with Tim about what might be at fault here. UDP can be a bit of a fickle beast, especially when VMs are involved.
Some suggestions below:
In the past I've had to change the VirtualBox network adaptor type between NAT and Bridged to get UDP connectivity with the physical network (I can't remember which of the two worked, but I think it might be Bridged). I'm not familiar with the details of Weave, or how it fits into the VirtualBox ecosystem, but it sounds like it replaces the VirtualBox network adaptor completely?
If your machines have statically assigned IP addresses, try setting a default gateway in your network config.
The other thing I'd recommend would be to run Wireshark on both machines and sniff traffic on 224.0.75.75:7500. Check whether Wireshark can see traffic from both the local machine (which should mean that JGroups is working) and the remote machine (which is what Weave will have the most influence over).
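For example, a capture along those lines might look like this (assuming tcpdump is available in the containers; the Weave interface shows up as ethwe in the logs, and the same filter with 239.255.20.16 and port 20913 would cover the cluster traffic itself):

tcpdump -n -i ethwe 'udp and dst host 224.0.75.75 and dst port 7500'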
Hope that helps,
Michael
I have uploaded a Docker image on the Docker Hub that can be used for testing.
For more information see: https://registry.hub.docker.com/u/bergtwvd/po/
-- Tom
The trace output suggests that the two containers did find each other.
In B we have
TRACE [main] org.jgroups.protocols.PING: discovery took 5007 ms: responses: 1 total (0 servers (0 coord), 1 clients)
TRACE [main] org.jgroups.protocols.pbcast.GMS: e96f542b7216-17605: no initial members discovered: creating group as first member
whereas A shows
TRACE [main] org.jgroups.protocols.PING: discovery took 16 ms: responses: 2 total (1 servers (1 coord), 1 clients)
TRACE [main] org.jgroups.protocols.pbcast.GMS: a627c2c18b9a-60763: initial_mbrs are e96f542b7216-17605
DEBUG [main] org.jgroups.protocols.pbcast.GMS: election results: {e96f542b7216-17605=1}
DEBUG [main] org.jgroups.protocols.pbcast.GMS: sending JOIN(a627c2c18b9a-60763) to e96f542b7216-17605
TRACE [main] org.jgroups.protocols.UNICAST2: a627c2c18b9a-60763: created connection to e96f542b7216-17605 (conn_id=0)
Yes, that is correct, that is what I see happening as well. It goes wrong after that.
At application level, federate A wants to join the federation after its attempt to create it first. Both federates A and B run the same code shown below (an excerpt).
I will probably do another run to get additional debug output from Portico, since it is hard to put the JGroups log statements into context.
<.../>
///////////////////////////////////////////////////////////////////////////
////////////////////////// Main Simulation Method /////////////////////////
///////////////////////////////////////////////////////////////////////////
/**
 * This is the main simulation loop. It can be thought of as the main method of
 * the federate. For a description of the basic flow of this federate, see the
 * class level comments
 */
public void runFederate( String federateName ) throws Exception
{
    /////////////////////////////////////////////////
    // 1 & 2. create the RTIambassador and Connect //
    /////////////////////////////////////////////////
    log( "Creating RTIambassador" );
    rtiamb = RtiFactoryFactory.getRtiFactory().getRtiAmbassador();
    encoderFactory = RtiFactoryFactory.getRtiFactory().getEncoderFactory();

    // connect
    log( "Connecting..." );
    fedamb = new ExampleFederateAmbassador( this );
    rtiamb.connect( fedamb, CallbackModel.HLA_EVOKED );

    //////////////////////////////
    // 3. create the federation //
    //////////////////////////////
    log( "Creating Federation..." );
    // We attempt to create a new federation with the first three of the
    // restaurant FOM modules covering processes, food and drink
    try
    {
        URL[] modules = new URL[]{
            (new File("foms/RestaurantProcesses.xml")).toURI().toURL(),
            (new File("foms/RestaurantFood.xml")).toURI().toURL(),
            (new File("foms/RestaurantDrinks.xml")).toURI().toURL()
        };

        rtiamb.createFederationExecution( "ExampleFederation", modules );
        log( "Created Federation" );
    }
    catch( FederationExecutionAlreadyExists exists )
    {
        log( "Didn't create federation, it already existed" );
    }
    catch( MalformedURLException urle )
    {
        log( "Exception loading one of the FOM modules from disk: " + urle.getMessage() );
        urle.printStackTrace();
        return;
    }

    ////////////////////////////
    // 4. join the federation //
    ////////////////////////////
    URL[] joinModules = new URL[]{
        (new File("foms/RestaurantSoup.xml")).toURI().toURL()
    };

    rtiamb.joinFederationExecution( federateName,           // name for the federate
                                    "ExampleFederateType",  // federate type
                                    "ExampleFederation",    // name of federation
                                    joinModules );          // modules we want to add
    log( "Joined Federation as " + federateName );
<.../>
I have run the test again, with TRACE logging enabled for both Portico and JGroups. Federate XXX is started first (IP 11.11.11.1/24). When XXX waits for user input, federate YYY is started (IP 11.11.11.2/24).
Both IP addresses can be pinged in the Weave network.
Again a stack trace at YYY. See the next posts.
Some observations:
Log XXX line 253: GET_MBRS_REQ message from YYY (corresponds roughly to Log YYY line 104)
Log XXX line 308: JOIN_RSP message to YYY (corresponds to Log YYY line 121)
Log YYY line 163: portico.lrc.jgroups: SUCCESS Connected to channel
Log YYY line 181-184: YYY creates a connection to itself?
FEDERATE XXX
DEBUG [main] portico.lrc: Creating new LRC
DEBUG [main] portico.lrc: Portico version: 2.0.1 (build 0)
DEBUG [main] portico.lrc: Interface: IEEE1516e
WARN [main] portico.lrc: MOM support is currently unsupported in IEEE-1516e federations.
TRACE [main] portico.lrc: Provided connection implementation is "org.portico.bindings.jgroups.JGroupsConnection"
TRACE [main] portico.lrc: Trying to load connection class: org.portico.bindings.jgroups.JGroupsConnection
TRACE [main] portico.lrc: ATTEMPT create IConnection, class= class org.portico.bindings.jgroups.JGroupsConnection
TRACE [main] portico.lrc: SUCCESS created IConnection, class= class org.portico.bindings.jgroups.JGroupsConnection
TRACE [main] portico.lrc: Applying modules using component keyword: lrc1516e
TRACE [main] portico.lrc: STARTING Apply module [lrc-base] to LRC
TRACE [main] portico.lrc: Applied [82/92] handlers
TRACE [main] portico.lrc: STARTING Apply module [lrc1516-callback] to LRC
TRACE [main] portico.lrc: Applied [0/11] handlers
TRACE [main] portico.lrc: STARTING Apply module [lrc1516e-callback] to LRC
TRACE [main] portico.lrc: Applied [24/24] handlers
TRACE [main] portico.lrc: STARTING Apply module [lrc13-callback] to LRC
TRACE [main] portico.lrc: Applied [0/24] handlers
DEBUG [main] portico.lrc: Messaging framework configuration complete
INFO [main] portico.lrc: LRC initialized (HLA version: IEEE1516e)
INFO [main] portico.lrc: Opening LRC Connection
INFO [main] portico.lrc.jgroups: jgroups connection is up and running
DEBUG [main] portico.lrc.fom: Parsing FED file (format=ieee1516e): file:/usr/local/portico/portico-2.0.1/examples/java/ieee1516e/foms/RestaurantProcesses.xml
DEBUG [main] portico.lrc.fom: Parsing FED file (format=ieee1516e): file:/usr/local/portico/portico-2.0.1/examples/java/ieee1516e/foms/RestaurantFood.xml
DEBUG [main] portico.lrc.fom: Parsing FED file (format=ieee1516e): file:/usr/local/portico/portico-2.0.1/examples/java/ieee1516e/foms/RestaurantDrinks.xml
DEBUG [main] portico.lrc: Standard MIM not present - adding it
DEBUG [main] portico.lrc.fom: Parsing FED file (format=ieee1516e): jar:file:/usr/local/portico/portico-2.0.1/lib/portico.jar!/etc/ieee1516e/HLAstandardMIM.xml
TRACE [main] portico.lrc.merger: Beginning merge of 4 FOM models
TRACE [main] portico.lrc.merger: Merging [file:/usr/local/portico/portico-2.0.1/examples/java/ieee1516e/foms/RestaurantProcesses.xml] into combined FOM
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Employee]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Employee.Waiter]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Employee.Cashier]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Employee.Greeter]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Employee.Dishwasher]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Employee.Cook]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Customer]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Order]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.CustomerSeated]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.CustomerPays]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.CustomerPays.ByCreditCard]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.CustomerPays.ByCash]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.OrderTaken]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.OrderTaken.FromAdultMeny]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.OrderTaken.FromKidsMenu]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.FoodServed]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.FoodServed.DessertServed]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.FoodServed.MainCourseServed]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.FoodServed.DrinkServed]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.FoodServed.AppetizerServed]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.CustomerLeaves]
TRACE [main] portico.lrc.merger: Merging [file:/usr/local/portico/portico-2.0.1/examples/java/ieee1516e/foms/RestaurantFood.xml] into combined FOM
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Appetizers]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Appetizers.Soup]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Appetizers.Nachos]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Drink]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.MainCourse]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.SideDish]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.SideDish.Broccoli]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.SideDish.BakedPotato]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.SideDish.Corn]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Dessert]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Dessert.Cake]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Dessert.IceCream]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Dessert.IceCream.Vanilla]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Dessert.IceCream.Chocolate]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree.Pasta]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree.Beef]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree.Chicken]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree.Seafood]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree.Seafood.Fish]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree.Seafood.Lobster]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree.Seafood.Shrimp]
TRACE [main] portico.lrc.merger: Merging [file:/usr/local/portico/portico-2.0.1/examples/java/ieee1516e/foms/RestaurantDrinks.xml] into combined FOM
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Drink.Coffee]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Drink.Water]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Drink.Soda]
DEBUG [main] portico.lrc: ATTEMPT Create federation execution [ExampleFederation]
TRACE [main] portico.lrc.jgroups: ATTEMPT Connecting to channel [ExampleFederation]
DEBUG [main] org.jgroups.conf.ClassConfigurator: Using jg-magic-map.xml as magic number file and jg-protocol-ids.xml for protocol IDs
DEBUG [main] org.jgroups.stack.Configurator: set property UDP.diagnostics_addr to default value /224.0.75.75
DEBUG [main] org.jgroups.protocols.FRAG2: received CONFIG event: {bind_addr=/11.11.11.1}
TRACE [main] org.jgroups.blocks.MessageDispatcher$ProtocolAdapter: setting local_addr (null) to 79e078919f2e-55593
DEBUG [main] org.jgroups.protocols.FRAG2: received CONFIG event: {flush_supported=true}
TRACE [main] org.jgroups.protocols.pbcast.STABLE: stable task started
TRACE [main] org.jgroups.protocols.UNICAST2: 79e078919f2e-55593: stable task started
DEBUG [main] org.jgroups.protocols.UDP: sockets will use interface 11.11.11.1
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the send buffer of socket DatagramSocket was set to 640KB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the receive buffer of socket DatagramSocket was set to 8MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the send buffer of socket MulticastSocket was set to 640KB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the receive buffer of socket MulticastSocket was set to 8MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
DEBUG [main] org.jgroups.protocols.UDP: socket information:
, mcast_addr=239.255.20.16:20913, bind_addr=/11.11.11.1, ttl=8
sock: bound to 11.11.11.1:33013, receive buffer size=212992, send buffer size=212992
mcast_sock: bound to 11.11.11.1:20913, send buffer size=212992, receive buffer size=212992
TRACE [main] org.jgroups.protocols.UDP: joined /224.0.75.75:7500 on ethwe
TRACE [main] org.jgroups.protocols.UDP: joined /224.0.75.75:7500 on eth0
TRACE [main] org.jgroups.protocols.UDP: joined /224.0.75.75:7500 on lo
TRACE [main] org.jgroups.protocols.UDP: sending msg to null, src=79e078919f2e-55593, headers are PING: [PING: type=GET_MBRS_REQ, cluster=ExampleFederation, arg=79e078919f2e-55593, view_id=, is_server=false, is_coord=false, logical_name=79e078919f2e-55593, physical_addrs=11.11.11.1:33013], UDP: [channel_name=ExampleFederation]
TRACE [main] org.jgroups.protocols.UDP: looping back message [dst:
TRACE [Incoming,ExampleFederation,79e078919f2e-55593] org.jgroups.protocols.pbcast.STABLE: 79e078919f2e-55593: sending stability msg (in 324 ms) 79e078919f2e-55593: [0]
TRACE [Incoming] org.jgroups.protocols.pbcast.STABLE: 79e078919f2e-55593: sending stability msg 79e078919f2e-55593: [0]
TRACE [Incoming] org.jgroups.protocols.UDP: sending msg to null, src=79e078919f2e-55593, headers are STABLE: [STABILITY]: digest is 79e078919f2e-55593: [0 (5)], UDP: [channel_name=ExampleFederation]
TRACE [Incoming] org.jgroups.protocols.UDP: looping back message [dst:
TRACE [ViewHandler,ExampleFederation,79e078919f2e-55593] org.jgroups.protocols.pbcast.NAKACK2: 79e078919f2e-55593 sending 79e078919f2e-55593#7
TRACE [ViewHandler,ExampleFederation,79e078919f2e-55593] org.jgroups.protocols.UDP: sending msg to null, src=79e078919f2e-55593, headers are GMS: GmsHeader[VIEW]: view=[79e078919f2e-55593|1] [79e078919f2e-55593, 5f7daed5d167-29951], NAKACK2: [MSG, seqno=7], UDP: [channel_name=ExampleFederation]
TRACE [ViewHandler,ExampleFederation,79e078919f2e-55593] org.jgroups.protocols.UDP: looping back message [dst:
TRACE [Incoming,ExampleFederation,79e078919f2e-55593] org.jgroups.protocols.UDP: received [dst:
TRACE [Incoming,ExampleFederation,79e078919f2e-55593] org.jgroups.protocols.pbcast.STABLE: 79e078919f2e-55593: sending stability msg (in 1555 ms) 5f7daed5d167-29951: [0], 79e078919f2e-55593: [7]
TRACE [Incoming] org.jgroups.protocols.pbcast.NAKACK2: 79e078919f2e-55593: sending XMIT_REQ ([2]) to 5f7daed5d167-29951
TRACE [Incoming] org.jgroups.protocols.UDP: sending msg to 5f7daed5d167-29951, src=79e078919f2e-55593, headers are NAKACK2: [XMIT_REQ, sender=5f7daed5d167-29951], UDP: [channel_name=ExampleFederation]
TRACE [Incoming] org.jgroups.protocols.pbcast.NAKACK2: 79e078919f2e-55593: sending XMIT_REQ ([2]) to 5f7daed5d167-29951
TRACE [Incoming] org.jgroups.protocols.UDP: sending msg to 5f7daed5d167-29951, src=79e078919f2e-55593, headers are NAKACK2: [XMIT_REQ, sender=5f7daed5d167-29951], UDP: [channel_name=ExampleFederation]
TRACE [Incoming,ExampleFederation,79e078919f2e-55593] org.jgroups.protocols.UDP: received [dst:
TRACE [ViewHandler,ExampleFederation,79e078919f2e-55593] org.jgroups.protocols.pbcast.NAKACK2: 79e078919f2e-55593 sending 79e078919f2e-55593#9
TRACE [ViewHandler,ExampleFederation,79e078919f2e-55593] org.jgroups.protocols.UDP: sending msg to null, src=79e078919f2e-55593, headers are GMS: GmsHeader[VIEW]: view=[79e078919f2e-55593|2] [79e078919f2e-55593], NAKACK2: [MSG, seqno=9], UDP: [channel_name=ExampleFederation]
TRACE [ViewHandler,ExampleFederation,79e078919f2e-55593] org.jgroups.protocols.UDP: looping back message [dst:
FEDERATE YYY
DEBUG [main] portico.lrc: Creating new LRC
DEBUG [main] portico.lrc: Portico version: 2.0.1 (build 0)
DEBUG [main] portico.lrc: Interface: IEEE1516e
WARN [main] portico.lrc: MOM support is currently unsupported in IEEE-1516e federations.
TRACE [main] portico.lrc: Provided connection implementation is "org.portico.bindings.jgroups.JGroupsConnection"
TRACE [main] portico.lrc: Trying to load connection class: org.portico.bindings.jgroups.JGroupsConnection
TRACE [main] portico.lrc: ATTEMPT create IConnection, class= class org.portico.bindings.jgroups.JGroupsConnection
TRACE [main] portico.lrc: SUCCESS created IConnection, class= class org.portico.bindings.jgroups.JGroupsConnection
TRACE [main] portico.lrc: Applying modules using component keyword: lrc1516e
TRACE [main] portico.lrc: STARTING Apply module [lrc13-callback] to LRC
TRACE [main] portico.lrc: Applied [0/24] handlers
TRACE [main] portico.lrc: STARTING Apply module [lrc-base] to LRC
TRACE [main] portico.lrc: Applied [82/92] handlers
TRACE [main] portico.lrc: STARTING Apply module [lrc1516-callback] to LRC
TRACE [main] portico.lrc: Applied [0/11] handlers
TRACE [main] portico.lrc: STARTING Apply module [lrc1516e-callback] to LRC
TRACE [main] portico.lrc: Applied [24/24] handlers
DEBUG [main] portico.lrc: Messaging framework configuration complete
INFO [main] portico.lrc: LRC initialized (HLA version: IEEE1516e)
INFO [main] portico.lrc: Opening LRC Connection
INFO [main] portico.lrc.jgroups: jgroups connection is up and running
DEBUG [main] portico.lrc.fom: Parsing FED file (format=ieee1516e): file:/usr/local/portico/portico-2.0.1/examples/java/ieee1516e/foms/RestaurantProcesses.xml
DEBUG [main] portico.lrc.fom: Parsing FED file (format=ieee1516e): file:/usr/local/portico/portico-2.0.1/examples/java/ieee1516e/foms/RestaurantFood.xml
DEBUG [main] portico.lrc.fom: Parsing FED file (format=ieee1516e): file:/usr/local/portico/portico-2.0.1/examples/java/ieee1516e/foms/RestaurantDrinks.xml
DEBUG [main] portico.lrc: Standard MIM not present - adding it
DEBUG [main] portico.lrc.fom: Parsing FED file (format=ieee1516e): jar:file:/usr/local/portico/portico-2.0.1/lib/portico.jar!/etc/ieee1516e/HLAstandardMIM.xml
TRACE [main] portico.lrc.merger: Beginning merge of 4 FOM models
TRACE [main] portico.lrc.merger: Merging [file:/usr/local/portico/portico-2.0.1/examples/java/ieee1516e/foms/RestaurantProcesses.xml] into combined FOM
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Order]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Employee]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Employee.Greeter]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Employee.Waiter]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Employee.Cashier]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Employee.Cook]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Employee.Dishwasher]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Customer]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.CustomerSeated]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.CustomerPays]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.CustomerPays.ByCash]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.CustomerPays.ByCreditCard]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.FoodServed]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.FoodServed.MainCourseServed]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.FoodServed.DessertServed]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.FoodServed.AppetizerServed]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.FoodServed.DrinkServed]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.CustomerLeaves]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.OrderTaken]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.OrderTaken.FromKidsMenu]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAinteractionRoot.CustomerTransactions.OrderTaken.FromAdultMeny]
TRACE [main] portico.lrc.merger: Merging [file:/usr/local/portico/portico-2.0.1/examples/java/ieee1516e/foms/RestaurantFood.xml] into combined FOM
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Appetizers]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Appetizers.Soup]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Appetizers.Nachos]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Drink]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Dessert]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Dessert.IceCream]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Dessert.IceCream.Chocolate]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Dessert.IceCream.Vanilla]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Dessert.Cake]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.MainCourse]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree.Seafood]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree.Seafood.Lobster]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree.Seafood.Shrimp]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree.Seafood.Fish]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree.Pasta]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree.Beef]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Entree.Chicken]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.SideDish]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.SideDish.Corn]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.SideDish.Broccoli]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.SideDish.BakedPotato]
TRACE [main] portico.lrc.merger: Merging [file:/usr/local/portico/portico-2.0.1/examples/java/ieee1516e/foms/RestaurantDrinks.xml] into combined FOM
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Drink.Water]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Drink.Coffee]
TRACE [main] portico.lrc.merger: -> Inserting class [HLAobjectRoot.Food.Drink.Soda]
DEBUG [main] portico.lrc: ATTEMPT Create federation execution [ExampleFederation]
TRACE [main] portico.lrc.jgroups: ATTEMPT Connecting to channel [ExampleFederation]
DEBUG [main] org.jgroups.conf.ClassConfigurator: Using jg-magic-map.xml as magic number file and jg-protocol-ids.xml for protocol IDs
DEBUG [main] org.jgroups.stack.Configurator: set property UDP.diagnostics_addr to default value /224.0.75.75
DEBUG [main] org.jgroups.protocols.FRAG2: received CONFIG event: {bind_addr=/11.11.11.2}
TRACE [main] org.jgroups.blocks.MessageDispatcher$ProtocolAdapter: setting local_addr (null) to 5f7daed5d167-29951
DEBUG [main] org.jgroups.protocols.FRAG2: received CONFIG event: {flush_supported=true}
TRACE [main] org.jgroups.protocols.pbcast.STABLE: stable task started
TRACE [main] org.jgroups.protocols.UNICAST2: 5f7daed5d167-29951: stable task started
DEBUG [main] org.jgroups.protocols.UDP: sockets will use interface 11.11.11.2
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the send buffer of socket DatagramSocket was set to 640KB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the receive buffer of socket DatagramSocket was set to 8MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the send buffer of socket MulticastSocket was set to 640KB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
WARN [main] org.jgroups.protocols.UDP: [JGRP00014] the receive buffer of socket MulticastSocket was set to 8MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
DEBUG [main] org.jgroups.protocols.UDP: socket information:
, mcast_addr=239.255.20.16:20913, bind_addr=/11.11.11.2, ttl=8
sock: bound to 11.11.11.2:33187, receive buffer size=212992, send buffer size=212992
mcast_sock: bound to 11.11.11.2:20913, send buffer size=212992, receive buffer size=212992
TRACE [main] org.jgroups.protocols.UDP: joined /224.0.75.75:7500 on ethwe
TRACE [main] org.jgroups.protocols.UDP: joined /224.0.75.75:7500 on eth0
TRACE [main] org.jgroups.protocols.UDP: joined /224.0.75.75:7500 on lo
TRACE [main] org.jgroups.protocols.UDP: sending msg to null, src=5f7daed5d167-29951, headers are PING: [PING: type=GET_MBRS_REQ, cluster=ExampleFederation, arg=5f7daed5d167-29951, view_id=, is_server=false, is_coord=false, logical_name=5f7daed5d167-29951, physical_addrs=11.11.11.2:33187], UDP: [channel_name=ExampleFederation]
TRACE [main] org.jgroups.protocols.UDP: looping back message [dst:
DEBUG [main] org.jgroups.protocols.pbcast.NAKACK2:
[5f7daed5d167-29951 setDigest()]
existing digest: []
new digest: 79e078919f2e-55593: [6 (6)], 5f7daed5d167-29951: [0 (0)]
resulting digest: 79e078919f2e-55593: [6 (6)], 5f7daed5d167-29951: [0 (0)]
DEBUG [main] org.jgroups.protocols.pbcast.GMS: 5f7daed5d167-29951: installing view [79e078919f2e-55593|1] [79e078919f2e-55593, 5f7daed5d167-29951]
TRACE [main] org.jgroups.protocols.pbcast.STABLE: 5f7daed5d167-29951: resetting digest from NAKACK: 5f7daed5d167-29951: [0], 79e078919f2e-55593: [6]
TRACE [Incoming,ExampleFederation,5f7daed5d167-29951] org.jgroups.protocols.UDP: received [dst:
DEBUG [main] org.jgroups.protocols.pbcast.FLUSH: 5f7daed5d167-29951: received RESUME, sending STOP_FLUSH to all
TRACE [main] org.jgroups.protocols.pbcast.NAKACK2: 5f7daed5d167-29951 sending 5f7daed5d167-29951#1
TRACE [main] org.jgroups.protocols.UDP: sending msg to null, src=5f7daed5d167-29951, headers are FLUSH: FLUSH[type=STOP_FLUSH,viewId=1], NAKACK2: [MSG, seqno=1], UDP: [channel_name=ExampleFederation]
TRACE [main] org.jgroups.protocols.UDP: looping back message [dst:
TRACE [Incoming,ExampleFederation,5f7daed5d167-29951] org.jgroups.protocols.pbcast.STABLE: 5f7daed5d167-29951: sending stability msg (in 1403 ms) 5f7daed5d167-29951: [0], 79e078919f2e-55593: [6]
TRACE [Incoming,ExampleFederation,5f7daed5d167-29951] org.jgroups.protocols.UDP: received [dst:
STACK TRACE OF YYY at the end:
ERROR [main] portico.lrc: org.jgroups.TimeoutException: TimeoutException
hla.rti1516e.exceptions.RTIinternalError: Unknown exception received from RTI (class org.jgroups.TimeoutException) for createFederationExecution(): TimeoutException
    at org.portico.impl.hla1516e.Rti1516eAmbassador.logException(Rti1516eAmbassador.java:5588)
    at org.portico.impl.hla1516e.Rti1516eAmbassador.createFederationExecution(Rti1516eAmbassador.java:349)
    at ieee1516e.ExampleFederate.runFederate(ExampleFederate.java:206)
    at ieee1516e.ExampleFederate.main(ExampleFederate.java:560)
Caused by: org.jgroups.TimeoutException: TimeoutException
    at org.jgroups.util.Promise._getResultWithTimeout(Promise.java:145)
    at org.jgroups.util.Promise.getResultWithTimeout(Promise.java:40)
    at org.jgroups.util.AckCollector.waitForAllAcks(AckCollector.java:93)
    at org.jgroups.protocols.RSVP$Entry.block(RSVP.java:287)
    at org.jgroups.protocols.RSVP.down(RSVP.java:118)
    at org.jgroups.protocols.pbcast.STABLE.down(STABLE.java:328)
    at org.jgroups.protocols.pbcast.GMS.down(GMS.java:965)
    at org.jgroups.protocols.FlowControl.down(FlowControl.java:351)
    at org.jgroups.protocols.FlowControl.down(FlowControl.java:351)
    at org.jgroups.protocols.FRAG2.down(FRAG2.java:147)
    at org.jgroups.protocols.pbcast.STATE_TRANSFER.down(STATE_TRANSFER.java:238)
    at org.jgroups.protocols.pbcast.FLUSH.down(FLUSH.java:312)
    at org.jgroups.stack.ProtocolStack.down(ProtocolStack.java:1025)
    at org.jgroups.JChannel.down(JChannel.java:729)
    at org.jgroups.JChannel.send(JChannel.java:445)
    at org.portico.bindings.jgroups.channel.FederationChannel.createFederation(FederationChannel.java:283)
    at org.portico.bindings.jgroups.JGroupsConnection.createFederation(JGroupsConnection.java:232)
    at org.portico.lrc.services.federation.handlers.outgoing.CreateFederationHandler.process(CreateFederationHandler.java:79)
    at org.portico.utils.messaging.MessageSink.process(MessageSink.java:187)
    at org.portico.impl.hla1516e.Impl1516eHelper.processMessage(Impl1516eHelper.java:99)
    at org.portico.impl.hla1516e.Rti1516eAmbassador.processMessage(Rti1516eAmbassador.java:5554)
    at org.portico.impl.hla1516e.Rti1516eAmbassador.createFederationExecution(Rti1516eAmbassador.java:310)
    ... 2 more
Any further updates on this issue? Did you get it resolved, @bergtwvd?
The issue is still open.
I was able to run the C++ example federate on different hosts using Weave.
I just checked as well.
ubuntu@docker-2A:~$ weave version
weave script 1.3.1
weave router 1.3.1
weave proxy 1.3.1
Yes it works now with this version!!! Great, and thanks for letting me know.
(Ps. what got fixed?)
I set up a test with the sample program installed in a Docker container (Dockerfile to be provided later).
Test environment:
Test:
A exits with a stack trace. Logfiles are attached.
The following test succeeds:
On container A: java -cp $RTI_HOME/lib/portico.jar org.jgroups.tests.McastReceiverTest -mcast_addr 228.10.10.10 -port 20913
On container B: java -cp $RTI_HOME/lib/portico.jar org.jgroups.tests.McastSenderTest -mcast_addr 228.10.10.10 -port 20913
Route table on A:
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.1.0.1        0.0.0.0         UG    0      0        0 eth0
10.1.0.0        *               255.255.0.0     U     0      0        0 eth0
10.10.0.0       *               255.255.255.0   U     0      0        0 ethwe
224.0.0.0       *               240.0.0.0       U     0      0        0 ethwe
Route table on B:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.2.0.1        0.0.0.0         UG    0      0        0 eth0
10.2.0.0        *               255.255.0.0     U     0      0        0 eth0
10.10.0.0       *               255.255.255.0   U     0      0        0 ethwe
224.0.0.0       *               240.0.0.0       U     0      0        0 ethwe
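Side note: the attached logs also contain JGRP00014 warnings because the OS caps the UDP socket buffers at roughly 212 KB. That is probably unrelated to the join failure, but raising the limits on the Docker hosts would look roughly like this (values sized to the 8 MB receive and 640 KB send buffers that JGroups asks for):

sudo sysctl -w net.core.rmem_max=8388608
sudo sysctl -w net.core.wmem_max=655360

To make this persistent, the same keys can be added to /etc/sysctl.conf.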