markqvist / LXMF

A universal, distributed and secure messaging protocol for Reticulum
MIT License

Sending LXMF message to Group Destination fails because single_packet_content_limit is not initialized #4

Closed. HarlekinSimplex closed this issue 2 years ago.

HarlekinSimplex commented 2 years ago

Hi Mark,

and another issue. This time it is a show-stopper.

I tried to send a message to a GROUP destination using LXMF. LXMF terminates at line 240:

Traceback (most recent call last):
  File "D:/JetBrains/PyCharmProjects/nexus/bsbdock.nexus_context/nexus_server/test2.py", line 62, in <module>
    lxrouter.handle_outbound(lxmessage)
  File "D:\JetBrains\PyCharmProjects\nexus\venv\lib\site-packages\LXMF\LXMF.py", line 1102, in handle_outbound
    lxmessage.pack()
  File "D:\JetBrains\PyCharmProjects\nexus\venv\lib\site-packages\LXMF\LXMF.py", line 240, in pack
    if content_size > single_packet_content_limit:
UnboundLocalError: local variable 'single_packet_content_limit' referenced before assignment

The reason is obvious: with OPPORTUNISTIC delivery to a GROUP destination, neither branch of the snippet below runs, so single_packet_content_limit is never initialized, neither within the snippet nor earlier in the code.

            if self.desired_method == LXMessage.OPPORTUNISTIC:
                if self.__destination.type == RNS.Destination.SINGLE:
                    single_packet_content_limit = LXMessage.ENCRYPTED_PACKET_MAX_CONTENT
                elif self.__destination.type == RNS.Destination.PLAIN:
                    single_packet_content_limit = LXMessage.PLAIN_PACKET_MAX_CONTENT

                if content_size > single_packet_content_limit:
                    raise TypeError("LXMessage desired opportunistic delivery method, but content exceeds single-packet size.")
                else:
                    self.method = LXMessage.OPPORTUNISTIC
                    self.representation = LXMessage.PACKET
                    self.__delivery_destination = self.__destination
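
A guard like the following would at least fail with a clear error instead of an UnboundLocalError (the else branch and its message are just my suggestion, not existing code):

            if self.desired_method == LXMessage.OPPORTUNISTIC:
                if self.__destination.type == RNS.Destination.SINGLE:
                    single_packet_content_limit = LXMessage.ENCRYPTED_PACKET_MAX_CONTENT
                elif self.__destination.type == RNS.Destination.PLAIN:
                    single_packet_content_limit = LXMessage.PLAIN_PACKET_MAX_CONTENT
                else:
                    # Suggested guard: destination types without a defined
                    # single-packet content limit (e.g. GROUP) cannot use
                    # opportunistic delivery
                    raise TypeError("LXMessage desired opportunistic delivery method, but destination type has no single-packet content limit.")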

Using DIRECT instead gets past that part, but no path is found at all during outbound handling until the maximum delivery attempts are used up.

I tried to announce the GROUP destination(s) to publish the destination hashes so a path could be registered. However, LXMF told me that GROUP targets cannot be announced.

So that essentially means that, even with the above bug fixed, GROUP targets may not work with LXMessages at all, at least for now. Right?

Here is my little test program to show what I tried. Maybe there is still something fundamentally wrong with my understanding. Any suggestions are appreciated.

import os
import RNS
import LXMF
import time

APP_NAME = "myapp"
ASPECT = "cockpit"

# LXMF storage
LXMF_STORAGE_PATH = os.path.expanduser("~") + "/.myapp/lxmf"

RNS.Reticulum()

if not os.path.isdir(LXMF_STORAGE_PATH):
    # Create storage path
    os.makedirs(LXMF_STORAGE_PATH)
    # Log that storage directory was created
    RNS.log("Storage path was created")
# Log storage path
RNS.log("Storage path is " + LXMF_STORAGE_PATH)

# Pre-shared symmetric key for the GROUP destinations
key = b'\xab/\x91\x02t\xae\xe5\xa1-\xf3\xc9\xf8\xcd\x06\x0b\\\xfa\xcddU\t\xc3O\xa5\x93gH\xe61\xad\xa4 '

# Two inbound GROUP destinations loaded with the shared symmetric key
destination_to1 = RNS.Destination(
    None, RNS.Destination.IN, RNS.Destination.GROUP, APP_NAME, ASPECT
)
destination_to1.load_private_key(key)

destination_to2 = RNS.Destination(
    None, RNS.Destination.IN, RNS.Destination.GROUP, APP_NAME, ASPECT
)
destination_to2.load_private_key(key)

# Outbound SINGLE destination with a fresh identity, used as the message source
destination_from = RNS.Destination(
    RNS.Identity(), RNS.Destination.OUT, RNS.Destination.SINGLE, APP_NAME, ASPECT
)

# Announcing the GROUP destinations fails; GROUP targets cannot be announced
#destination_to1.announce()
#destination_to2.announce()
destination_from.announce()

RNS.log("Destination To1 is " + str(destination_to1))
RNS.log("Destination To1 Hash is " + RNS.prettyhexrep(destination_to1.hash))
RNS.log("Destination To2 is " + str(destination_to2))
RNS.log("Destination To2 Hash is " + RNS.prettyhexrep(destination_to2.hash))
RNS.log("Destination From is " + str(destination_from))
RNS.log("Destination From Hash is " + RNS.prettyhexrep(destination_from.hash))

message_text = "Test Content"
message_title = "Test Title"

lxmessage = LXMF.LXMessage(
    destination=destination_to1,
    source=destination_from,
    content=message_text,
    title=message_title,
    desired_method=LXMF.LXMessage.DIRECT
    # desired_method=LXMF.LXMessage.OPPORTUNISTIC
)

lxrouter = LXMF.LXMRouter(storagepath=LXMF_STORAGE_PATH)
lxrouter.handle_outbound(lxmessage)

while True:
    time.sleep(1)

Best Stephan

markqvist commented 2 years ago

This is working as intended ;) Until globally routed multicast is implemented in Reticulum, the only way to send LXMs to GROUP destinations is via propagation nodes:

lxmessage = LXMF.LXMessage(
    destination=destination_to1,
    source=destination_from,
    content=message_text,
    title=message_title,
    desired_method=LXMF.LXMessage.PROPAGATED
)

lxrouter = LXMF.LXMRouter(storagepath=LXMF_STORAGE_PATH)
lxrouter.set_outbound_propagation_node(bytes.fromhex("3308bc1aa72e9761fca2"))
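
The message is then handed to the router as in your test program:

lxrouter.handle_outbound(lxmessage)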

Produces the following result:

[2022-04-13 11:57:02] [Verbose] Identity keys created for <ebf66a2830d1a154f142>
[2022-04-13 11:57:02] [Debug] Starting outbound processing for <LXMessage 5117036bf5cc31dadded4ab40ea31390c4c74f3da6eb900eb7053d11926c9c60> to <df98e7ddf0e79124a64a>
[2022-04-13 11:57:02] [Debug] Attempting propagated delivery for <LXMessage 5117036bf5cc31dadded4ab40ea31390c4c74f3da6eb900eb7053d11926c9c60> to <df98e7ddf0e79124a64a>
[2022-04-13 11:57:02] [Debug] No path known for propagation attempt 1 to <3308bc1aa72e9761fca2>. Requesting path...
[2022-04-13 11:57:02] [Debug] Validating announce from <ea75f96ea04e159c7ba2>
[2022-04-13 11:57:02] [Debug] Stored valid announce from <ea75f96ea04e159c7ba2>
[2022-04-13 11:57:02] [Verbose] Path to <ea75f96ea04e159c7ba2> is now 0 hops away via <8d82eb8cc5d4de1a701c> on LocalInterface[37428]
[2022-04-13 11:57:03] [Debug] Validating announce from <3308bc1aa72e9761fca2>
[2022-04-13 11:57:03] [Debug] Stored valid announce from <3308bc1aa72e9761fca2>
[2022-04-13 11:57:03] [Verbose] Path to <3308bc1aa72e9761fca2> is now 2 hops away via <8d82eb8cc5d4de1a701c> on LocalInterface[37428]
[2022-04-13 11:57:06] [Debug] Validating announce from <3dc65a05105c7d7dab35>
[2022-04-13 11:57:06] [Debug] Stored valid announce from <3dc65a05105c7d7dab35>
[2022-04-13 11:57:06] [Verbose] Path to <3dc65a05105c7d7dab35> is now 3 hops away via <8d82eb8cc5d4de1a701c> on LocalInterface[37428]
[2022-04-13 11:57:07] [Debug] Starting outbound processing for <LXMessage 5117036bf5cc31dadded4ab40ea31390c4c74f3da6eb900eb7053d11926c9c60> to <df98e7ddf0e79124a64a>
[2022-04-13 11:57:07] [Debug] Attempting propagated delivery for <LXMessage 5117036bf5cc31dadded4ab40ea31390c4c74f3da6eb900eb7053d11926c9c60> to <df98e7ddf0e79124a64a>
[2022-04-13 11:57:07] [Debug] Establishing link to <3308bc1aa72e9761fca2> for propagation attempt 2 to <df98e7ddf0e79124a64a>
[2022-04-13 11:57:07] [Debug] Registering link <559838619cbba5a6e793>
[2022-04-13 11:57:07] [Debug] Link request <559838619cbba5a6e793> sent to <lxmf.propagation.364d6282ccce0511dd40/3308bc1aa72e9761fca2>
[2022-04-13 11:57:07] [Debug] Activating link <559838619cbba5a6e793>
[2022-04-13 11:57:07] [Verbose] Link <559838619cbba5a6e793> established with <lxmf.propagation.364d6282ccce0511dd40/3308bc1aa72e9761fca2>, RTT is 0.15389752388000488
[2022-04-13 11:57:07] [Debug] Starting outbound processing for <LXMessage 5117036bf5cc31dadded4ab40ea31390c4c74f3da6eb900eb7053d11926c9c60> to <df98e7ddf0e79124a64a>
[2022-04-13 11:57:07] [Debug] Attempting propagated delivery for <LXMessage 5117036bf5cc31dadded4ab40ea31390c4c74f3da6eb900eb7053d11926c9c60> to <df98e7ddf0e79124a64a>
[2022-04-13 11:57:07] [Debug] Starting propagation transfer of <LXMessage 5117036bf5cc31dadded4ab40ea31390c4c74f3da6eb900eb7053d11926c9c60> to <df98e7ddf0e79124a64a> via <3308bc1aa72e9761fca2>
[2022-04-13 11:57:07] [Debug] Received propagation success notification for <LXMessage 5117036bf5cc31dadded4ab40ea31390c4c74f3da6eb900eb7053d11926c9c60>
[2022-04-13 11:57:12] [Debug] Propagation has occurred for <LXMessage 5117036bf5cc31dadded4ab40ea31390c4c74f3da6eb900eb7053d11926c9c60>, removing from outbound queue

This is all more or less impossible to figure out without the LXMF documentation being published, so I'm impressed you made it that far ;)

In general there are a lot of things related to group messaging that are not completely ready yet, and that is why you are running into all those problems and the other errors you reported.

Give it a month or so, and it will be there, with a well documented API. Until then it's gonna be a bit of rough sailing, and things might very well change API-wise during that time ;)

HarlekinSimplex commented 2 years ago

Thx for the kind words.

And yes, it was rough sailing through dozens of hours of code analysis and experimentation. I would love to see group destinations come alive over multi-hop routes as well. I guess that would be very helpful to reduce complexity on my side. I remember that I saw the propagation option in the code, but did not go down that rabbit hole at the time. Too many frustrating failures had accumulated that night already.

For now I use DIRECT transport for distributing SINGLE-type messages with my global multicast Reticulum Nexus Server :-). The global network topology concept I use to limit announcement traffic and redundant n:m message distribution is established and managed by a special server I call Nexus. Based on that server there is already an Android messenger app as well, which uses the server to post public broadcast messages that are served and shared (federated) by the Nexus server topology.

It's on Git as well, and if you like I can share the basic concepts of that approach with you. I actually tried to replace the RNS Packet based distribution of Nexus with an LXMF based distribution, because of the size limitations of RNS Packets. During the analysis of LXMF I tried to use GROUP targets to optimize the transport and to further reduce the complexity of my Nexus server, but without success.

By now I have succeeded to roughly 90% in using the LXMF SINGLE/DIRECT transport for large (multipart) packets. The last implementation step now, after understanding and implementing the receiving end of a Resource transport via an RNS Link, is to digest the received resource (file) into a proper Nexus Message and feed that one into the global broadcast distribution mechanism. I guess that will be done within the next 1-2 nights. A functional release of the overhauled Nexus server is then due by Monday or thereabouts.
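
For reference, the receiving end roughly follows the standard RNS Link/Resource callback pattern, something like this minimal sketch (the Nexus-specific digestion is only indicated by a comment):

import RNS

def resource_concluded(resource):
    # Called by RNS when an incoming resource transfer has concluded
    if resource.status == RNS.Resource.COMPLETE:
        data = resource.data.read()  # resource.data is a file-like object
        RNS.log("Received resource with " + str(len(data)) + " bytes")
        # ... digest the received file into a Nexus Message here ...
    else:
        RNS.log("Resource transfer failed")

def link_established(link):
    # Accept all incoming resources on this link and register the callback
    link.set_resource_strategy(RNS.Link.ACCEPT_ALL)
    link.set_resource_concluded_callback(resource_concluded)

# Registered on the inbound destination:
# destination.set_link_established_callback(link_established)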

The next goal would be to migrate the whole thing to a Django-based architecture, to enable user login in the Android app and web client and to start the implementation of group messaging and 1:1 encrypted messaging.

And yes, this Nexus beast is at best alpha software, though it can already be used for SMS-like messages. If you would like to have a look at it, you are welcome to do so.

An endpoint of the Nexus test network serving the Nexus WebApp can be found at https(:)//nexus(.)deltamatrix(.)de and the current Android app can be downloaded there at /download/NexusMessenger_v1.3.0.3.apk as well. The APK and the WebApp are functionally identical, because the app is developed using FlutterFlow and built using Android Studio with Flutter/Dart cross-platform development for Web, Android and iOS.

The Nexus server can be run natively using Python or by simply deploying the bsbdock/nexus Docker container I share on DockerHub as well. This container currently supports the AMD64 and ARMv7 architectures, meaning it can be deployed easily on hosts (like my deltamatrix.de) or on RPis and similar ARM devices that have enough power to reasonably run Docker. If there are any questions regarding that project, don't hesitate to give me a ping.

Cheers Stephan