chopmann opened 3 years ago
TTN Mapper is starting to support multiple networks. The feature will allow end users to view different networks' coverage as different layers on the map. See https://twitter.com/ttnmapper/status/1374275806969667585?s=20. Exactly which networks will be shown on the public website, and which will be behind a login, is still under consideration.
The main thing to take from the above is that networks with unique coverage areas need to be identified. We can easily distinguish between a Things Stack network and a ChirpStack network, as their JSON formats are different and they will therefore use different API endpoints for the webhooks.
Distinguishing between different ChirpStack instances is very difficult, or actually impossible. Most instances will use the experimental NetID block (000000-000001), so we can not use that on its own. The LoRaWAN Backend Interfaces apparently introduced an NSID to identify different networks using the same NetID block; NSID@NetID should therefore be globally unique. But because ChirpStack does not have a single controlling entity, there is no way to guarantee that users use unique NSID values.
A solution to this is to generate a UUID when the ChirpStack instance starts up for the first time, persist it, and use that as the NSID/DeploymentID. The DeploymentID@NetID should then be globally unique.
For 3rd party systems like TTN Mapper to know which network the data originates from, we need to pass the DeploymentID or DeploymentID@NetID in every uplink (webhook, MQTT).
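The idea above could be sketched roughly as follows. This is a minimal illustration, not ChirpStack's actual implementation: the file path, the network_id field name, and the payload shape are all assumptions for the example.

```python
import uuid
from pathlib import Path

DEPLOYMENT_ID_FILE = Path("deployment_id")  # hypothetical persistence location
NET_ID = "000001"  # experimental NetID block used by most instances

def get_deployment_id() -> str:
    """Generate a UUID on first start and persist it, so the same
    DeploymentID is reused on every subsequent start."""
    if DEPLOYMENT_ID_FILE.exists():
        return DEPLOYMENT_ID_FILE.read_text().strip()
    deployment_id = str(uuid.uuid4())
    DEPLOYMENT_ID_FILE.write_text(deployment_id)
    return deployment_id

def tag_uplink(uplink: dict) -> dict:
    """Attach DeploymentID@NetID to an uplink before publishing it
    via webhook/MQTT, so 3rd party systems can identify the network."""
    uplink["network_id"] = f"{get_deployment_id()}@{NET_ID}"
    return uplink
```

Because the UUID is persisted rather than derived from configuration, a user cannot accidentally collide with another deployment, and regenerating it would only change the coverage tag, not frame routing.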
More things to keep in mind:
This same issue is discussed for the TTI stack here: https://github.com/TheThingsNetwork/lorawan-stack/issues/4076
@brocaar any comments on this?
Yes, I can see the use case of at least exposing the NetID field in the uplink messages for now. I'm a bit hesitant to add a Deployment ID at this point, as I'm planning to make some other changes which might overlap with this. I want to avoid adding something now which I will then remove again in the near future. @jpmeijers, would a unique value per organization work, or should it be server-wide? Note that it is possible to make gateways private, so the coverage might be per organization, not per server instance.
The LoRaWAN Backend Interfaces apparently introduced a NSID to identify different networks using the same NetID block.
@jpmeijers do you have a reference to this?
Note, https://github.com/brocaar/chirpstack-api/blob/829ff1994dd6c3a2c69a01462c8e8986ef1bdbda/rust/proto/chirpstack-api/gw/gw.proto#L175 might not be the best place for this. The gw.proto file defines the messages for the GW<>NS communication, and I would like to keep this clean.
I believe this is a better place for this: https://github.com/brocaar/chirpstack-api/blob/829ff1994dd6c3a2c69a01462c8e8986ef1bdbda/rust/proto/chirpstack-api/as/as.proto#L94.
Then the same info can be exposed in the uplink integration message.
The LoRaWAN Backend Interfaces apparently introduced a NSID to identify different networks using the same NetID block.
@jpmeijers do you have a reference to this?
Only @johanstokking's comment on https://github.com/TheThingsNetwork/lorawan-stack/issues/4076, second-to-last paragraph, point 2:
That we use the terms tenant_id and cluster_id, and not a more opaque ns_id although that is more LoRaWAN Backend Interfaces 1.1 compliant
coverage might be per organization, not per server-instance.
Yes, very good point. On the Mapper's side it should be possible to merge multiple servers' coverage into an organisation's coverage, but I can't split an organisation's coverage into its separate network instances. So tagging/identifying a server instance is more versatile than identifying the organisation.
"Private" gateways will still contribute coverage areas (heatmap), but will not have markers indicating their locations on the map. Remember here that private networks (all ChirpStack networks) will not be shown publicly on TTN Mapper. Users will either get a unique URL, or need to sign in. This is still a work in progress on the Mapper's side.
I'm a bit hesitant to add a Deployment ID at this point as I'm planning to make some other changes which potentially might overlap with this.
Fair enough. If we add only the NetID for now, that will get us closer. In the meantime we are adding a workaround: a header that identifies the originating network. This is OK for now, and the value can be changed in the database at a later stage, but because a user can change it at any point, it is not reliable:
https://github.com/ttnmapper/ingress-api/pull/7
Evolution of identifying the ChirpStack coverage on TTN Mapper's side:
1) NS_CHIRP://my.network.name - where my.network.name is the value passed in the TTNMAPPERORG-NETWORK header.
2) NS_CHIRP://my.network.name@000001 - where 000001 is a NetID. This is what we'll use as soon as the NetID is available.
3) NS_CHIRP://DeploymentID@000001 - as soon as some identifier for the network/org is available, we'll use that rather than the value from the header.
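The three stages above amount to preferring the most reliable identifier that happens to be available. A small sketch of that fallback logic (the function name and parameters are my own, not TTN Mapper's actual code):

```python
def chirpstack_network_id(header_value=None, net_id=None, deployment_id=None):
    """Build the NS_CHIRP:// network identifier, preferring
    DeploymentID@NetID, then header@NetID, then the raw header value."""
    if deployment_id and net_id:
        return f"NS_CHIRP://{deployment_id}@{net_id}"   # stage 3
    if header_value and net_id:
        return f"NS_CHIRP://{header_value}@{net_id}"    # stage 2
    return f"NS_CHIRP://{header_value}"                 # stage 1
```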
This is how it looks as of now for my network in Wolfsburg, and with the TTNMAPPERORG-NETWORK header added.
Coming back to this issue now, because I'm facing this again.
A big ChirpStack user has one or two instances running, with multiple tenants per instance. The data that the Mapper receives from ChirpStack identifies the Tenant, but there is no way to identify the server instance/organisation. In this specific case tenants share coverage, so tagging coverage by the tenant is not ideal. I would rather tag the coverage by the NetID or server instance, or a combination of the Tenant and Server/NetID.
What is the likelihood of exposing the NetID in the Up event? Or alternatively a UUID of the server instance, similar to the Tenant ID.
For clarity, coverage is shared between Tenants. See docs.
We had the idea to use the DevAddr to calculate the NetID and then tag coverage by the NetID. But reading this, right at the end it says:
multiple NetIDs are likely to map to the same NwkID value. Section 11.3 Passive Roaming describes how the fNS tries multiple NSs to find the sNS of the End-Device.
That means we can not uniquely identify a network based on the DevAddr.
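The ambiguity is easy to see with the original LoRaWAN 1.0 addressing scheme, in which the 7-bit NwkID is both the 7 MSBs of the 32-bit DevAddr and the 7 LSBs of the 24-bit NetID (a sketch; the typed NetID prefixes of later specifications make the bit layouts more involved, but the collision remains):

```python
def nwkid_from_devaddr(dev_addr: int) -> int:
    # LoRaWAN 1.0: DevAddr = NwkID (7 MSBs) | NwkAddr (25 LSBs)
    return (dev_addr >> 25) & 0x7F

def nwkid_from_netid(net_id: int) -> int:
    # LoRaWAN 1.0: NwkID is the 7 LSBs of the 24-bit NetID
    return net_id & 0x7F

# Two distinct NetIDs collapse to the same NwkID, so a DevAddr
# alone cannot uniquely identify the network:
assert nwkid_from_netid(0x000013) == nwkid_from_netid(0x000093)
```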
Summary
What is the use-case?
Identifying networks across services. For example enabling a ttn-mapper integration as discussed with @jpmeijers
Implementation description
We could add it to every frame forwarded to the application-server. https://github.com/brocaar/chirpstack-api/blob/829ff1994dd6c3a2c69a01462c8e8986ef1bdbda/rust/proto/chirpstack-api/gw/gw.proto#L175
Frames coming from the gateway would be missing the property, but on processing/forwarding the frame could get the NetID of the NS handling it, a bit like the roaming code does it.
In any case, as most deployments (I assume) are using the default config, it would be useful to have a UUID identifying the NS deployment. This UUID can be generated on first start, like the admin user in the AS, and be unique to that deployment. We would then have the NetID and a DeploymentID. Changing this DeploymentID should not have any impact on the processing of frames/routing.
Can you implement this by yourself and make a pull request?
Probably yes.