This isn’t really what the issue tracker is for; you should be using the discussion forums, or IRC/Discord/Matrix. But...
1- Connections are established first by contacting known bootstrap peers that are hard-coded into your application. Those servers let your app know who else is online that it can connect to; then the node in your app organises itself into an appropriate part of the DHT network and maintains connections with other peers who are responsible for keeping track of information whose hashes are similar to the information your node is responsible for. If your app is behind a NAT, it will do its best to punch a hole through the NAT, and if that doesn’t work, as a last resort, some nodes volunteer to be “relays” (a feature that’s off by default) which forward messages between nodes that cannot open themselves up to the internet properly.
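You don’t normally have to touch any of this in app code, but if you want to see the result of that bootstrapping and DHT housekeeping, something like this minimal sketch (assuming the js `ipfs-http-client` library talking to a local daemon on the default API port) lists the connections the node is currently holding:

```ts
import { create } from 'ipfs-http-client'

// Assumption: a local IPFS daemon is listening on the default API port.
const ipfs = create({ url: 'http://127.0.0.1:5001' })

// After the node has contacted its hard-coded bootstrap peers and joined
// the DHT, this shows the peers it is currently connected to.
const peers = await ipfs.swarm.peers()
for (const { peer, addr } of peers) {
  console.log(peer.toString(), addr.toString())
}
```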
3a- You don’t need to connect to any remote system manually. When you add something to the IPFS Mutable Filesystem, IPFS gives you a “CID”, a hash that describes the data in the file or directory. On another node, if you attempt to access that hash, the node will navigate the DHT to find peers who are responsible for keeping track of who has that data, then connect to those peers and download parts of the files as needed into its local cache.
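For example, a sketch with the js `ipfs-http-client` against a local daemon (the file name and content are just placeholders):

```ts
import { create } from 'ipfs-http-client'

// Assumption: a local IPFS daemon on the default API port.
const ipfs = create({ url: 'http://127.0.0.1:5001' })

// Publishing side: add some content and note the CID it hashes to.
const { cid } = await ipfs.add({ path: 'wonderwall.tab', content: 'e|--3--|' })
console.log('share this CID:', cid.toString())

// Any other node, given only that CID, can fetch the content. The node
// walks the DHT to find providers and caches the blocks locally as it reads.
const chunks: Uint8Array[] = []
for await (const chunk of ipfs.cat(cid)) {
  chunks.push(chunk)
}
console.log(Buffer.concat(chunks).toString('utf8'))
```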
3b- Directories don’t have names unless they’re inside another directory, but you can wrap everything in a directory so that everything has a name. Content is addressed by its hash, or by its path within a parent directory’s hash, or by its path in the mutable filesystem.
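As a sketch of the “wrap it in a directory” idea (again with the js `ipfs-http-client`; the names are placeholders):

```ts
import { create } from 'ipfs-http-client'
import type { CID } from 'multiformats/cid'

// Assumption: a local IPFS daemon on the default API port.
const ipfs = create({ url: 'http://127.0.0.1:5001' })

// Add two tabs wrapped in an enclosing directory so they keep their names.
let rootCid: CID | undefined
for await (const entry of ipfs.addAll(
  [
    { path: 'setlist-1/wonderwall.tab', content: 'e|--3--|' },
    { path: 'setlist-1/stairway.tab', content: 'e|--5--|' },
  ],
  { wrapWithDirectory: true }
)) {
  rootCid = entry.cid // the wrapping directory is the last entry emitted
}

// Each file is now reachable by name, relative to the wrapping directory:
//   /ipfs/<rootCid>/setlist-1/wonderwall.tab
console.log('root directory CID:', rootCid?.toString())
```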
3c- I don’t understand this question. Do you want a list of every file and folder on IPFS globally? Your computer probably does not have enough storage to process that.
3d- You can use the MFS API to manipulate something that behaves like a normal filesystem, where you can make changes like that, but it will change the hashes of the content you changed and of every parent directory containing it, and you’ll need to distribute that change somehow (perhaps with DNSLink, IPNS, a blockchain, or by announcing it through a PubSub room to interested peers).
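A minimal sketch of that flow (js `ipfs-http-client`; the paths are made up, and republishing the node’s default IPNS name is just one of the distribution options mentioned above):

```ts
import { create } from 'ipfs-http-client'

// Assumption: a local IPFS daemon on the default API port.
const ipfs = create({ url: 'http://127.0.0.1:5001' })

// Edit the mutable filesystem as if it were a normal directory tree.
await ipfs.files.mkdir('/tabs/setlist-1', { parents: true })
await ipfs.files.write('/tabs/setlist-1/wonderwall.tab', 'e|--3--|', {
  create: true,
  parents: true,
})

// Every edit changes the hash of the file and of every parent directory,
// so grab the new root CID...
const { cid: newRoot } = await ipfs.files.stat('/tabs')

// ...and distribute it, e.g. by republishing the node's IPNS name.
const { name, value } = await ipfs.name.publish(`/ipfs/${newRoot}`)
console.log(`IPNS name ${name} now points at ${value}`)
```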
3e- You cannot delete things from other people’s remote computers, but you can design your app so that when you tell it there’s a new version, it stops pinning the old content and pins the new directory structure with the updated files. You can also ask IPFS to run the GC (Garbage Collector) to delete local copies of things that aren’t currently pinned, reducing local storage use when you unpin content that you won’t need again.
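Sketched out, the “swap pins and reclaim space” step could look like this (js `ipfs-http-client`; the function and CID names are just placeholders):

```ts
import { create } from 'ipfs-http-client'
import type { CID } from 'multiformats/cid'

// Assumption: a local IPFS daemon on the default API port.
const ipfs = create({ url: 'http://127.0.0.1:5001' })

// oldRoot / newRoot are whatever CIDs your app learned about through its
// update channel (IPNS, PubSub, ...).
async function switchToNewVersion(oldRoot: CID, newRoot: CID) {
  await ipfs.pin.add(newRoot) // fetch and keep the new directory tree
  await ipfs.pin.rm(oldRoot)  // stop protecting the old one

  // Optionally run the repo GC to drop unpinned blocks from the local cache.
  for await (const res of ipfs.repo.gc()) {
    if (res.cid) console.log('removed', res.cid.toString())
  }
}
```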
4- You can organise this however you like, but a reasonable design might be to have a root directory that contains a subdirectory for each setlist, with the tabs inside each of those subdirectories; then you only have to distribute that root directory’s hash to your clients for them to access it and keep up to date. There is no limit to how deeply you can nest directories to make the structures you need, and if you want to do something more complicated than a simple filesystem-like structure, you can use the DAG API to store objects similar to JSON, where the values can be what you’d expect in JSON, or links to files, folders, and other objects. If you pin a DAG node recursively, the IPFS node will automatically take care of downloading it, everything it references, and everything those reference, and so on, so you don’t need to do any work tracking the full list of resources that need to be downloaded to local storage.
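As a sketch of that DAG-API approach (js `ipfs-http-client`; the band name, date, and setlist contents are placeholders, and the two `ipfs.add` calls just stand in for setlist directories you added earlier):

```ts
import { create } from 'ipfs-http-client'

// Assumption: a local IPFS daemon on the default API port.
const ipfs = create({ url: 'http://127.0.0.1:5001' })

// Stand-ins for the CIDs of setlist directories added earlier.
const setlist1 = (await ipfs.add({ path: 'setlist-1.txt', content: 'placeholder' })).cid
const setlist2 = (await ipfs.add({ path: 'setlist-2.txt', content: 'placeholder' })).cid

// A JSON-like index object; CID values are stored as real IPLD links.
const indexCid = await ipfs.dag.put({
  band: 'Example Band',
  updated: '2024-01-01',
  setlists: { 'setlist-1': setlist1, 'setlist-2': setlist2 },
})

// A recursive pin fetches the index and everything it links to.
await ipfs.pin.add(indexCid, { recursive: true })

// Paths can traverse the object and follow the links into the content.
const { value } = await ipfs.dag.get(indexCid, { path: '/setlists/setlist-1' })
console.log(value)
```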