olijeffers0n / rustplus

Rust+ API Wrapper Written in Python for the Game: Rust
https://rplus.ollieee.xyz/
MIT License
102 stars 28 forks

Awesome Work! #41

Closed MillionthOdin16 closed 1 year ago

MillionthOdin16 commented 1 year ago

Nice job with the parsing and decoding of camera data. I don't know how you figured it all out haha. What are you using?

It's pretty crazy how much data is available from the cameras and how much functionality is possible now. Hopefully it doesn't get nerfed too soon xD

olijeffers0n commented 1 year ago

Hiya! Thanks a lot - it did take quite a while to get it all figured out (I think about a week?).

My development process for this was kind of twofold:

I do hope that they don't nerf it, and instead allow for some higher quality images! I have been working on improving the camera rendering speed, but it has been hard as I only have a CPU running Python, whereas this would ideally be done on the GPU :)
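As a rough illustration of the CPU-side point (and not the library's actual renderer), the per-ray shading step can at least be vectorised with numpy rather than looped over pixel by pixel in Python; every shape and value below is made up:

```python
# Illustrative only: a vectorised per-ray shading pass in numpy.
# The array shapes and the shading rule are assumptions, not rustplus internals.
import numpy as np

def shade_rays(distances: np.ndarray, max_distance: float) -> np.ndarray:
    """Map a frame of ray hit distances (H x W) to greyscale pixels in one pass."""
    d = np.clip(distances, 0.0, max_distance) / max_distance  # normalise to [0, 1]
    return ((1.0 - d) * 255).astype(np.uint8)                 # nearer hits render brighter

# Example: shade a fake 144x160 frame of random distances with a single call,
# instead of a Python loop over ~23k pixels.
frame = shade_rays(np.random.uniform(0, 100, size=(144, 160)), max_distance=100)
```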

Another thing I would like to see is the ability to watch several cameras at once - currently possible with multiple sockets, but I'd like it supported properly. Also, as I said, just some nicer graphics!

Feel free to head over to our discord if you wanna discuss this further, otherwise I will answer any more questions you have here!

Thanks again!

MillionthOdin16 commented 1 year ago

Oh nice! Will def join the Discord.

Look at the server code which you can decompile. This allows you to find out what all the information actually means.

I hadn't thought of this. I've been looking at the decompiled Android APKs, but it's definitely not easy to figure out. I can't tell if they're using a standard method of delivering images, or if the idea of sending a subset of 'camera rays' is something they wanted to implement uniquely. Speaking of image quality, the method you're using to compose the images out of a series of random pixel buffers reminds me a lot of how some thermal cameras generate their images by combining sets of random pixel samplings. I've been thinking about ways to get more out of the images since I saw how Rust+ renders images in a unique way.

Another thing I would like to see is the ability to watch several cameras at once - currently possible with multiple sockets, but I'd like it supported properly. Also, as I said, just some nicer graphics!

I've also been working on getting multiple cameras xD. By multiple sockets, I don't know if you mean the server will allow simultaneous connections from the same user on different cameras, or just rotating the subscribed cam in a round robin. My impression (I think from your documentation) was that the server will basically kick you off your previous cam once you start viewing a new cam stream. So my plan was to grab a couple of images from a cam, move to the next, and continue that in a reasonable loop.
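Roughly, the loop I have in mind is something like the sketch below, written against the rustplus `RustSocket` and camera-manager API as I understand it from the docs; the exact method names, credentials and camera IDs are all assumptions/placeholders:

```python
# Hedged sketch of a round-robin camera poll with rustplus.
# get_camera_manager()/get_frame() are assumed from the docs and may differ
# between versions; all IDs and credentials here are placeholders.
import asyncio
from rustplus import RustSocket

CAMERA_IDS = ["GATE_CAM", "ROOF_CAM", "TURRET_CAM"]  # hypothetical camera identifiers

async def poll_cameras(ip: str, port: str, steam_id: int, player_token: int) -> None:
    socket = RustSocket(ip, port, steam_id, player_token)
    await socket.connect()
    try:
        while True:
            for cam_id in CAMERA_IDS:
                # Subscribing to a new camera drops the previous one, so just
                # sit on each camera briefly, grab a frame, then move on.
                cam = await socket.get_camera_manager(cam_id)
                await asyncio.sleep(3)            # let some ray packets arrive
                frame = await cam.get_frame()     # assumed to return a PIL Image (or None)
                if frame is not None:
                    frame.save(f"{cam_id}.png")
    finally:
        await socket.disconnect()

# asyncio.run(poll_cameras("1.2.3.4", "28082", 76561198000000000, 123456789))
```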

The entity data paired with location and material data makes some pretty crazy things possible in terms of data collection and automation in game :) You could track specific players that are around base, move cameras to follow them, control auto turrets based off this data, and implement your own fire control at distances much greater than normal turret range. I'm not sure how much positional data is available, but if you can get world coordinates for the origin camera and any entities in view, we could basically map out the whole area in a way similar to lidar lol (which I'm sure you're aware of because of all the transformations you're doing in your code haha). I'm thinking about all this because your lib is the first one to map out and make this data accessible programmatically. It reminds me a lot of robotics xD

olijeffers0n commented 1 year ago

Yeah, you can open 5 sockets at the same time, which means you can technically have 5 camera streams at once without having to round-robin them.
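For the truly simultaneous case, that just means driving a separate socket per camera concurrently; a quick hedged sketch (same caveats as the earlier one about the exact rustplus method names):

```python
# Hedged sketch: one RustSocket per camera, up to five streams at once.
import asyncio
from rustplus import RustSocket

async def watch(ip: str, port: str, steam_id: int, player_token: int, cam_id: str) -> None:
    socket = RustSocket(ip, port, steam_id, player_token)
    await socket.connect()
    cam = await socket.get_camera_manager(cam_id)  # assumed API, as above
    while True:
        await asyncio.sleep(3)
        frame = await cam.get_frame()
        if frame is not None:
            frame.save(f"{cam_id}.png")

async def main() -> None:
    cam_ids = ["CAM1", "CAM2", "CAM3", "CAM4", "CAM5"]  # placeholders
    await asyncio.gather(*(watch("1.2.3.4", "28082", 76561198000000000, 123456789, c)
                           for c in cam_ids))

# asyncio.run(main())
```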

Yeah, the Rust+ way is with a series of raytraces that can then be decoded into the image data. They send a subset of each frame based on a seeded random object, so that's fun!
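As a toy analogue of that idea (not the actual Rust+ wire format), you can think of each packet as carrying the ray results for a deterministic, seeded-random slice of pixel indices, which the client folds into a persistent frame buffer:

```python
# Toy analogue of progressive frame fill from seeded random pixel subsets.
# This is NOT the real Rust+ protocol; it only mimics the shape of the idea.
import random
import numpy as np

WIDTH, HEIGHT = 160, 144          # illustrative frame size
SAMPLES_PER_PACKET = 2000

def sample_indices(seed: int, packet_number: int) -> np.ndarray:
    """Deterministically pick which pixel indices this packet carries."""
    rng = random.Random(seed * 1_000_003 + packet_number)
    return np.array(rng.sample(range(WIDTH * HEIGHT), SAMPLES_PER_PACKET))

buffer = np.zeros(WIDTH * HEIGHT, dtype=np.uint8)
for packet_number in range(10):
    idx = sample_indices(seed=1234, packet_number=packet_number)
    # A real packet would carry the ray results for exactly these indices;
    # here we fake them with random brightness values.
    buffer[idx] = np.random.randint(0, 256, size=idx.size, dtype=np.uint8)

frame = buffer.reshape(HEIGHT, WIDTH)  # the image fills in a bit more each packet
```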

I have thought about tracking players' global positions, but it is difficult because you need to essentially set up some sort of pairing process with the camera to get its global position. Even then, though, the camera axes are different: it is aligned so that the Z axis runs through the camera, X is perpendicular to that, and Y is up and down. This is done to simplify the camera operations inside the transformation matrices, but it means there is no easy way to convert from global space to camera space.
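In other words, the convention is Z through the lens (forward), X perpendicular to that (right), and Y up, so going back from camera space to world space is a rotation built from those basis vectors plus the camera's position. A small numpy sketch of just that step, with made-up numbers (the game doesn't hand you these vectors directly, which is the whole problem):

```python
# Sketch of camera-space -> world-space under the convention described above:
# Z runs through the camera (forward), X is perpendicular (right), Y is up.
# All values are illustrative; obtaining forward/up for a real CCTV is the hard part.
import numpy as np

def camera_to_world(point_cam: np.ndarray, cam_pos: np.ndarray,
                    forward: np.ndarray, up: np.ndarray) -> np.ndarray:
    """Transform one point from camera space into world space."""
    z = forward / np.linalg.norm(forward)           # camera Z axis
    x = np.cross(up, z); x /= np.linalg.norm(x)     # camera X axis
    y = np.cross(z, x)                              # camera Y axis
    rotation = np.column_stack((x, y, z))           # columns = camera axes in world space
    return rotation @ point_cam + cam_pos

# Example: a point 10 units straight ahead of a camera sitting at (100, 20, 300)
# and looking along world +X comes out at roughly (110, 20, 300).
p = camera_to_world(np.array([0.0, 0.0, 10.0]),
                    cam_pos=np.array([100.0, 20.0, 300.0]),
                    forward=np.array([1.0, 0.0, 0.0]),
                    up=np.array([0.0, 1.0, 0.0]))
```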

I did think about the lidar idea, but I would need to think REALLY hard about how that would work :) - talking about tracking pixels in 3D or whatever.
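In rough terms, the lidar idea would boil down to: if each ray gave you a direction and a hit distance, you could scale, rotate and translate every hit into world space and accumulate a point cloud over many frames and cameras. A purely illustrative sketch, reusing the same camera-to-world rotation as above:

```python
# Toy point-cloud step for the lidar idea: per-ray directions + hit distances
# become world-space points. All shapes and values are illustrative.
import numpy as np

def rays_to_points(directions: np.ndarray,   # (N, 3) unit ray directions in camera space
                   distances: np.ndarray,    # (N,) hit distance along each ray
                   rotation: np.ndarray,     # (3, 3) camera-to-world rotation
                   cam_pos: np.ndarray) -> np.ndarray:
    """Convert per-ray hits into world-space 3D points."""
    points_cam = directions * distances[:, None]   # scale each direction by its distance
    return points_cam @ rotation.T + cam_pos       # rotate into world space, then translate

# Accumulating rays_to_points(...) outputs across frames (and cameras) would
# build up the lidar-style map of the area.
```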