Open Cu3PO42 opened 1 week ago
@Cu3PO42:
Thank you for the nice PR, and you have done some really impressive work. You are definitely on the right track with both your ideas and the changes you propose.
I have been a bit reluctant about using SVG instead of basic primitive 2D objects, but I think you have a good point. I agree that most platforms have good support for SVG. We should be able to support it in the UI and the current Bot API for Java and C#. Later on, we can add support for Web (JavaScript/TypeScript/WebAssembly?) and perhaps Python. I guess all of those have great support for SVG.
And the debug painting is optional, so if some platform or language does not support it, then the Bot API for that platform or language might not be able to support SVG easily. The worst-case scenario is to let the bots output SVG directly as a plain SVG string. So, I am not worried about this part.
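As a rough illustration of that fallback, a bot on a platform without a Graphics-like API could assemble its debug graphics as a raw SVG string by hand (a minimal sketch; the element layout is just an example, not a prescribed format):

```java
// Minimal sketch of the "plain SVG string" fallback described above.
public class PlainSvgDebug {
    public static void main(String[] args) {
        String debugGraphics = """
                <svg width="800" height="600" xmlns="http://www.w3.org/2000/svg">
                  <rect x="10" y="10" width="50" height="50" fill="rgba(255,0,0,0.5)"/>
                  <circle cx="400" cy="300" r="20" fill="none" stroke="blue"/>
                </svg>
                """;
        // This string would then be sent as the debugGraphics payload.
        System.out.println(debugGraphics);
    }
}
```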
Your solution of using the drawing context (`Graphics2D`) would also make it easier to port this feature to the Robocode Bridge.
Regarding the items to be tackled, I will answer those in the following.
Yes, we need a switch to enable/disable graphical debugging.
1) It should be possible to enable/disable it per bot in the Bot console, like in the original Robocode, as it would spam the battle view otherwise, and this lets us decide which bots to see debug information from. We should mark in the Bot console whether a bot supports graphical debugging (trivial when it is sending something in `debugGraphics`).
2) Like (1), we should have a global option to disable/enable debug painting for the entire UI. This could be done by adding an additional option to the existing Debug Options. Or perhaps add a View Options menu, where the user can enable/disable multiple items like radar scans etc.
3) The server might also disallow the debugging information and announce this via the Server Handshake. The reason could be to save traffic over the network. Here we might set an option from the UI if we don't want the bots to send anything in `debugGraphics`. And if they do anyway, they will be killed by the server, and the Bot API will receive an error.
Yes, every new feature (and bugfix) needs to be applied to every Bot API. Luckily, we only have 2 at the moment.
I guess we can use `System.Drawing.Graphics` on .NET the same way as we use `Graphics2D` for Java.
Here we could use Svg.Skia / SkiaSharp and do something very similar to what you have done for the Java version. That is, keep it close to a 1:1 mapping, just done in the .NET / C# way.
I am happy that you brought up this topic instead of just ignoring it. :+1:
I ran into the exact same issue when implementing the graphical debugging feature for the original Robocode. So, I invented `MirroredGraphics`.

`MirroredGraphics` is used something like this:

```java
mirroredGraphics.bind(g, battleField.getHeight()); // we need the view height for the mirror transformation
// ...
// use mirroredGraphics for painting, e.g. with paint(Graphics2D g)
// ...
mirroredGraphics.release(); // restores the original graphics state of the input Graphics2D object
```
The `drawRobotPaint()` method from the original Robocode is a good example of how it is used.
Also note that another class, `GraphicsState`, is used for saving and restoring the current state of the original `Graphics2D` instance.
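For reference, a minimal sketch of the kind of mirror transform such helpers apply under the hood, using only standard `Graphics2D` calls (this is an illustration, not the actual `MirroredGraphics`/`GraphicsState` implementation):

```java
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

public class MirrorTransformDemo {
    public static void main(String[] args) {
        BufferedImage canvas = new BufferedImage(800, 600, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = canvas.createGraphics();

        AffineTransform saved = g.getTransform();  // remember the original transform

        // Flip the y-axis so that y grows upwards, as on the battlefield:
        g.translate(0, canvas.getHeight());
        g.scale(1, -1);

        g.drawRect(10, 10, 50, 50);                // drawn in battlefield coordinates

        g.setTransform(saved);                     // restore, like MirroredGraphics.release()
        g.dispose();
    }
}
```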
When the feature has been implemented, we should also include it in some of the sample bots. In fact, I did not include the `paint()` method for some of the sample bots in Tank Royale, so we could just port these missing methods from the original Robocode's sample bots to the current ones.
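As a hypothetical example of what such a sample-bot paint method could look like against the `Graphics2D` provided by the prototype (the method name, coordinates, and the off-screen harness are made up for illustration):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class SamplePaintDemo {
    // Hypothetical debug painting for a sample bot: highlight the last scanned position.
    static void paintDebug(Graphics2D g, double scannedX, double scannedY) {
        g.setColor(new Color(0xff, 0x00, 0x00, 0x80)); // semi-transparent red
        g.fillRect((int) scannedX - 20, (int) scannedY - 20, 40, 40);
    }

    public static void main(String[] args) {
        // Exercise the method against an off-screen image so the sketch is self-contained.
        BufferedImage canvas = new BufferedImage(800, 600, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = canvas.createGraphics();
        paintDebug(g, 400, 300);
        g.dispose();
    }
}
```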
I will check out the code and try out your work to see if we need to adjust something, and get a feel for the changes as well.
Thank you for the extensive reply! If you would rather not go with SVG for debugging, there is no expectation from me that you do simply because I invested some time in this prototype. My motivation was just to validate my approach. I would be curious to hear your concerns, however. I would make the argument that SVG is nothing but a well-established format of graphical primitives.
> Later on, we can add support for Web (JavaScript/TypeScript/WebAssembly?) and perhaps Python. I guess all of those have great support for SVG.
For Python, there is drawsvg, which is indeed a very good API. For the web you could use any of the well-established drawing libraries.
> Your solution of using the drawing context (`Graphics2D`) would also make it easier to port this feature to the Robocode Bridge.
This was one of my primary motivations for going this route!
I fully agree that it needs to be possible to enable/disable graphical debugging per bot. However, there are different ways to make that happen. The one you're proposing sounds like the best, but also the most complicated, solution. I imagine bots would have an extra flag on the server defining whether that particular bot is allowed to send debug information. It could be changed by any controller, and the server would forward debug information only for bots that have this flag set. Bots are also informed whether they are currently permitted to send debug information. I would also like to make this flag configurable via environment variables, the command line, or something else, so you don't always need to click the button while developing a bot.
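A rough sketch of what that developer-side default could look like in the Bot API (the `BOT_DEBUG_GRAPHICS` variable and `bot.debugGraphics` property names are hypothetical, chosen only for illustration):

```java
// Hedged sketch: default the per-bot debug flag from the environment or a system property.
public class DebugFlagConfig {
    static boolean debugGraphicsRequested() {
        String env = System.getenv("BOT_DEBUG_GRAPHICS");       // hypothetical variable name
        String prop = System.getProperty("bot.debugGraphics");  // hypothetical property name
        return Boolean.parseBoolean(prop != null ? prop : env);
    }

    public static void main(String[] args) {
        System.out.println("Request debug graphics: " + debugGraphicsRequested());
    }
}
```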
Yes, I believe picking a native API makes more sense than porting Graphics2D to .NET. However, I'm not sure that Svg.Skia, etc. are a good fit. I have no prior experience with them, but it seems to me they all mainly support rendering an SVG to screen. This would be useful for writing a GUI in C#, but not so much for the Bot API. I currently don't see anything in them for creating SVGs. `System.Drawing.Graphics` is only available on Windows, so I would prefer to go a different route.
I'm not currently sure what the best library to use would be, but as a fallback we could always implement our own, supporting the same primitives you would have otherwise supported explicitly in the API.
EDIT: VectSharp looks fairly nice and has a similar API to Graphics2D.
Interesting! I was hoping you had something to share from the original Robocode. I'll take a look at `MirroredGraphics`. To me, this indicates that the coordinate transform should be handled on the side of the bot API rather than the side of the GUI, as it is in the current solution.
In my last bot, I did draw text extensively for debugging, and the coordinate transform was fully transparent, which I suppose is the ideal.
I did add some arbitrary drawing commands to Crazy and verified they worked as I expected. However, because they were not sensible, I didn't add them to the PR. I expect the existing draw commands could be ported 1:1 for the sample bots.
First of all, I checked out your code for Graphical Debugging, got it up and running, and did some simple painting from a bot.
The reasons why I have been reluctant to use SVG are mainly:

- the overhead of parsing the XML, and
- the size of the SVG being transmitted over the network.

But in the end, this might not be a problem if the XML is fast to parse, and lengthy SVG is not a problem when transmitted over the network.
Currently, the generated SVG for a red filled rectangle looks like this:
```xml
<!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.0//EN'
          'http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd'>
<svg xmlns:xlink="http://www.w3.org/1999/xlink" style="fill-opacity:1; color-rendering:auto; color-interpolation:auto; text-rendering:auto; stroke:black; stroke-linecap:square; stroke-miterlimit:10; shape-rendering:auto; stroke-opacity:1; fill:black; stroke-dasharray:none; font-weight:normal; stroke-width:1; font-family:'Dialog'; font-style:normal; stroke-linejoin:miter; font-size:12px; stroke-dashoffset:0; image-rendering:auto;" width="800" height="600" xmlns="http://www.w3.org/2000/svg"
><!--Generated by the Batik Graphics2D SVG Generator--><defs id="genericDefs"
/><g
><g style="fill:rgb(255,0,0); fill-opacity:0.498; stroke-opacity:0.498; stroke:rgb(255,0,0);"
><rect x="10" width="50" height="50" y="10" style="stroke:none;"
/></g
></g
></svg
>
```
But we might optimize it to this:
```xml
<svg width="800" height="600" xmlns="http://www.w3.org/2000/svg">
  <g><rect x="10" y="10" width="50" height="50" fill="rgba(255,0,0,0.498)"/></g>
</svg>
```
But this is an optimization issue, and I don't think we need to handle this now.
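For reference, here is a minimal, self-contained sketch of producing such an SVG with Batik's `SVGGraphics2D`, which, per the comment in the output above, appears to be the generator in use (the exact wiring in the prototype may differ):

```java
import java.awt.Color;
import java.io.StringWriter;

import org.apache.batik.dom.GenericDOMImplementation;
import org.apache.batik.svggen.SVGGraphics2D;
import org.w3c.dom.DOMImplementation;
import org.w3c.dom.Document;

public class BatikRedRectDemo {
    public static void main(String[] args) throws Exception {
        DOMImplementation dom = GenericDOMImplementation.getDOMImplementation();
        Document doc = dom.createDocument("http://www.w3.org/2000/svg", "svg", null);

        SVGGraphics2D g = new SVGGraphics2D(doc);
        g.setColor(new Color(255, 0, 0, 127));  // semi-transparent red (alpha 127/255 ≈ 0.498)
        g.fillRect(10, 10, 50, 50);

        StringWriter out = new StringWriter();
        g.stream(out, true);                    // true = use CSS style attributes
        System.out.println(out);                // verbose SVG similar to the listing above
    }
}
```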
Great to hear it's working for you as well. I appreciate your reply and your reasoning. I don't think any of these points are a showstopper, though. SVG is much more powerful than the graphical debugging primitives need to be, but I very much enjoyed making graphical debugging output pretty, so this extra power may also be a benefit. And we don't need to implement SVG ourselves.
Regarding performance, there is probably a small penalty compared to a simpler list of primitives directly in JSON, but I expect this to be negligible. I don't expect to need graphical debugging at tick rates higher than, say, 100 TPS since you wouldn't make out much at that speed anyway. Since SVG compresses extremely well, we're talking about a few extra kbps here, which would barely have an impact on any modern connection. If we're communicating locally only (which is the likely scenario for debugging), data transfer is virtually instant since most OSes optimize local connections to bypass the majority of the network stack.
As for parsing and drawing, I have yet to conduct an experiment with complex graphics, but I don't believe we're going to hit a bottleneck.
As for the SVG generated by Batik, yes, it is very verbose and can certainly be optimized, but I'd look at that only if performance actually becomes a problem.
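As a quick, informal way to sanity-check the compression argument, one could gzip a generated SVG and compare sizes (illustrative only; the repeated rectangle is just stand-in content for a busier debug frame):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class SvgCompressionCheck {
    public static void main(String[] args) throws Exception {
        // Repeat a simple debug shape to mimic a busier frame of debug graphics.
        StringBuilder svg = new StringBuilder("<svg xmlns=\"http://www.w3.org/2000/svg\">");
        for (int i = 0; i < 200; i++) {
            svg.append("<rect x=\"").append(i)
               .append("\" y=\"10\" width=\"50\" height=\"50\" fill=\"rgba(255,0,0,0.5)\"/>");
        }
        svg.append("</svg>");
        byte[] raw = svg.toString().getBytes(StandardCharsets.UTF_8);

        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(raw);
        }
        System.out.printf("raw: %d bytes, gzipped: %d bytes%n", raw.length, buf.size());
    }
}
```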
I'll take a look at implementing the debugging on/off functionality over the weekend.
Regarding trying out complex painting, I could make a branch of the Robocode Bridge which can take advantage of the `getGraphics()` method to implement `paint(Graphics2D)`. There should be a lot of legacy robots out there with advanced debug painting we could try out.
I just made debug graphics toggleable for each bot. The current implementation has the following behavior:

- A bot can query whether it is currently allowed to send debug graphics.
- There is currently no `onPaint` event as there was in the original Robocode, but it should be fairly straightforward to generate this as an artificial event.

These days I don't work with Java a lot and I wasn't always 100% clear on which way of doing things is most in line with the design philosophy, so I'm sure the code isn't super clean. But at least it works ;)
I think it'd be great to test more complex scenarios with the bridge.
Regarding the coordinate transform, I think we should decide to either:

- handle the transform in each bot API, or
- handle it once in the GUI.
The first is definitely the most flexible way, but also more work than the second, since I expect there are going to be more bot APIs than there are viable GUIs. The current strategy is the second one, but only because it was the easiest for the proof of concept. I'm totally down to change that. Do you have a preference?
I have merged my work on the server, where the logging framework was changed, and fixed conflicts I caused due to that.
It looks like you did a really good job with the recent changes as well, and you seem to hit the design philosophy pretty well, so no worries.
Regarding the coordinate transform, I will have a look at that. I already tried out a couple of things on the UI side, but it did not work correctly. It would be great if the GUI takes care of it, so the bot APIs do not have to deal with this, especially when more Bot APIs are added.
> A bot can query whether it is currently allowed to send debug graphics.
Regarding adding `isDebuggingEnabled()` as a new method on the public API: I don't think this is necessary, as it is not very useful for the bot developer. We should just let the developer do the painting regardless of the debugging feature being enabled or disabled. This would make it easier to enable and disable debug painting on the fly without the bot developer having to worry about this detail.
It might be better to handle this in the bot internals and check whether the bot is allowed to send the debug painting or not. And you did a great job here by adding `isDebuggingEnabled` to the `BotState` sent from the server. That is perfect.
> There is currently no `onPaint` event as there was in the original Robocode, but it should be fairly straightforward to generate this as an artificial event.

I think the way you implement it makes sense, and you are right that this should be easy to incorporate into the Robocode Bridge as an artificial event.
I have not had a look at the server changes yet, but I need to dive into those as well.
Thanks for taking care of that merge conflict! I have an idea for how to handle the text mirroring in the GUI. I'll take a look at it soon. However, I'm thinking of adding an escape hatch for advanced use cases where the bot can opt to handle everything itself.
I agree that in most cases it would be fine for the bot to always paint and for the sake of simplicity, they probably should. However, I do have a use case for querying this. In a bot I developed for legacy Robocode, calculating the information needed for graphical debugging was quite computationally intensive and was not needed for normal operation. It is cases like this one I wanted to support with the method.
The bot internals already do check this and do not send debugging information when the flag is false so as not to waste network traffic.
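A minimal sketch of that kind of guard (the class and method names here are hypothetical, not the actual Bot API internals):

```java
import java.util.Optional;

// Hedged sketch of the guard described above: only forward the rendered SVG when the
// server-provided flag says debugging is enabled for this bot. Names are hypothetical.
public class DebugGraphicsGuard {
    private volatile boolean debuggingEnabled; // updated from BotState.isDebuggingEnabled

    public void onBotState(boolean isDebuggingEnabled) {
        this.debuggingEnabled = isDebuggingEnabled;
    }

    public Optional<String> debugGraphicsForNextTurn(String renderedSvg) {
        return debuggingEnabled ? Optional.of(renderedSvg) : Optional.empty();
    }

    public static void main(String[] args) {
        DebugGraphicsGuard guard = new DebugGraphicsGuard();
        System.out.println(guard.debugGraphicsForNextTurn("<svg/>")); // Optional.empty
        guard.onBotState(true);
        System.out.println(guard.debugGraphicsForNextTurn("<svg/>")); // Optional[<svg/>]
    }
}
```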
Okay. Let us keep the `isDebuggingEnabled()` method in the public API.
I did update the code with some of your changes:
The schema `set-debugging-enabled-for-bot` was generalised into `bot-policy-update`. This way we can support more flags being added in the future, and we don't need to add a new schema for each one, just additional fields.
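For illustration, the shape of such a policy update might look roughly like this on the Java side (field names are made up; the schema file itself is the authoritative definition):

```java
// Illustrative only: a policy update identifies the bot and carries optional flags,
// so new policies can be added later without introducing new schemas.
public record BotPolicyUpdate(int botId, Boolean debuggingEnabled /* future flags go here */) {
    public static void main(String[] args) {
        System.out.println(new BotPolicyUpdate(1, true));
    }
}
```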
Please go ahead with the "mirrored text issue" if you have a good idea of how to fix it.
I see that something is off with the server logging. I will fix that!
I like that change!
My idea was to inject the following CSS styles into the debug graphics:
```css
text {
  transform-origin: center center;
  transform-box: fill-box;
  transform: scaleY(-1);
}
```
This works as expected in my web implementation; however, JSVG does not yet support the required `transform-box` property. I opened an issue upstream at weisJ/jsvg#98. If that could be added, I think this is a pretty elegant solution that will also translate to other GUIs in the future.
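For concreteness, a sketch of where such a style block would end up in the generated debug SVG (the surrounding markup is just an example, not the actual GUI code):

```java
// Illustrative only: the GUI (or bot API) would prepend a <style> element to the
// debug SVG so that <text> elements are flipped back to be readable.
public class TextMirrorFixDemo {
    public static void main(String[] args) {
        String debugSvg = """
                <svg width="800" height="600" xmlns="http://www.w3.org/2000/svg">
                  <style>
                    text { transform-origin: center center; transform-box: fill-box; transform: scaleY(-1); }
                  </style>
                  <text x="100" y="100">target lock</text>
                </svg>
                """;
        System.out.println(debugSvg);
    }
}
```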
Controlling the text using CSS is a brilliant idea!
The idea explained above works as intended now. I have submitted a PR upstream at weisJ/jsvg#99, but until that is merged I have included a build of JSVG in the repo.
Nice!
When we have everything needed in place, I think we should merge it into main as the PR is big. Then we can extend the C# API later.
Sure! I'm happy to go that route. I'm currently working to get that PR to JSVG merged.
Hello again!
I kept thinking about #114, and the more I did, the more I came to believe that SVG is a perfectly good abstraction for transporting graphical debugging information, one that doesn't require inventing a new primitive language.
I wrote a prototype implementation to validate the approach and it's working great! This provides a full Graphics2D object on the bot side and draws the results in the GUI. I was also able to trivially add those graphics to my WIP web client. Since the Graphics2D API is available on the Java side, the bridge could also be extended with support for graphical debugging.
This prototype is not yet in a state to be merged, but I wanted to share my efforts. I would expect that the following items still need to be tackled:
**Enabling/disabling graphical debugging**
I see multiple ways forward here. We could add a switch in the GUI, similar to legacy Robocode, and transmit this permission to the robot, which is then permitted to send debugging graphics that will be rendered.
Alternatively, a simple command line switch could be introduced that causes the GUI to draw debug information. A similar switch could then be added for the bot to send debug information. Unsolicited debug information would be dropped.
**Extending the API**
I believe it is reasonable to have the drawing APIs be different between platforms, so that each platform can rely on native libraries that integrate well with the language. If, however, you want to ensure a very similar API surface, one could introduce a simple SVG generator that allows drawing primitives such as rectangles, circles, etc. that could be ported to all platforms. Bot authors would still have the option of using a more powerful library.
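A minimal sketch of what such a portable primitive-drawing helper could look like (the names and the set of primitives are illustrative only, not a proposed final API):

```java
// Illustrative sketch of a tiny, easily portable SVG builder exposing a few primitives.
public class SimpleSvgBuilder {
    private final StringBuilder body = new StringBuilder();
    private final int width, height;

    public SimpleSvgBuilder(int width, int height) {
        this.width = width;
        this.height = height;
    }

    public SimpleSvgBuilder rect(double x, double y, double w, double h, String fill) {
        body.append(String.format(
                "<rect x=\"%s\" y=\"%s\" width=\"%s\" height=\"%s\" fill=\"%s\"/>", x, y, w, h, fill));
        return this;
    }

    public SimpleSvgBuilder circle(double cx, double cy, double r, String stroke) {
        body.append(String.format(
                "<circle cx=\"%s\" cy=\"%s\" r=\"%s\" fill=\"none\" stroke=\"%s\"/>", cx, cy, r, stroke));
        return this;
    }

    public String toSvg() {
        return "<svg width=\"" + width + "\" height=\"" + height
                + "\" xmlns=\"http://www.w3.org/2000/svg\">" + body + "</svg>";
    }

    public static void main(String[] args) {
        System.out.println(new SimpleSvgBuilder(800, 600)
                .rect(10, 10, 50, 50, "rgba(255,0,0,0.5)")
                .circle(400, 300, 20, "blue")
                .toSvg());
    }
}
```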
**Coordinate Transform**
SVG's coordinate origin is in the top left, just like AWT's. Thus the transform you already have for Graphics2D in the context directly applies to the SVG as well, and no work is needed on the side of the bot API. However, text requires special consideration so that it is not flipped upside down. This is not something I have solved yet. I expect that injecting some CSS might help, but it may remain a hacky solution.
Another approach would be to not do anything on the side of the GUI and let the respective bot APIs handle it. This might be the more robust solution but potentially requires more code.
I'd be happy to hear your opinion on this approach!