Closed. Meorge closed this 1 year ago.
It may seem the comments haven't been properly attached; I'll re-write them.
I also have a doubt. Does V4 code interact with V3 code in any way? If not, why didn't you replace the V3 code instead of putting V4 in a separate folder?
V4 code doesn't interact with V3 code at all, except for the fact that the current main user-facing API uses the same Comment
class as V3 code. This makes it so that it's a simple one-line change to convert from V3 to V4: all you have to do is change the line
from objection_engine.renderer import render_comment_list
to
from objection_engine.v4.make_movie import render_comment_list
I opted to not replace the V3 code so that it could be used as a fallback in case we experienced issues with the V4 engine.
Finally, I really like the work you've done here; you've basically rewritten the entire thing. I've yet to do more revisions and testing, but I can already tell you I'm really grateful.
Thanks! I hope it will open the door to doing more features in the future, like maybe HD Trilogy versions or special character reactions.
I just realized... I think I never made it save the video with the user-inputted filename - instead it'll always output the video as "output-{timestamp}.mp4" no matter what name the user requests. Oops.
I don't think I have time to get that fixed right now, but I should be able to get around to it in the next couple of days.
Changes that should be made:
- test_compare_new_engine_wrap.py
- font_constants.py into old font_constants.py
Most of the changes have been made now. I didn't do anything about font_constants.py yet, since I'm not sure if it should be merged (right now, I'm leaning more towards keeping it a separate file, so that v4 can have its own assets folder independent of v3 and vice versa).
Performance needs an in-depth check. I suspect imports and start-up time are performing way worse on v4. On my Arch Linux machine, the v4 version (test_easy) gives these results:
real 3m1.776s
user 1m36.427s
sys 0m43.500s
While the v3 version (text_rtl) gives this:
real 0m43.572s
user 0m19.292s
sys 0m4.792s
The performance is 3 times worse. This is really weird. @Meorge, can you confirm whether you can replicate the performance impact on your own machine? In that case it's time for some profiling :P (I can do it myself, don't you worry)
Oof, yeah - three times worse performance is definitely not great, and based on a quick test of mine that might even be too optimistic. I put together a script to compare the run time of the v3 and v4 engines on the same comment chain, and got the following results (using timeit.default_timer()):
Time for v3: 6.86 s
Time for v4: 50.02 s
The v4 engine displays the ratio of "seconds-per-second" for each box (i.e. how long it took each second of the box in the final video to be rendered) and for the most part they're still under 1.0, with one bizarre exception:
" " Render 0.00 sec Visible 0.03 sec 0.01 SPS
"Hello it's me, the first guy. " Render 2.50 sec Visible 5.57 sec 0.45 SPS
"I am going to say a few more s" Render 6.55 sec Visible 4.87 sec 1.35 SPS
"هذا نص عربي. أنا أستخدم مترجم" Render 5.87 sec Visible 7.10 sec 0.83 SPS
"Here's a few more lines, becau" Render 2.45 sec Visible 5.37 sec 0.46 SPS
"And, last but not least, here'" Render 1.92 sec Visible 4.43 sec 0.43 SPS
"I have the second most comment" Render 2.49 sec Visible 4.40 sec 0.57 SPS
"Why don't I do a little talkin" Render 1.50 sec Visible 3.60 sec 0.42 SPS
"I have very few lines, so I'm " Render 2.43 sec Visible 5.10 sec 0.48 SPS
"I'm also a random person! " Render 1.54 sec Visible 3.10 sec 0.50 SPS
"Hey it's me, the first random " Render 1.78 sec Visible 3.77 sec 0.47 SPS
"And last but not least, it's P" Render 2.08 sec Visible 5.23 sec 0.40 SPS
I'll push this script to the branch so that you can use it for testing if you'd like. I wish I had more time to spend on profiling/analyzing where the bottlenecks are, but unfortunately I don't think I'll be able to focus on it until sometime next week possibly.
Thanks for the script. I shall do some profiling myself, but possibly not on New Year's Eve xD
To add more questions (sorry, but this is a big MR, so loads of questions): while a rewrite this big could be considered a major version, this project aims to use semver, which means changes in internal code are not relevant to the versioning, only API exposure.
So the API itself, as I've gathered, is fully backwards compatible. I'm more worried about the asset files. However, the only big change has been adding settings to the sprites section, which could be easily handled with an update script (I may write one myself!). These changes would need to be made (as always, I can take care of them) before merging so this can keep being Objection Engine 3.x, and users (including myself) wouldn't need to change a thing to get their new and shiny rendering engine.
I write this to ask your opinion and whether you know of other potential breakers of backwards compatibility. To be extra clear, I'm not really opposed to bumping to v4; I just think it can be easily avoided in this case.
I want to add that I've edited your first post to serve as a checklist of what needs to be done. Thanks!
Thanks for the script. I shall do some profiling myself, but possibly not on New Year's Eve xD
No problem! Of course it'll be most helpful to look at the profiling data once we have it, but currently my prediction/concern is that there may not be one particular time-hogger, but instead that the difference in time may just be due to the way the engines are built. As I understood it, the old engine was built very specifically to draw the graphics necessary for what it wanted to do at that time. The new engine is a bit more abstract, like a game engine without user input and a fixed frame-rate: it loops through all the objects in the scene to run an "update" function on them, then loops through them again to render them to the frame. This approach makes it much more flexible and easy to add new objects/features/scenes/etc, but based on this initial analysis it may come at a heavy time cost.
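The two-phase loop described above can be sketched roughly like this. All class and method names here are illustrative stand-ins, not the engine's actual API:

```python
class SceneObject:
    """Illustrative stand-in for the engine's scene objects (not the real API)."""

    def update(self, dt: float) -> None:
        pass

    def render(self, frame: list) -> None:
        pass


class Blinker(SceneObject):
    """Toggles visibility twice per second, driven only by the fixed timestep."""

    def __init__(self) -> None:
        self.t = 0.0
        self.visible = True

    def update(self, dt: float) -> None:
        self.t += dt
        self.visible = int(self.t * 2) % 2 == 0

    def render(self, frame: list) -> None:
        if self.visible:
            frame.append("blinker")


def render_movie(objects: list, fps: int = 30, seconds: int = 1) -> list:
    frames = []
    for _ in range(fps * seconds):
        for obj in objects:       # phase 1: advance every object's state
            obj.update(1 / fps)
        frame = []
        for obj in objects:       # phase 2: draw every object into the frame
            obj.render(frame)
        frames.append(frame)
    return frames


frames = render_movie([Blinker()])
print(len(frames))  # 30
```

The flexibility comes from the fact that adding a new feature is just adding another SceneObject subclass to the list - but every frame pays for two full passes over all objects.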
Hopefully we'll be able to pin down some parts to optimize, but there was another thing I thought of the other day that may be worth discussing here. For the bots, we may be able to implement a scheduling/priority system which attempts to maximize throughput. So if a user requests a 100-message video, and then halfway through rendering it, another user requests a 2-message video, the 2-message video would get prioritized and finished before attempting to complete the 100-message video.
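A minimal sketch of that shortest-job-first idea, using a plain in-process heap rather than RabbitMQ priorities (all names here are hypothetical):

```python
import heapq


class RenderQueue:
    """Sketch of shortest-job-first scheduling for render requests."""

    def __init__(self) -> None:
        self._heap = []
        self._counter = 0  # tie-breaker: FIFO among jobs of equal size

    def submit(self, comment_count: int, job_id: str) -> None:
        heapq.heappush(self._heap, (comment_count, self._counter, job_id))
        self._counter += 1

    def next_job(self) -> str:
        # smallest comment count first, so short requests don't starve
        return heapq.heappop(self._heap)[2]


q = RenderQueue()
q.submit(100, "long-video")   # submitted first...
q.submit(2, "short-video")    # ...but this one is much smaller
print(q.next_job())  # short-video
```

In a real deployment this decision would live in the message broker (e.g. per-message priorities) rather than in the worker itself.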
sorry, but this is a big MR so loads of questions
It makes total sense to me!
As far as I know, this pull request shouldn't contain any breaking changes. It just adds the option to use the new engine instead of the old engine (and even then, code change is required to move it to the new engine, so existing code using the library will see no change in output). In that regard, I suppose this pull request would make more sense as v3.3.0, since according to the semver link, minor versions are for when "you add functionality in a backwards compatible manner". That being said, it might also be confusing to have the code/code history refer to a "v4" engine if the official version of the library overall is still at 3.3. So I feel like it could go either way, but strictly from the semver perspective, v3.3.0 would probably fit better.
The performance issues were continuing to bug me, so perhaps against my better judgement I went and started trying to profile it. This was the result I got:
As you can see, the Image.save() function is taking a huge chunk of the time, around 34.5%. The way the new engine works currently is by rendering each frame and saving it to a PNG file, then concatenating all of those PNGs into a video using FFMPEG. It looks like the old engine just keeps all of the Pillow Image data in memory, then writes it all to a video file at once using cv2 (see video.py). I'll see if I can try implementing this in the new engine soon.
Yup that should speed things up
One warning, though: if you keep all images in RAM, then for long videos RAM usage can get out of control. This was actually a patch I made in the V3 engine, where for each X seconds of video it would actually render the video and
I just pushed the change from the individual PNGs to the single video as you posted the comment, haha. I hadn't thought of the RAM issue, but that's a really good point - I'll have to make sure to add something to take care of that.
Typing on phone is hard
So I was saying: you have to flush the buffer each X seconds and concat it later. Check this commit for info: https://github.com/LuisMayo/objection_engine/commit/01a29a171c9dbd7ec07a925d5e8d40a582d9579c
As you suspected, it looks like import/initialization is causing a lot of slowdown.
According to this, almost 9 seconds are being taken to initialize the sentiment analysis model. 8 seconds are being taken to check all of the sprites in the character folder to confirm they are usable and remove any corrupted characters from the pool. These are both one-time operations, though, so at least in theory you should be able to run them once on bot startup then render lots of comment chains without having to reload the data each time. It would require using the new engine's API, though.
Ok, so forget what I said in the last message. It seems that using OpenCV you can keep writing a video in a live way, so we avoid having to write out several videos only to concat them later.
For each image you can just flush it to video https://docs.opencv.org/3.4/dd/d43/tutorial_py_video_display.html
Great! I think that's how I have it set up right now, so it wouldn't be necessary to push a fix for it.
As for the sprite verification, I may be able to add an optional parameter to render_comment_list that tells it to skip the verification process and/or only verify characters it wants to assign to users. That should hopefully speed it up a lot. I'm not sure what we could do for the sentiment analysis model, aside from loading it once on bot startup and not throwing it away after a single "job" is done.
Ok I'm at PC again which means I can type a bit more comfortably now.
About the import time, don't sweat it. Imports are done once during the initialization of the bot/script and not re-done until a reboot is performed. This means it shouldn't be an issue for us.
About the priority thing: as you know, I'm rewriting the bots to work as a queued RPC system, where jobs are submitted to a RabbitMQ queue. I was already de-prioritizing renders if the user had pending renders already, so I can perfectly up-prioritize renders that are smaller. The worker itself can be found here (unstable version, things will change): https://github.com/LuisMayo/Objection-Engine-Rabbit-Worker. The only bot that has code available to work with it is the Twitter bot, in a branch. The Telegram bot and possibly the Discord bots are going to be fully rewritten in another language, since to be honest I don't like Python that much (sorry, I know you do!), and since I have to rewrite good chunks of them already to make them async code, I may as well do it in TypeScript (which has better async support anyway)
The sprite verification, I gather, is only done during load-up, right? Then the same logic as point 1 applies. As long as it's a one-time operation it's fine.
Thanks.
About the import time, don't sweat it. Imports are done once during the initialization of the bot/script and not re-done until a reboot is performed. This means it shouldn't be an issue for us. ... The sprite verification, I gather, is only done during load-up, right? Then the same logic as point 1 applies. As long as it's a one-time operation it's fine.
The initialization steps (loading character data and the sentiment analysis model) are done every time render_comment_list is called, because they run within the constructor for DialogueBoxBuilder. The function, without all of the progress callbacks, is basically just:
def render_comment_list(
    comment_list: list["Comment"],
    output_filename: str = "hello.mp4",
    music_code: str = "pwr",
    resolution_scale: int = 1,
    assigned_characters: dict = None,
    adult_mode: bool = False,
    avoid_spoiler_sprites: bool = False,
):
    builder = DialogueBoxBuilder()  # <--- sentiment model loaded and characters verified in here
    builder.render(
        comment_list,
        output_filename=output_filename,
        music_code=music_code,
        assigned_characters=assigned_characters,
        adult_mode=adult_mode,
        avoid_spoiler_sprites=avoid_spoiler_sprites,
        resolution_scale=resolution_scale,
    )
So, if you created a DialogueBoxBuilder object on startup and then reused it for each render, I think it would cut down significantly on the render time for subsequent videos.
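To illustrate the reuse pattern with a self-contained stand-in class (since the real DialogueBoxBuilder constructor is what does the heavy loading, the class below only mimics that shape):

```python
class DialogueBoxBuilderSketch:
    """Stand-in for DialogueBoxBuilder: heavy work in __init__, cheap render()."""

    init_count = 0  # track how many times the expensive setup runs

    def __init__(self) -> None:
        DialogueBoxBuilderSketch.init_count += 1
        self.model = "sentiment-model"              # pretend: ~9 s to load
        self.characters = ["phoenix", "edgeworth"]  # pretend: ~8 s to verify

    def render(self, comments: list, output_filename: str = "out.mp4") -> str:
        return f"{output_filename}: {len(comments)} comments"


# One builder created at startup, reused across jobs: the expensive
# initialization happens once instead of once per video.
builder = DialogueBoxBuilderSketch()
results = [builder.render(["a", "b"]), builder.render(["c"])]
print(DialogueBoxBuilderSketch.init_count)  # 1
```

This only pays off if the builder really is safe to reuse, which is exactly the side-effects question discussed below.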
The Telegram bot and possibly the Discord bots are going to be fully rewritten in another language, since to be honest I don't like Python that much (sorry, I know you do!), and since I have to rewrite good chunks of them already to make them async code, I may as well do it in TypeScript (which has better async support anyway)
No problem! While I like Python, I definitely know other languages have better support for some things and/or are more efficient. I recall in particular hearing that the current Python libraries for building Discord bots were going to become outdated soon and that JS/TS was more than likely the best way to go on that front.
Thinking about this more, it might make sense for us to pass a sentiment analyzer to the DialogueBoxBuilder object when creating it. That way we can avoid unnecessarily reloading it, while also making it easier to switch it out for other models/functions if we decide we want to do that. For the character verification, I imagine it should be possible to have it only verify characters once it thinks it may want to use them for a specific user, instead of every character every time.
Does the builder use only one sentiment analyser engine for the whole text?
Apart from that, does the builder have any property that would make sense for us to share across different videos? That is, can we reuse the same builder over and over, or might it have side effects?
Does the builder use only one sentiment analyser engine for the whole text?
Yes, it uses the same sentiment analyzer for all texts that it processes, and (to my knowledge) there's no "memory" within the analyzer itself, so later outputs won't be impacted by earlier ones. As such, we should be able to create one on initial startup and then reuse it for as many builders/engines as we need.
Apart from that, does the builder have any property that would make sense for us to share across different videos? That is, can we reuse the same builder over and over, or might it have side effects?
I'll have to look into this more when I'm able (which probably won't be for around a week, unfortunately). I think, at the present moment, it may indeed have side effects. Splitting it up to prevent side effects may take some work, but I think it'd be well worth it.
I've made some optimizations that have cut down the rendering time significantly!
With these, profiling shows that the actual rendering of the comment list is now taking only about twice as long as the old engine! Here's a screenshot of the most recent profiling result:
Within the render function, a lot of the time is apparently taken up by drawing text and compositing the layers together:
I'm not sure what to do for the text, but for the compositing, there may be some cases where we can use paste() instead of alpha_composite() for better performance.
I replaced alpha_composite() with paste(), and now we're down to around 13 seconds versus around 7.5 seconds for the old renderer!
At this point, I'm not sure what further optimization could be done - possibly something with the text, but I'm still not sure what exactly. I thought it might be possible that more time is spent in the text rendering just because the text is displayed slower, but I tried lowering the value and didn't see a substantial improvement.
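A small self-contained comparison of the two Pillow calls. The key trade-off: paste() with a mask copies pixels through the mask rather than blending, so it only produces the same result as alpha_composite() where the overlay's alpha is fully 0 or fully 255:

```python
from PIL import Image

base = Image.new("RGBA", (256, 192), (255, 0, 0, 255))
overlay = Image.new("RGBA", (256, 192), (0, 0, 255, 255))

# alpha_composite: true per-pixel alpha blending, but allocates a new image
blended = Image.alpha_composite(base, overlay)

# paste with the overlay's own alpha channel as the mask: in-place and cheaper,
# but it copies pixels instead of blending, so semi-transparent edges
# (e.g. anti-aliased sprites) will look different from alpha_composite
base.paste(overlay, (0, 0), overlay)

print(base.getpixel((0, 0)))  # (0, 0, 255, 255)
```

So swapping the calls is safe for fully opaque layers like backgrounds and text boxes, but layers with partial transparency may still need alpha_composite().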
Hi again.
After a long time, I've finally found some more time to check on this.
Thanks again for your work. I'm terribly sorry this didn't get merged yet; I'm working on it, I promise.
No problem at all! :)
For the profiler, I use SnakeViz. It's been a while since I last used it, but I'm pretty sure I just followed the tutorial steps and it worked great! Thanks for catching the bug, I'm sorry I didn't notice it before.
It occurred to me recently that I hadn't tested the engine with long words that couldn't be wrapped on whitespace. I added a test script to test it, and this was the result:
https://user-images.githubusercontent.com/9957987/220001806-c590657f-286e-4236-bbcc-d180fdd3c5ac.mp4
To compare, this is what the current renderer gives:
https://user-images.githubusercontent.com/9957987/220001848-f1eab585-0d8e-4a54-ad4c-55aafe04fef8.mp4
Do you think this is something worth investigating further and trying to fix for this initial merge?
Sorry I never got to answer.
I'm afraid yes, it is indeed kind of important. This is because some languages do not use spaces to separate their words. I guess they'd be broken in that case?
I thought I'd remembered that I had tested it on the sample scripts and found no issues when it came to languages without spaces... apparently that was not the case.
Here's the example.py script from the original code, but run on the original engine (and with the color tags taken out):
https://user-images.githubusercontent.com/9957987/222469931-073cbe1e-1788-48d1-ac76-7b683d814cc8.mp4
The text wrapping issue happens with Japanese at about 0:33 and with Chinese at about 01:28.
I'm sorry I didn't catch this earlier; today and later this week, I'll try to look more into the issue and see how we might be able to fix it. If you by chance have time to look into it as well, it looks like the line break code is around line 1429 of ace_attorney_scene.py: https://github.com/Meorge/objection_engine/blob/v4/objection_engine/v4/ace_attorney_scene.py#L1429
Thanks, and sorry again!
As you know (I'm stating this for anyone who gets to this thread), this branch is now being tested in the Telegram bot, and a fellow user has encountered an error.
After some tinkering, I've gotten to the error. It seems to be caused by emojis in user names (I haven't tested other extended Unicode characters, but they may be similar). Also, not all emojis trigger it.
I tested it with
from objection_engine.v4.make_movie import render_comment_list
from objection_engine.beans.comment import Comment
comments = [
    Comment(
        user_name="Phoenix®️",
        text_content="Hello. My name is Phoenix. I am a defense attorney.",
    ),
    Comment(user_name="Phoenix", text_content="Here is another line of dialogue."),
    Comment(
        user_name="Edgeworth",
        text_content="I am Edgeworth, because I have the second-most lines.",
    ),
]
render_comment_list(comments)
And I got this error
Warning! Translator couldn't be initialized, fallbacking to unofficial translation engine: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://cloud.google.com/docs/authentication/getting-started
Cast
┌───────────┬──────────────┐
│ User ID   │ Character ID │
├───────────┼──────────────┤
│ Phoenix®️  │ phoenix      │
│ Phoenix   │ edgeworth    │
│ Edgeworth │ larry        │
└───────────┴──────────────┘
neutral neutral neutral
" " Render 0.00 sec Visible 0.03 sec 0.14 SPS
WARNING. NO OPTIMAL FONT FOUND, font: {'path': './assets/igiari/Igiari.ttf'}, font score: 8/9, text "Phoenix®️"
Processing comments... 100% 3/3 0:00:08
Rendering video... 20% 1/5 0:00:00
Traceback (most recent call last):
  File "/home/luis/dev/objection_engine/examples/v4/example_easy.py", line 16, in <module>
    render_comment_list(comments)
  File "/home/luis/.local/lib/python3.10/site-packages/objection_engine/v4/make_movie.py", line 120, in render_comment_list
    builder.render(
  File "/home/luis/.local/lib/python3.10/site-packages/objection_engine/v4/ace_attorney_scene.py", line 1548, in render
    director.render_movie(
  File "/home/luis/.local/lib/python3.10/site-packages/objection_engine/v4/MovieKit.py", line 581, in render_movie
    self.update(1 / self.fps)
  File "/home/luis/.local/lib/python3.10/site-packages/objection_engine/v4/ace_attorney_scene.py", line 703, in update
    self.textbox.namebox.set_text(name)
  File "/home/luis/.local/lib/python3.10/site-packages/objection_engine/v4/ace_attorney_scene.py", line 157, in set_text
    self.font = ImageFont.truetype(font_stuff["path"], size=font_stuff["size"])
KeyError: 'size'
Thanks for noting that! I was able to reproduce it on my end, and I think it should now be fixed. With the following test script (example_emoji.py), I get a video:
from objection_engine.v4.make_movie import render_comment_list
from objection_engine.beans.comment import Comment

comments = [
    Comment(
        user_name="Phoenix®️",
        text_content="Hello. My name is Phoenix. I am a defense attorney.",
    ),
    Comment(user_name="Phoenix๐๐คก๐๏ธโก๏ธ", text_content="Here is another line of dialogue. ๐๐ป๐ฌ๐ท"),
    Comment(
        user_name="Edgeworth",
        text_content="I am Edgeworth, ๐ซ๐ฅน๐ฌ because I have the second-most lines.",
    ),
]
render_comment_list(comments)
https://user-images.githubusercontent.com/9957987/222766559-955c9693-b7d5-424a-81de-b003dd7c6978.mp4
Adding proper emoji support in the future would be really good, but for now at least it won't crash on encountering an emoji.
I just did a quick Google search for line breaking, for the heck of it, and found this: https://github.com/google/budoux I'll have to investigate it a lot more and give a shot at implementing it, but maybe this could be a nice solution to the line break problem?
It seems to only cover Chinese and Japanese, but it's a start!
Yeah, and looking at it more, it also looks like we might have to tell it which language's parser to use... :( As you said though, it's a start and was the result of a super quick search, so there must be more out there!
I implemented a very basic extra step to the line-break algorithm. Before the regular processing, it checks to see if any one word is longer than a line; if so, it breaks it into individual characters, which the regular processing treats as a word each. The characters are marked so that they don't have spaces inserted after them.
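The pre-pass can be sketched like this (a simplification: the real code operates on the engine's own word objects rather than plain strings, and the marker here is just a boolean flag):

```python
def break_long_words(words: list[str], max_chars_per_line: int) -> list[tuple[str, bool]]:
    """Split any word longer than a line into single characters, each
    flagged (False) so the wrapper inserts no trailing space after it."""
    tokens = []
    for word in words:
        if len(word) > max_chars_per_line:
            tokens.extend((ch, False) for ch in word)  # False = no space after
        else:
            tokens.append((word, True))                # True = normal word
    return tokens


tokens = break_long_words(["hello", "Supercalifragilistic"], 10)
print(tokens[0])    # ('hello', True)
print(len(tokens))  # 21
```

The regular wrapping step then treats each flagged character as a one-letter "word", so an over-long word can break at any character boundary instead of overflowing the box.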
Here are videos I rendered with this change:
https://user-images.githubusercontent.com/9957987/224463425-d4c031ff-7ba4-43aa-bee4-6b7dfd7b9719.mp4
https://user-images.githubusercontent.com/9957987/224463430-1a8016ec-7668-4b50-b4b4-99019b49df33.mp4
Most of the old code is now cleaned out, so the new renderer isn't under the v4 submodule(?) anymore. I was able to test it a bit this evening, and renders seemed to be going fine, but it would probably be good to test some more before merging, just to make sure. We might also need to rename some files to maintain API compatibility.
Looking at it more, there are two potential issues I see regarding the API:
The render_comment_list function is currently in make_movie.py, such that to import it, the user runs
from objection_engine.make_movie import render_comment_list
In the old renderer, this line would have been
from objection_engine.renderer import render_comment_list
For this case, I suppose we could just rename make_movie.py to renderer.py to maintain compatibility with old scripts, or add a new renderer.py file that just redirects to the object in make_movie.py, perhaps? (I'm not super experienced with Python module stuff, unfortunately.)
The second one is a little bit more concerning. To import the Comment object, the user runs
from objection_engine.beans.comment import Comment
The beans folder used to have more files in it, but with the new renderer, all it holds is comment.py. It would make sense to me to get rid of the beans folder then and just have comment.py in the objection_engine folder... but this would break existing scripts. :(
How do you think we should handle these changes?
In the first case, you can keep the function in make_movie.py in order to keep your changes. We just need to make a renderer.py which just re-exports the function. That should be possible, I think? Function pointers do exist in Python, after all.
In the second case, I'm afraid we have to keep the folder with only that file. It's not elegant, but it's backwards compatibility for those who may be using the library.
Thanks again!
Got it. The renderer.py fix has been implemented, and the comment.py file has been left unchanged. I went through and cleaned up the examples folder so that all of the scripts should be using the correct import statements now :)
There are a few more tasks I can think of that would be good to have for the merge:
1) Now that the old renderer code has been removed, we may be able to revise the list of Python package dependencies, slimming it down.
2) If we want the system to be able to automatically download the asset folder, I think it'd be wise to figure out a way to handle version control. I think I've seen some tools just use Git for versioning, where it downloads the latest commit on the main branch - maybe we could look into that? I'm not certain that this needs to be done before merging the new renderer code, though. For now we could just direct people to download the assets manually, and work on the auto-updater thing for a future version?
3) Do you think it'd be a good idea to remove the pretty-printing progress bars and such from the render_comment_list function, or at least make them enabled via an optional argument? It occurred to me that while they look nice when run manually in a terminal, they wouldn't work as well for a bot/when written to a log file.
True. But it's not a requirement for the merge
I've been thinking about how to solve this issue for a long time now. The v3->v4 update script is almost done. As for how to handle this from now on, I have 2 ideas:
2.1. Keep doing it like now and dynamically detect the things to fix, then fix them. Like maybe in one update we introduced a new character, so we download it and only it.
2.2. Save the build number/version in a txt file and use that for the update scripts.
It could be settable by an environment variable. But I wouldn't remove it; it's really informative. My logs are really dirty anyway, so it's not a requirement for merge either.
For the things that aren't requirements, that sounds good. Perhaps we can look into those for future small merges.
Is the V3->V4 update script you're referring to one that will handle downloading the assets? If not, is that something I should work on, or will you be doing that since the assets are hosted on your server? Out of the two options you gave, the dynamic detection would probably be better in the long run (we would avoid having to store a full copy of the asset folder for each version, etc), but I could see the code for it getting messy as well.
The v3/v4 script handles the asset downloading, and if the user had custom songs (custom songs were already supported in the previous version) it migrates them to the new format.
Sounds good!
Discussing the asset-downloading process with some people online, it sounds like we could create a Git LFS repository on the server and checkout the commit with the tag that corresponds to the current engine version.
Someone I was talking to brought up an approach to downloading only the necessary files by having an asset_list.txt on the server like so:
Filename                         Version
...
Character_Phoenix_objection.png  5
...
arrow.gif                        1
Then the engine could compare its local asset_list.txt against the server copy, and download any files where the server's asset version is higher than the local asset version.
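That comparison step can be sketched as follows (the file names come from the example list above; parsing asset_list.txt itself is omitted, and "new_sfx.wav" is an invented entry to show the missing-file case):

```python
def files_to_update(local: dict[str, int], server: dict[str, int]) -> list[str]:
    """Return every file that is missing locally or whose server-side
    version number is higher than the local one."""
    return [name for name, version in server.items() if local.get(name, 0) < version]


local = {"Character_Phoenix_objection.png": 4, "arrow.gif": 1}
server = {"Character_Phoenix_objection.png": 5, "arrow.gif": 1, "new_sfx.wav": 1}
print(files_to_update(local, server))
# ['Character_Phoenix_objection.png', 'new_sfx.wav']
```

Treating a locally missing file as version 0 means a fresh install and an incremental update go through the same code path.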
A more brute-force approach might be to save each version of the core asset folder as a zip file on the server, and just replace the current downloaded asset folder with it whenever the engine is updated. Custom assets (characters, music) could be placed in a separate folder that isn't touched by the update scripts. However, this would mean the server would hold a lot of duplicate data. My current assets_v4 folder is about 104 MB, which doesn't seem that big, but if we make lots of incremental updates the redundant file data could get pretty big.
Perhaps the simplest approach, although a bit more tedious for end users, would be to have each feature of Objection check for its necessary files, and raise an error if they are not found, prompting the user to go and download them from a given URL.
Hi!
In c6da87558e7aa1d85a2d5ee2f5a52a3841013f80 you removed the utils.py file.
I've made changes to this file, so this is a classic remove/modified conflict. It's really easy to fix, but now I have a doubt:
Does the new engine code expose any methods for obtaining the music list/validating if a song is available?
Of your suggestions, I do like the asset_list approach. I'm not implementing it in this update, since the change is so massive it's just easier to redownload the full zip file, but it's surely something I'd take a look at in the future.
I've made changes to this file, so this is a classic remove/modified conflict. It's really easy to fix, but now I have a doubt:
Adding it back should be fine, as long as it doesn't lead to Git getting all confused, which has happened to me a few times. We'll just need to add code to the new engine to call the ensure_assets_are_available() function before rendering.
Does the new engine code expose any methods for obtaining the music list/validating if a song is available?
The load_music_data() function in objection_engine.loading can retrieve a list of all of the available music:
>>> from objection_engine.loading import load_music_data
>>> from rich import print
>>> print(load_music_data())
{
'jfa': {'relaxed': ['cross-moderato', 'trial'], 'tense': ['objection', 'press']},
'pwr': {'relaxed': ['cross-moderato', 'trial'], 'tense': ['objection', 'press']},
'tat': {'relaxed': ['cross-moderato', 'trial'], 'tense': ['objection', 'press']}
}
As can be seen in the source code for the function, it looks for folders inside the music asset folder that have a config.toml file. There's currently no validation of the data inside the config.toml file... maybe that should be fixed in a future update.
I'll add it back. And for backwards compatibility reasons, I'm filling in the methods in utils.py, pointing them to the new music method.
This is because some bots (Discord mainly) use the music functions in utils.
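A sketch of what those backwards-compatible wrappers might look like. The legacy function names below are hypothetical stand-ins for whatever the bots actually call, and load_music_data is mocked inline rather than imported, so the snippet stands alone:

```python
def load_music_data() -> dict:
    # stand-in for objection_engine.loading.load_music_data; the real one
    # scans the music asset folder for subfolders containing config.toml
    return {"jfa": {}, "pwr": {}, "tat": {}}


# utils.py-style wrappers that delegate to the new loader so old bot
# code keeps working (these legacy names are hypothetical):
def get_all_music_available() -> list[str]:
    return sorted(load_music_data().keys())


def is_music_available(music_code: str) -> bool:
    return music_code in load_music_data()


print(get_all_music_available())  # ['jfa', 'pwr', 'tat']
print(is_music_available("pwr"))  # True
```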
Thanks!
This should get merged this week.
I pushed a fix for the "invisible Phoenix" glitch. It just adds a command to set his sprite to the default idle one, before any other commands are executed.
While doing that, I did notice that, as far as I can tell, the new renderer doesn't take comments' manual polarity score into consideration; that is, all three of these comments would produce the same results (angry Phoenix):
Comment(
    user_name="Phoenix",
    text_content="I HATE YOU SO MUCH!!",
)
Comment(
    user_name="Phoenix",
    text_content="I HATE YOU SO MUCH!!",
    score=1.0,
)
Comment(
    user_name="Phoenix",
    text_content="I HATE YOU SO MUCH!!",
    score=-1.0,
)
It doesn't look to me like this should be difficult to fix, but I don't have time to do it right now. Hopefully tomorrow or within a few days I should be able to get around to it.
I've pushed the commits with the upgrade script and other backwards compatibility / stability improvements.
There are, for now, only two things left before merging:
For the second point, I'm assuming the method you're referring to is is_renderable() in beans/text.py (https://github.com/LuisMayo/objection_engine/blob/main/objection_engine/beans/text.py#L139)? If so, I gather it and score_font() will need to be kept in the same directory to avoid breaking API compatibility - is this correct?
See issue #97 for discussion of the new engine's features.
In order for this to work, the assets_v4 folder must be placed in the same folder as the original assets folder: https://drive.google.com/open?id=1-2NV-wpXPovJXW3KaI34AXRZFsrqm1oM
Some example scripts are included in the examples/v4 folder.
More dependencies have been added, so make sure to run poetry lock --no-update and poetry install before running an example script.
EDITED BY MAINTAINER (@LuisMayo): Things to solve before merging.
- Save video every X frames or something to avoid using too much RAM
- [x] Fix emoji error