jurialmunkey / plugin.video.themoviedb.helper

GNU General Public License v3.0

[BUG] Random Crashes on Nexus Nightlies #948

Closed xen0m0rph2020 closed 1 year ago

xen0m0rph2020 commented 1 year ago

Skin section

Other

Current Behavior

Randomly, when moving around the home screen or widgets, or when playing video, Kodi crashes and restarts. This does not happen on nightlies prior to 15th October, or on skins without TMDbHelper.

Expected Behavior

Does not crash

Steps To Reproduce

Move down or left or right selection or do nothing

Screenshots and Additional Info

kodi_crashlog_20221130182515.log

jurialmunkey commented 1 year ago

Everyone --

Could you all please try this version: https://github.com/jurialmunkey/plugin.video.themoviedb.helper/archive/refs/heads/log_testing_nexus_noreadahead_offscreen.zip

It sets offscreen=False and also disables readahead. Let's see if we still get crashes or not.

tekker commented 1 year ago

@jurialmunkey Cool. Have just installed, rebooted, and can confirm no addons have reuselanguageinvoker loaded into the system...

crash in same area (navigation within TMDBHelper)

kodi_log_12.log

further crash

kodi_log_13.log

kodi_log_14.log

Up until recently (a few weeks ago) I would get a crash (nothing enlightening in the logs) on a specific random item with TMDBHelper, which would re-occur on that episode; deleting thumbnails / image cache etc. was the best way so I could continue watching my show. It's basically the same pattern of usage / cause, and was exacerbated on FireTV (same build of Nexus / AH2 / TMDB). It seemed like a form of cache or image corruption... May or may not help.

MoojMidge commented 1 year ago

More importantly, the _locked property is only there to prevent the readahead from spawning multiple threads. We only want one readahead so I lock it whilst it is working and then destroy it once it finishes or respawn it as a new thread once the user is no longer idle.

That's what I meant by not being thread safe. The calls to _next_readahead and next_readahead are not guaranteed to complete as an atomic operation, as the GIL only guarantees atomicity for individual bytecode instructions across all running threads.

It is possible for the following to happen:

  1. Thread1 spawns a thread, Thread2, with a target of _next_readahead
  2. Thread2 starts but does not necessarily progress past the check for _locked before the thread scheduler switches threads
  3. Thread1 spawns another thread, Thread3, with a target of _next_readahead

In this scenario it is possible for next_readahead to be called multiple times.

That is the purpose of threading.Lock: acquiring it is an atomic operation, whereas setting and checking the _locked variable across multiple threads is not necessarily going to be evaluated in a consistent manner across the threads, i.e. checking _locked cannot be relied upon to function as an actual mutex.

I'm not saying that this is actually what is causing the problem, but it definitely can happen, and all the logs people have posted appear to point to some kind of issue with functions that utilise threading (get_readahead, _process_artwork, _process_ratings, etc.)

jurialmunkey commented 1 year ago

In this scenario it is possible for next_readahead to be called multiple times.

@MoojMidge - it's kinda irrelevant though because only the main service thread can spawn those new threads and it sits in a while loop with a forced sleep of a minimum 200ms between function calls (and thus at least 200ms between spawning new threads). If it is taking more than 200ms to set the _locked attribute to true then we have far bigger problems. https://github.com/jurialmunkey/plugin.video.themoviedb.helper/blob/2e7b0dd8f09766d067daafbd819e7c9d856f700c/resources/lib/monitor/service.py#L27-L29

Even assuming another thread is somehow spawned and gets past that _locked check, it still won't do much. The actual ListItemReadAhead class object is an attribute of self belonging to the main service thread. Considering it is only possible to have one self._readahead attribute, it is only possible to have one ListItemReadAhead object. https://github.com/jurialmunkey/plugin.video.themoviedb.helper/blob/2e7b0dd8f09766d067daafbd819e7c9d856f700c/resources/lib/monitor/listitem.py#L271-L272

In the extremely unlikely event we manage to get two of the threads triggering at the same time before _locked can be set, and it manages to get past the other checks, the worst it can do is tell ListItemReadAhead to pass next(self.queue) to the readahead function. https://github.com/jurialmunkey/plugin.video.themoviedb.helper/blob/2e7b0dd8f09766d067daafbd819e7c9d856f700c/resources/lib/monitor/readahead.py#L44

So at very very worst we end up doing readahead for Item2 and Item3 simultaneously. We aren't ever working on the same object because they're queued in a generator that is constructed on init for ListItemReadAhead and we can only ever have one ListItemReadAhead object. https://github.com/jurialmunkey/plugin.video.themoviedb.helper/blob/2e7b0dd8f09766d067daafbd819e7c9d856f700c/resources/lib/monitor/readahead.py#L17

And even if we somehow were (for instance, a list with duplicate items), it still won't matter because the GIL won't ever let two threads access the same object at exactly the same time.
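
The "single generator, single object" argument can be sketched like this (hypothetical minimal classes whose names mirror those quoted above, not the actual addon code):

```python
class ListItemReadAhead:
    def __init__(self, items):
        # The queue is built once at init. Each next() on it yields a
        # distinct item, so even two racing workers never get the same one.
        self.queue = iter(items)

class Service:
    def __init__(self, items):
        # Only one _readahead attribute exists on the service, so only one
        # ListItemReadAhead object (and one queue) can exist at a time.
        self._readahead = ListItemReadAhead(items)

    def next_readahead(self):
        try:
            return next(self._readahead.queue)
        except StopIteration:
            self._readahead = None  # finished; destroy so it can be respawned
            return None
```

At worst, two overlapping calls consume consecutive items from the same queue, i.e. readahead runs for Item2 and Item3 simultaneously, never for the same item twice.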

AND.... it is all moot anyway as we still get crashes even with readahead disabled AND offscreen=False as reported above (it was definitely worth a shot though -- I wasn't exactly hopeful it'd be the solution but I certainly wasn't feeling like it was hopeless either)

All that being said, I do agree that threading in general is definitely a bit funky on Android. I've noticed some weird behaviour in the past where Android appears to have some sort of upper limit on available threads or it crashes. Though I'm talking 100+ threads running, which the service wouldn't get anywhere close to.

tekker commented 1 year ago

Here is an addon that can switch off ReuseLanguageInvoker - it might help with testing, as long as users are prepared to take a backup beforehand. Go to Maintenance->Disable ReuseLanguageInvoker (All, Video only, Check only, Enable, Disable, etc). Probably not a good idea to use Disable ReuseLanguageInvoker (All), but it seems fine with video addons.

(Updated fix for android log printing) script.ezmaintenanceplus.zip

The intermittent bugs that appear with reuselanguageinvoker could be caused by bugs in Kodi itself, as you have suggested, but the setting may also be increasing the likelihood of a bug that rarely occurs without it turned on (as it is optimizing thread resources on low-end devices). Or it may have nothing at all to do with it.

xyzfre commented 1 year ago

Is this from peno64's repo, or is this one different? I used to use this tool in the past to clear thumbnails and cache, and it did a great job creating advanced settings when I was using an Nvidia Shield, but now I'm on a PC.

tekker commented 1 year ago

@xyzfre Originally it probably was. This version has been updated to work with Matrix / Nexus etc., with some additional functionality added by me to assist with TMDB issues (the TMDB Helper crop directory was filling up disk space on Android, eventually crashing Kodi), and I have added some functions specifically for this issue for cross-platform testing of reuselanguageinvoker. So it's a custom hack of an old addon which was pretty useful.

xyzfre commented 1 year ago

Are these crashes with Nexus happening only on Android? I experimented at my job on a Windows 11 Pro PC and it doesn't seem to crash. Just curious.

jurialmunkey commented 1 year ago

@xyzfre on Win 11 Nexus is rock solid. My main loungeroom setup is AH2 on an Intel NUC w/ Win 11 running the latest Nexus nightlies. Practically all my widgets are TMDbHelper (except PVR/live TV). I can't even remember the last time it crashed, that's how stable it has been for me.

xyzfre commented 1 year ago

This is GREAT news @jurialmunkey, thank you for your response. Going to install it now on my PC at home, because I did the testing from the PC at work. Do you recommend a fresh install, or can I just install it right over 19.4?

jurialmunkey commented 1 year ago

This is GREAT news @jurialmunkey, thank you for your response. Going to install it now on my PC at home, because I did the testing from the PC at work. Do you recommend a fresh install, or can I just install it right over 19.4?

@xyzfre - I installed over the top without too many issues. I did have one problem with PVR guide data where IPTV merge was having some issue with a missing database entry. Fix was simple though -- deleted my kodi/userdata/Database/Epg16.db file then restarted Kodi to reimport EPG data and it has worked properly since.

tekker commented 1 year ago

@jurialmunkey Just a note that the machines I am using to test are all flavours of Arm / Linux (i.e. Android FireTV 4K Max / MacOS M1 / LibreELEC on Pi4).

@xyzfre You might have to disable / uninstall IPTV Simple (or other "binary addons") first if you have problems updating (even updating nightly rc1 -> nightly rc2 for example). Then reinstall after updating.

MoojMidge commented 1 year ago

@jurialmunkey - it might all be a red herring, just thought it was worthwhile pointing it out as I had seen similar issues before.

I have recently updated to Nexus RC2 and am having the same type of crashes on CoreElec (AMLogic ARM Linux distribution), that others have reported, so the issue is not Android specific. Nexus on Windows x64 has been rock solid.

The stacktraces I saw were all similar to one of the following two types:

Thread 1 (Thread 0xd56fd2c0 (LWP 14436)):
#0  0xf2c2f4c0 in __dynamic_cast () from /usr/lib/libstdc++.so.6
#1  0x00912888 in CGUIControlLookup::RemoveLookup() ()
#2  0x0110acdc in ?? ()
#3  0x0110afa4 in ?? ()
#4  0x0093617c in CGUIListItem::~CGUIListItem() ()
#5  0x00b3d474 in CFileItem::~CFileItem() ()
#6  0x00b3d4cc in CFileItem::~CFileItem() ()
#7  0x0034a94c in ?? ()

This type of crash appears to indicate an issue with removing/replacing listitems from a container. On my system, so far, this seems to be resolved by the following commits

Thread 1 (Thread 0x983ff2c0 (LWP 26290)):
#0  0x69746162 in ?? ()
#1  0xc28035b4 in ?? () from /usr/lib/python3.11/lib-dynload/_elementtree.so
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

This type of crash appears to indicate an issue with the ElementTree C accelerator. On my system, so far, this seems to be resolved by the following commit, as this is the only use of the ElementTree module that I could see in TMDBHelper. When I get a bit more time I'll see whether there are similar issues when using a different XML parser - either package up lxml or use defusedxml which is already packaged up as a Kodi module.
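
If the _elementtree C accelerator is the suspect, one standard way to rule it out is to block the accelerator before ElementTree is first imported, which forces the pure-Python implementation (a diagnostic sketch of the general technique, not the committed fix; the sample XML is made up):

```python
import sys

# Registering None for the C accelerator makes `import _elementtree` raise
# ImportError, so xml.etree.ElementTree silently falls back to its
# pure-Python classes. This must run before ElementTree is imported
# anywhere else in the process.
sys.modules['_elementtree'] = None

import xml.etree.ElementTree as ET

# Parsing behaves identically, just without the C accelerator.
root = ET.fromstring('<ratings><source name="imdb">8.1</source></ratings>')
name = root.find('source').get('name')   # 'imdb'
value = root.find('source').text         # '8.1'
```

If crashes stop with the accelerator blocked, that points squarely at the C module rather than the addon's parsing logic.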

For anyone else still experiencing crashes with the test addons posted previously, it might be worthwhile deleting the OMDb API key in the TMDBHelper addon settings and seeing if things improve.

jurialmunkey commented 1 year ago

it might all be a red herring, just thought it was worthwhile pointing it out as I had seen similar issues before.

@MoojMidge - Don't take my comments the wrong way. I definitely appreciate the investigation/insight and think it a completely worthwhile discussion to have! At this point, better to explore potential avenues even if it takes us on a bit of a detour.

Whilst I doubt readahead is the cause here, it undoubtedly will exacerbate an underlying issue by increasing the frequency of lookups. Adding more cars to a road with dangerous conditions can only be expected to increase crashes.

I have recently updated to Nexus RC2 and am having the same type of crashes on CoreElec (AMLogic ARM Linux distribution), that others have reported, so the issue is not Android specific. Nexus on Windows x64 has been rock solid.

Yep, already aware that there are crashes on both Android and ELECs. However, I think the underlying cause is separate.

On the ELECs, my guess is module incompatibilities caused by their introduction of Python 3.11 whilst Kodi's bundled version stays at 3.8.5 (Linux versions of Kodi override the bundled version with the system version - a constant source of frustration for a lot of C based modules like NumPy). For Android, on the other hand, I expect the issues are related to Google enforcing that the Android API level be upgraded to 31 for new releases, which has introduced a lot of hardware permission issues on that platform.

This type of crash appears to indicate an issue with the ElementTree C accelerator.

Would not be surprised at all if ELEC switching to Python 3.11 has broken the ElementTree module.

tekker commented 1 year ago

@jurialmunkey It must be very frustrating for you without the ability to test on these platforms and for the amount of time that you need to commit to respond and resolve these issues.

If you are familiar with Tailscale, I can give you access to a dedicated Pi4 / LibreELEC machine, except that at the moment I have no way of remoting into the display (to allow navigation testing) - at best I have a system where I capture continual screenshots in Kodi and forward them over a terminal (one that allows image display) running at about 0.5fps, which I use to diagnose / update systems remotely. I have a better system in place on Android because I can use ADB to share the screen (same approach with Tailscale to allow remote TCP access via ssh etc). Not sure if you're familiar with Tailscale - it leverages the WireGuard VPN approach under the bonnet and is a nice way to quickly connect a remote machine to an overlay network. If you are in Australia / Victoria I can loan you a Pi4 / LE setup, just let me know.

jurialmunkey commented 1 year ago

@tekker @MoojMidge - Okay, fairly confident crashes are related to Artwork and Ratings threads, so I'm going to take them out of the equation. Then once I get confirmation from you both that we aren't getting crashes, we'll reintroduce them individually and see if we can nail down an exact issue.

Can you test this version: https://github.com/jurialmunkey/plugin.video.themoviedb.helper/archive/refs/heads/log_testing_nexus_noartratings.zip

Once you're confident you aren't getting crashes, you can re-enable ratings/artwork individually by uncommenting the relevant sections in this commit (and make sure to restart Kodi afterwards to get the service to reload) https://github.com/jurialmunkey/plugin.video.themoviedb.helper/commit/422b41ef0363788f9c6204dc4fa56151b791722c

I've also changed the code a bit to eliminate concerns about offscreen/onscreen status of the ListItem. This version will force the threads to join first before adding the ListItem to the container so it won't have any changes until after it is added.

I'm expecting crashes to restart once Artwork/Ratings code is uncommented but there is a long shot chance it might not considering what MoojMidge was finding above with the offscreen setting. Will be interesting to see what happens.
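
The join-before-add ordering described above can be sketched in plain Python (hypothetical helper names and placeholder work; the real code operates on Kodi ListItems rather than dicts):

```python
from threading import Thread

def build_item(label):
    item = {'label': label}
    results = {}

    def process_artwork():
        results['artwork'] = {'poster': 'poster.jpg'}  # placeholder work

    def process_ratings():
        results['ratings'] = {'tmdb': 8.1}             # placeholder work

    threads = [Thread(target=process_artwork), Thread(target=process_ratings)]
    for t in threads:
        t.start()
    # Join BEFORE the item is published: the container only ever sees a
    # fully built item, so nothing mutates it after it becomes visible.
    for t in threads:
        t.join()
    item.update(results)
    return item  # only now would it be added to the container

item = build_item('Example Movie')
```

The trade-off is latency: the item cannot appear until the slowest worker finishes, which is why the offscreen/clone approach discussed later tries to publish early instead.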

tekker commented 1 year ago

@jurialmunkey Ok, just testing now. It did crash after the first restart of Kodi post addon update, but that might be a different issue. Actually it's looking really good and fast re. art, but this might be because I have updated the nightly / uninstalled / reinstalled the addon. It is faster for sure.

Note: reuselanguageinvoker is disabled / modified to set to false on all addons.

I'll explain my pattern of use that has been causing this issue:

Will continue testing. The readahead approach / preloading of images is significantly faster on this machine with only a couple of lags (background processes maybe)

At present, I am displaying artwork and the TMDb rating (I still had it turned on). I'm not sure what is not being displayed with the current commit; I will take a look through the code.

All ratings enabled -> no further crashes. Posters / fanart etc -> no issues.

Testing on LibreELEC Pi4 Nightly (25/12) have been unable to crash the system with current usage pattern at this time.

Moving on now to Android testing (Android has always been less stable than LE/Pi4 generally for Nexus)

After Android testing I will begin uncommenting the service monitor code. I was expecting Ratings / Art to not be displayed so I was a bit surprised.

@jurialmunkey Is it possible there is some redundant code / or code that is being re-executed due to threading causing a race condition?

jurialmunkey commented 1 year ago

@tekker - Ratings other than TMDb and Kodi should not be displayed ever with this version (i.e. no Metacritic, RT, or Trakt). Artwork might still be displayed but not cropped logos or blurred fanart (i.e. no image processing).

If you are getting ratings then you possibly haven't installed this version (potentially updated from repo instead of zip). Also note that you must restart Kodi to force the service to restart after installing a new version.

tekker commented 1 year ago

@jurialmunkey OK, that's what I thought too... HOWEVER... I am actually moving on now to UNCOMMENTING listitem.py - which should not be possible? The code is there, and I have uncommented the art section and am testing now (also deleted the .pyc cache file):

eterna:~ # sed -n '208,218p' /storage/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/listitem.py
        if process_artwork:
             t_artwork = Thread(target=_process_artwork)
             t_artwork.start()

        t_ratings = None
        # if process_ratings:
        #     t_ratings = Thread(target=_process_ratings)
        #     t_ratings.start()

        t_artwork.join() if t_artwork else None
        t_ratings.join() if t_ratings else None

-- Test:

eterna:~ # sed -n '208,218p' /storage/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/listitem.py
        if process_artwork:
             kodi_log(f'SM create process_artwork thread', 1)
             t_artwork = Thread(target=_process_artwork)
             t_artwork.start()

        t_ratings = None
        # if process_ratings:
        #     t_ratings = Thread(target=_process_ratings)
        #     t_ratings.start()

        t_artwork.join() if t_artwork else None
2022-12-26 16:51:24.474 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: setup_listitem for: 10025 99950 Container(513).ListItem({}).
2022-12-26 16:51:24.474 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: setup_current_item
2022-12-26 16:51:24.475 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: get_itemdetails
2022-12-26 16:51:24.507 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: get_person_stats
2022-12-26 16:51:24.507 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: on_finalise_listcontainer
2022-12-26 16:51:24.512 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM create process_artwork thread
2022-12-26 16:51:24.513 T:11139    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: on_finalise _process_artwork get_builtartwork
2022-12-26 16:51:24.513 T:11139    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: on_finalise _process_artwork get_image_manipulations
2022-12-26 16:51:24.528 T:10598   debug <general>: PushCecKeypress - received key a8 duration 0
2022-12-26 16:51:24.540 T:10550   debug <general>: HandleKey: 168 (0xa8, obc87) pressed, window 10025, action is Right
2022-12-26 16:51:24.541 T:11139    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: on_finalise _process_artwork setArt: Start
2022-12-26 16:51:24.541 T:11139    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: on_finalise _process_artwork setArt: Done
2022-12-26 16:51:24.541 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: on_finalise is_same_item
2022-12-26 16:51:24.541 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: on_exit add_item_listcontainer
2022-12-26 16:51:24.541 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: add_item_listcontainer: start
2022-12-26 16:51:24.542 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: add_item_listcontainer: complete
2022-12-26 16:51:24.687 T:10598   debug <general>: PushCecKeypress - received key a8 duration 159
2022-12-26 16:51:24.758 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: setup_listitem for: 10025 99950 Container(513).ListItem({}).
2022-12-26 16:51:24.758 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: setup_current_item
2022-12-26 16:51:24.758 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: get_itemdetails
2022-12-26 16:51:24.790 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: get_person_stats
2022-12-26 16:51:24.790 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: on_finalise_listcontainer
2022-12-26 16:51:24.796 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM create process_artwork thread
2022-12-26 16:51:24.797 T:11140    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: on_finalise _process_artwork get_builtartwork
2022-12-26 16:51:24.797 T:11140    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: on_finalise _process_artwork get_image_manipulations
2022-12-26 16:51:24.824 T:11140    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: on_finalise _process_artwork setArt: Start
2022-12-26 16:51:24.824 T:11140    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: on_finalise _process_artwork setArt: Done
2022-12-26 16:51:24.825 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: on_finalise add_item_listcontainer: STARTED
2022-12-26 16:51:24.825 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: add_item_listcontainer: start
2022-12-26 16:51:24.825 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: add_item_listcontainer: complete
2022-12-26 16:51:24.825 T:10578    info <general>: [plugin.video.themoviedb.helper]
                                                   SM: on_finalise add_item_listcontainer: COMPLETE

You can see that the line: SM create process_artwork thread is being logged and the code is being executed.

jurialmunkey commented 1 year ago

@tekker - Yeah if that code's there then it's installed. Might be the old service still running. Need to restart Kodi to force the service to restart after installing.

With that code commented out, you shouldn't get ratings other than TMDb and Kodi. You will however still get TMDb because it is mapped earlier when details from TMDb are retrieved.

tekker commented 1 year ago

It's definitely executing the new code - but not picking up new ratings. Maybe they were cached? I will clear the caches...

Note: added a debug line " kodi_log(f'SM create process_artwork thread', 1)" before t_artwork = Thread(target=_process_artwork)

Now testing with Artwork DISABLED and Ratings ENABLED (added kodi_log debug lines as well).

First crash now after navigating down to check Fanart etc from TV Show view:

eterna:~ # systemctl stop kodi
eterna:~ # rm /storage/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/__pycache__/listitem.cpython-311.opt-1.pyc  ^C
eterna:~ # vi /storage/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/listitem.py
eterna:~ # sed -n '208,218p' /storage/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/listitem.py
        #if process_artwork:
        #     kodi_log(f'SM TEKKER create process_artwork thread', 1)
        #     t_artwork = Thread(target=_process_artwork)
        #     t_artwork.start()

        t_ratings = None
        if process_ratings:
             kodi_log(f'SM TEKKER create process_ratings thread', 1)
             t_ratings = Thread(target=_process_ratings)
             t_ratings.start()
eterna:~ # systemctl start kodi

kodi_crashlog_20221226170604.log

Have now deleted the Item Cache only. Cannot reproduce the bug in the same area with the same code as above. NOTE: Now crashed again, similar pattern as before. Rebooted Kodi and it crashed again just after restart.

kodi_crashlog_20221226171447.log kodi_crashlog_20221226171509.log

Continual pattern of crashes now - in reboot loop, kodi has gone to safe mode in LE. Will rollback changes to the code from current commit and retest:

I noticed that the Reboot cycle crash after clearing was erroring in the same(ish) spot:

kodi_crashlog_20221226172119.log

kodi_crashlog_20221226172039.log kodi_crashlog_20221226172059.log

        t_artwork = None
        # if process_artwork:
        #     t_artwork = Thread(target=_process_artwork)
        #     t_artwork.start()

        t_ratings = None
        # if process_ratings:
        #     t_ratings = Thread(target=_process_ratings)
        #     t_ratings.start()

Rollback to current noartnoratings.zip - Kodi working OK.

@jurialmunkey Would it help if I were deleting the Cache(s) with this testing?

I have to take a break for testing for a bit. I can still see all of the Fanart etc with the current process_artwork. AH2 looks the same as usual (even with items I have never looked at before). Extra ratings not visible though. Assume this is expected...

jurialmunkey commented 1 year ago

I have to take a break for testing for a bit. I can still see all of the Fanart etc with the current process_artwork. AH2 looks the same as usual (even with items I have never looked at before). Extra ratings not visible though. Assume this is expected...

No worries. Thanks for testing. I think the issues with re-enabling ratings adds a lot of weight to MoojMidge's theory about some issue with ElementTree on the ELECs. Will be interesting if enabling only Artwork also causes issues (probably also a PIL issue if that's the case).

Been trying to get LibreELEC installed on a Virtual Machine but so far updating to the nightlies is unsuccessful (I can get the Matrix version installed in VirtualBox but after updating to Nexus it hangs on the splash). I've got an unused ASUS T100 lying around so I might try to get LibreELEC installed on that. Will be interesting to see if it is only Arm devices w/ CoreELEC or if I can also reproduce on an Atom device w/ LibreELEC.

tekker commented 1 year ago

Ok I'm retesting with Artwork Only enabled now (ratings section commented out / no additional debug line before t_artwork = Thread() line). Will post logs if / when a crash occurs.

Also thanks again for working on this bug. It's important to me especially since I built an install for a good friend, and I can now set him up at least with a working system in a couple of days - I had his box now for about 5 weeks while trying to workaround these crashes... Alternative was going to be setup with base Estuary or a crappy skin, even if ratings aren't 100% correct ATM.

I haven't been able to reproduce the error at all on LibreELEC (fingers crossed) and the difference in stability is a world apart from a few days / weeks / months ago.

I've switched over to Android FireTV 4K MAX testing now with Artwork enabled and Ratings disabled. So far, no crashes and very stable. Will use Android machine for now for the night's Kodi watching, and upload logs with any errors.

Results from testing on Android over the evening indicate marked increase in stability with current Service Monitor changes / commits (no crashes except a post-update crash). Very stable with most recent armv7 Kodi Nightly apk while browsing movies / tv shows through AH2 served primarily via Fen and TMDBHelper using AutoWidgets for multiple widget sections per home menu item.

I have yet to test the other usage pattern causing very common Nexus crashes which is browsing EPG / PVR quickly hoping this set of commits has squashed this bug - looks good so far though :)

jurialmunkey commented 1 year ago

@tekker - Ok this sounds really promising. Let's confirm then that it is actually ElementTree causing the problem.

I'll get you to re-enable the ratings code again, then we'll disable the ElementTree parsing and see if it crashes.

In resources/lib/api/request.py comment out these five lines in translate_xml() and replace with a return {} https://github.com/jurialmunkey/plugin.video.themoviedb.helper/blob/a9d4abc5af1043f99ce9080af52a8f9d0b0a63d9/resources/lib/api/request.py#L31-L35

i.e. so it becomes:

# if request:
#     import xml.etree.ElementTree as ET
#     request = ET.fromstring(request.content)
#     request = dictify(request)
# return request
return {}

MoojMidge commented 1 year ago

@jurialmunkey - A few observations as I can't properly test your test versions (TV is in near constant use over the holidays):

tekker commented 1 year ago

Ok, I have applied the change to request.py, restarted Kodi, and all loaded correctly (stop Kodi, delete the .pyc file, apply the change, start Kodi).
Then I applied the change to enable ratings in listitem.py. Kodi restarted twice, but I think that is due to some type of caching. No ratings except cached Kodi star ratings are displayed (as expected?) on TV show individual listitems; ratings are displayed in fullscreen for a new show not yet navigated to.

@MoojMidge Do you know any way that I can remotely connect to the Kodi instance (gdb) running in LE? I have been able to do LE builds in Parallels VMs on macOS (maybe enable RelWithDebInfo if needed), but I'm not sure how to connect to the running instance from a remote machine with VSCode etc.

I cannot reproduce the crash condition; it all looks good for now. LibreELEC Pi4 Nightly 251222 (same build).

jurialmunkey commented 1 year ago

@MoojMidge

A few observations as I can't properly test your test versions (TV is in near constant use over the holidays)

Haha no worries! I'm in the same boat at the moment with my loungeroom setup. Can't mess with anything while the cricket is on!

Setting offscreen=False introduces noticeable jankiness to skin fluidity when scrolling through listings and between widgets

Yeah I noticed the jankiness too which is why I reverted the change in the main branch.

I'm trying out a slightly different approach in the nexus_no_offscreen branch which seems promising.

That version adds an option: TMDbHelper Settings > Other > Rebuild service listitems offscreen

That will force a new version of the listitem to be rebuilt offscreen for the artwork/ratings threads to work on in the background. That way the original listitem can be added to the container early, but we don't need to use offscreen=False, because the artwork/ratings are added to a cloned ListItem which is then (re)added to the container once those threads finish working.

* Frequent crashes when enabling ElementTree parsing, similar to what @tekker was experiencing

Yeah fairly certain ElementTree is the main cause of the issue on the ELECs. It was reported in a forum thread for another addon https://forum.libreelec.tv/thread/26400-is-it-possible-to-downgrade-python-version-without-use-a-old-image-libreelec/?postID=174964#post174964

* No crashes so far when disabling loading of the C module accelerator ([MoojMidge@8648220](https://github.com/MoojMidge/plugin.video.themoviedb.helper/commit/864822065ed78157ec6c2d352140f2ec74a337cc)). Note this also includes a bunch of other test changes that I was trying out, but only the blocking of _elementtree is actually achieving anything different from the original code.

This is great! Though I'd be hesitant to add this type of workaround to the main branch because it will create a bit of an ostrich effect that removes the motivation for users to report the underlying issue to LibreELEC devs. Not keen to give users sand to stick their heads in because they're allergic to disabling their banned addons and contributing by reporting issues properly.

ElementTree is part of the Python standard library and is particularly useful in the context of Kodi due to skins using XML. If LibreELEC is insistent on updating to Python 3.11 then this module is a fairly essential one to ensure works correctly.

jurialmunkey commented 1 year ago

@tekker - Please test v5.0.38 (standard release) and confirm that the below settings prevent the crashes: https://github.com/jurialmunkey/plugin.video.themoviedb.helper/releases/tag/v5.0.38

  1. Enable the setting: TMDbHelper Settings > Other > Rebuild service listitems offscreen
  2. Delete your OMDb key: TMDbHelper Settings > API Keys > OMDb API key
  3. Hit "OK" to save your settings and then restart Kodi for the settings to be applied.

Obviously you won't get new OMDb based ratings (RT, Metacritic, IMDb) but you will still get the others (Kodi, Trakt, TMDb). You will still get artwork manipulations (blur/crop) and readahead.

Based on the testing above this should prevent the crashes on the ELECs until the underlying platform issue with ElementTree can be resolved. Only thing I'm not certain on is the readahead but I think it should be fine. If it is causing issues then I can add an additional setting to disable it.

MoojMidge commented 1 year ago

@tekker I don't use VSCode, but you should be able to SSH into your LE machine and attach gdb to the running kodi.bin process, or stop Kodi and launch it directly in gdb. I'd imagine VSCode has some facility to connect to a remote gdb target (a quick Google search seems to confirm this is possible through the use of various debugging addons).

@jurialmunkey Can't test at the moment but nexus_no_offscreen looks like a good approach. Appreciate your efforts.

I would agree that blocking _elementtree from loading is not the way to get a proper solution, but deleting the OMDb key is also just masking the problem, albeit one that doesn't have any maintenance burden for you.

Blocking _elementtree is something I can handle on my own installations, which suits me fine until the root cause of the problem is resolved (or until I get some time to dust off my rusty C knowledge and investigate further). Either way the code is there if you, or anyone else, wants to use it.
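For anyone wanting to try the same thing, a minimal sketch of the idea (not the actual commit linked above): poisoning `sys.modules` before `xml.etree.ElementTree` is first imported makes its `from _elementtree import *` raise ImportError, which the module catches, falling back to the pure-Python implementation.

```python
import sys

# Must run BEFORE xml.etree.ElementTree is first imported in this process.
# Setting a sys.modules entry to None makes any import of that name raise
# ImportError, and ElementTree.py catches that and keeps its Python classes.
if 'xml.etree.ElementTree' not in sys.modules:
    sys.modules['_elementtree'] = None

import xml.etree.ElementTree as ET

# Parsing still works (just slower) via the pure-Python implementation.
root = ET.fromstring('<root response="True"><movie title="Alien"/></root>')
print(root.find('movie').get('title'))  # Alien
```

In an addon this would have to run very early (before anything else imports ElementTree), which is presumably why the linked commit hooks the import machinery rather than relying on ordering.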

tekker commented 1 year ago

@jurialmunkey Okay testing with current changes applied as requested.

One crash after first installing the new version with the settings applied, then restarted - all good so far (Android current nightly, as that's what I was watching on).

All good on LE4 251222 nightly so far... will update this comment as testing proceeds. Very stable according to the usage pattern identified (could not make it crash by navigating very fast with a key held down, going back and forth quickly, etc).

Appears* to have improved navigation speed and display of images a fair bit. New Trakt / IMDB ratings don't seem to be loaded as quickly as the images (may be subjective / relative).

jurialmunkey commented 1 year ago

@MoojMidge

I would agree that blocking _elementtree from loading is not the way to get a proper solution, but deleting the OMDb key is also just masking the problem, albeit one that doesn't have any maintenance burden for you.

Definitely agree that deleting the OMDb key also masks the problem. It's more that your approach being better in terms of completely restoring functionality also makes it better at completely masking the issue 😉

At least by deleting the OMDb key there is reduced functionality (no RT, Metacritic, IMDb) which should give users motivation to work with ELEC devs to get the underlying problem fixed whilst also allowing for normal usage in the interim.

On a more personal note, I'm so extremely tired of the forum nonsense about the problem being in my code, so I don't want to legitimise those crap takes with a workaround that magically "fixes" the crash but in reality is only obscuring the real issue.

That being said, I really appreciate having the alternative option and the effort that went into it! I think the best approach will be to revisit once there is a "stable" ELEC Nexus release. If it appears that it will be waiting on an upstream Python fix then implementing your workaround as a longer term band-aid will probably be the best approach.

tekker commented 1 year ago

@jurialmunkey I think there may have been a bug/unforeseen issue introduced somewhere with the v5.0.38 changes regarding the ClearLogo show titles (TMDbHelper Season view -> Combination (episodes list)). The logo lags, switches back to the previous show, then changes size, then corrects itself. Similar issue on LE; it looks like it's related to re-adding ratings. ClearLogo loading ASAP definitely looks the cleanest, so if there is a compromise I would consider disabling ratings to restore speed... Maybe ignore this comment for now, just for reference.

Sorry jurial, at the moment this actually does have a big impact on usability (up to 3-4 seconds to display the updated logo when changing TV show). I will keep testing.

@jurialmunkey @MoojMidge - Would you be able to provide me with a test case so that I can verify the ElementTree parsing issue is only contained within LibreELEC? I have Arm64 macOS and Armv7 Android FireTV running on a few screens in parallel here; it might help to figure out where to send the issue.

jurialmunkey commented 1 year ago

@tekker - Yep that's the cost of enabling the Rebuild service listitems offscreen setting.

With that setting disabled (default behaviour), TMDbHelper builds the base ListItem first then artwork and ratings are processed in separate threads to avoid delays. This is super smooth because the threads work directly on the existing ListItem and add the extra artwork/ratings properties immediately once available.

Technically, however, we're meant to add a special flag to lock the GUI while adding properties to an "onscreen" ListItem. TMDbHelper doesn't use the GUI lock because it makes animations super janky and navigation generally unpleasant.

Enabling that "rebuild" setting is essentially a "safe" mode. It forces TMDbHelper to rebuild the ListItem to provide a clone that the artwork/rating threads can work on separately so they aren't working on the "live" version. Downside is that both threads must complete before the clone can be (re)added and made available (so a delay in one thread will hold up the other one).

It's not the main issue causing the crashes here (that'd be ElementTree for OMDb ratings), so it is definitely worth experimenting with it disabled on account of the far better experience. The majority of setups will have no or very few issues with it disabled (it is far less problematic than reuselanguageinvoker), but it might nonetheless occasionally cause issues on some platforms.

tekker commented 1 year ago

@jurialmunkey thanks for the very clear explanation. I'm sure I did read somewhere in a recent-ish commit in Kodi that explicit gui locks are ignored for offscreen items?

Are the ElementTree crashes on navigation solely isolated to parsing OMDb ratings? I wonder if using a different XML parsing library altogether, instead of the inbuilt one, could help shed further light on what is going on. If there is an RLI-related issue, I'm also wondering if the module importing approach of ElementTree itself could be contributing (I completely understand why you don't want to go down the RLI rabbit hole at the moment). All in all, with the information in this bug report and your current version, we can hopefully get to the other side with a full resolution and understanding/documentation of the root causes for further development. I'm pretty new to Python development, so take these comments with a grain of salt.

I am also very interested in reuselanguageinvoker's effects, as it seems this was exacerbating the likelihood of hitting this bug; at some point I will test these settings in combination with switching RLI back on in the addons. Until then I can work around it by: a) running a service at startup to force-disable RLI in all addons, or b) disabling addon auto-updates.

On Android / LE4 I think there is a side effect of the Rebuild Listitems Offscreen approach of combining artwork and ratings - it looks like it is (strangely) having an impact when displaying artwork (ClearLogo); for example, the FEN logo is taking much longer to display when opening the addon.

Still no crashes on any of my test machines all evening since installing latest version of TMDBHelper though.

Take it easy mate :)

MoojMidge commented 1 year ago

@jurialmunkey - I am not a programmer, certainly not a professional one, but I know how to program in a few languages, at least enough to appreciate the quality and complexity of code I see. I learnt Python solely because of issues I was having with Kodi, and did so by looking at the code of other people's addons. No codebase is perfect and there will always be bugs that slip through, but your work on AH2 and TMDBHelper is impressive and really great learning material.

I suspect that people complaining about your code would not have any real idea about what they are talking about, let alone be able to recognise the time and effort that has gone into developing AH2 and TMDBHelper. I certainly doubt they would have any understanding about how frustrating dealing with things outside of your control, like a potentially platform specific standard library issue, can be.

As an aside, the bump to Python 3.11 came from Kodi, which is why the crashes are occurring with Nexus on Android. LE and other similar Linux distributions are just following suit. Packages for other Linux distributions may use a different (system) Python version, but I think this problem is likely to affect most (all?) platforms that Kodi runs on, except Windows which is still on Python 3.8.

@tekker - A test case is going to be difficult to develop without investigating further (building LE/CE with DEBUG=Python3,kodi). The issue with _elementtree does not occur consistently, but the stack traces indicate that a null pointer is being dereferenced at some point. The issue could be something that changed in _elementtree, or it could be due to some race condition triggered by changes to gcmodule, or the import module, or when system load is high, or somewhere else due to something else...

There are a bunch of recent Python fixes, including changes to _elementtree, that attempt to prevent race conditions associated with variable reference counting, so this issue may already be fixed in a future maintenance release of Python 3.11.

I wouldn't try to draw too many conclusions about the use of reuselanguageinvoker, or about the actual XML parsing at this point. There are further workarounds that could be implemented, including using another XML parser or disabling _elementtree, but like jurialmunkey indicated, a proper fix to the crash in CPython is the best outcome.

In the meantime OMDB ratings will need to be disabled, but as jurialmunkey suggested, try disabling Rebuild service listitems offscreen and see how you go.

tekker commented 1 year ago

@MoojMidge Thanks mate. I think running up a custom LE/Kodi build is the best approach for me; I will try your suggested argument of DEBUG=Python3,kodi. I'm not suggesting workarounds per se for _elementtree etc, more so reducing confounding issues or multiple overlapping bugs while testing until these Python 3.11+ issues are stabilised in Kodi. Your point regarding assumptions/conclusions w.r.t. XML/RLI and the effect of CPython in the code is well taken. I agree that those complaining about AH2/TMDB overlook just how much the code is optimising for performance while necessarily competing with uncontrolled code from other areas, and his level of commitment to this project for the benefit of the Kodi user community.

jurialmunkey commented 1 year ago

@tekker

I'm sure I did read somewhere in a recent-ish commit in Kodi that explicit gui locks are ignored for offscreen items?

Yep, those commits added the offscreen=True flag which is what TMDbHelper uses to ignore the GUI lock. Previously Kodi hardcoded offscreen=False which forced the GUI to freeze as a safety measure while a ListItem was being modified onscreen.

Since we technically aren't meant to edit the ListItem after it is added onscreen when using offscreen=True, I added that workaround setting to work on a clone offscreen instead.

Are the elementtree crashes on navigation solely isolated to parsing OMDB ratings?

Other way around. ElementTree is isolated to parsing OMDb because OMDb is the only API providing XML formatted metadata.

The other APIs provide JSON. If they also used XML then I'd expect to see crashes elsewhere.
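For illustration, the kind of XML-to-dict translation involved looks roughly like this. The payload, field names, and the helper itself are hypothetical stand-ins, not TMDbHelper's actual translate_xml or the real OMDb response shape:

```python
import xml.etree.ElementTree as ET

# Hypothetical OMDb-style XML payload; the real response has more fields.
OMDB_XML = (
    '<root response="True">'
    '<movie title="Alien" year="1979" imdbRating="8.5" metascore="89"/>'
    '</root>'
)

def translate_xml(content):
    """Flatten a single-item OMDb-style XML response into a plain dict."""
    root = ET.fromstring(content)  # the ET.fromstring call is where the crashes occur
    movie = root.find('movie')
    return dict(movie.attrib) if movie is not None else {}

ratings = translate_xml(OMDB_XML)
print(ratings['imdbRating'])  # 8.5
```

The JSON-providing APIs never touch ElementTree, which is consistent with the crashes only appearing on the OMDb code path.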

I wonder if using a different XML parsing library altogether instead of inbuilt could help to shed further light on what is going on

The advantage of standard library modules is that they are bundled with Python. Kodi doesn't have anything like PIP to install additional packages, so additional modules need to be packaged as Kodi addons.

Plus, more importantly, there's an expectation that standard library modules work out of the box.

I am very interested as well in the reuselanguageinvoker's effects as it seems this was exacerbating the likelihood of hitting this bug, at some point I will test with these settings in combination with switching RLI back on in the addons.

Like MoojMidge said, not really worth speculating about. I only mentioned RLI because it is a potential confounding variable for a skin like AH2 which has widgets (keep in mind "combined" season/episode views use widgets to display the episodes component, so the problem with RLI isn't only limited to the home screen or search window).

I don't have any reason to think the two issues are related.

On Android / LE4 I think there is a side-effect of the Rebuild Listitems Offscreen approach of combining Artwork and Ratings - it looks like it is having an impact (strangely) when displaying artwork (ClearLogo) for example for the FEN logo when opening the addon, its taking much longer to display.

Yes, like I mentioned above, that's the downside of that setting. A delay in either thread will delay both. Additionally, there is a further delay while Kodi (re)adds the cloned item back to the hidden onscreen container.

With the setting off, TMDbHelper is working directly on the "live" version of the item already in the hidden onscreen container so there is no delay.

Also note the artwork thread only does artwork manipulations, so you will only see delays on cropped logos and blurred fanart. All other artwork is already added to the original ListItem before the artwork manipulation thread even starts.

jurialmunkey commented 1 year ago

@MoojMidge

I am not a programmer, certainly not a professional one, but I know how to program in a few languages, at least enough to appreciate the quality and complexity of code I see. I learnt Python solely because of issues I was having with Kodi, and did so by looking at the code of other people's addons. No codebase is perfect and there will always be bugs that slip through, but your work on AH2 and TMDBHelper is impressive and really great learning material.

I'm only a hobbyist too. Had some basic functional Python skills beforehand but pretty similar path of sitting myself down to learn some more advanced Python because I ended up getting frustrated with what I could do purely as a skinner.

I did have some previous professional experience as an integrations developer for a few years but that was mostly limited to XML and YAML with a little bit of functional Perl for some simple backend integrations. Definitely no object oriented experience at this scale so it's nice to hear such nice words about my code!

There's definitely a progression where there's a lot of clunky parts from earlier on that I cringe at and wish I did differently. Though I think that's normal at any level and at some point you have to say "good enough".

As an aside, the bump to Python 3.11 came from Kodi, which is why the crashes are occurring with Nexus on Android. LE and other similar Linux distributions are just following suit. Packages for other Linux distributions may use a different (system) Python version, but I think this problem is likely to affect most (all?) platforms that Kodi runs on, except Windows which is still on Python 3.8.

Actually you're completely right. I missed that PR on master where it was bumped to 3.11 on 1st of November.

Not sure why Windows is staying at 3.8.15 in that case. I'd always thought the Windows version was pegged to the internal version and had just assumed LibreELEC was making an early jump to iron out issues in advance.

In a way that's better news as the Android crashes are probably related to the same ElementTree issue rather than being a separate issue related to hardware permissions.

In that case, I'm going to try to get LibreELEC installed on this old ASUS T100 laptop I have lying around to see if I can replicate and make a proper bug report.

SamDW96 commented 1 year ago

In case it helps out, I can report that my crashing issues seem to be fixed on Android too, using the latest Nexus RC2 with AH2 and TMDBhelper on the latest versions, when using the workaround here:

@tekker - Please test v5.0.38 (standard release) and confirm that the below settings prevent the crashes: https://github.com/jurialmunkey/plugin.video.themoviedb.helper/releases/tag/v5.0.38

  1. Enable the setting: TMDbHelper Settings > Other > Rebuild service listitems offscreen
  2. Delete your OMDb key: TMDbHelper Settings > API Keys > OMDb API key
  3. Hit "OK" to save your settings and then restart Kodi for the settings to be applied.

I am using an NVIDIA Shield Pro (2019) and have been experiencing very frequent crashes for about two months - which could be mitigated by rolling back to a build from Oct 15th (according to comments in various places), but presumably anything up until Oct 31st.

Glad to see this getting figured out slowly. Once the actual bug gets reported, let's hope for a quick fix. I do miss the additional ratings that I've so gotten used to from this wonderful skin.

Thank you so much to everyone helping to figure this out.

AakashC2020 commented 1 year ago

In case it helps out, I can report that my crashing issues seem to be fixed on Android too, using the latest Nexus RC2 with AH2 and TMDBhelper on the latest versions, when using the workaround here:

@tekker - Please test v5.0.38 (standard release) and confirm that the below settings prevent the crashes: https://github.com/jurialmunkey/plugin.video.themoviedb.helper/releases/tag/v5.0.38

  1. Enable the setting: TMDbHelper Settings > Other > Rebuild service listitems offscreen
  2. Delete your OMDb key: TMDbHelper Settings > API Keys > OMDb API key
  3. Hit "OK" to save your settings and then restart Kodi for the settings to be applied.


Hi @jurialmunkey,

This resolved the crashing issues for me too on my NVIDIA Shield TV Pro 2019, but I really miss the OMDb ratings too. Hope a fix for this comes soon.

Thanks and Regards, ShibajiCh.

hyzor commented 1 year ago

I'm having a hard time understanding what the issue is. As already said in this thread, the issue boils down to ElementTree being the culprit. I've experimented with different Python versions (3.9, 3.10 and 3.11), and the only one not crashing is 3.9. I'm building the exact same Kodi Nexus RC2 version on Ubuntu 22.04, changing only the Python library used by CMake. So it must be an issue related to Python?

I'm testing on a minimal setup with Arctic Horizon 2, TMDBHelper (latest with some additional debug lines) and a few widgets. Crash is reproducible by navigating to random episodes that will trigger translate_xml in request.py (with OMDb enabled of course).

So then I started to use Web-PDB and inserted a breakpoint right before ET.fromstring(request.content) (which is the cause of the crash), but the stepping process is a bit painful. It looks something like this:

> /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/request.py(37)translate_xml()
-> request = ET.fromstring(request.content)
(Pdb) > /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/request.py(37)translate_xml()
-> request = ET.fromstring(request.content)
(Pdb) c
> /usr/lib/python3.11/xml/etree/ElementTree.py(1337)XML()
-> parser = XMLParser(target=TreeBuilder())
(Pdb) s
--Call--
> /usr/lib/python3/dist-packages/requests/models.py(818)content()
-> @property
(Pdb) n
> /usr/lib/python3/dist-packages/requests/models.py(822)content()
-> if self._content is False:
(Pdb) n
> /usr/lib/python3/dist-packages/requests/models.py(833)content()
-> self._content_consumed = True
(Pdb) 

Then it ultimately crashes.

Here's also a full Kodi log of a crash https://paste.kodi.tv/giqavayuyu.kodi, but these logs aren't really helpful since the issue seems to be within the Python core.

I'm not making much sense of this; anyone else can try their luck with Kodi's Web-PDB (https://kodi.wiki/view/HOW-TO:Debug_Python_Scripts_with_Web-PDB) to find the root cause. Just make sure Kodi is using Python >= 3.10.

hyzor commented 1 year ago

Here's a full stack trace of the crash.

(Pdb) w
  /usr/lib/python3.11/threading.py(995)_bootstrap()
-> self._bootstrap_inner()
  /usr/lib/python3.11/threading.py(1038)_bootstrap_inner()
-> self.run()
  /usr/lib/python3.11/threading.py(975)run()
-> self._target(*self._args, **self._kwargs)
  /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/listitem.py(309)_next_readahead()
-> if self._readahead.next_readahead() != READAHEAD_CHANGED:
  /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/readahead.py(52)next_readahead()
-> status = self._next_readahead()
  /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/readahead.py(44)_next_readahead()
-> return self._get_readahead(next(self._queue))
  /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/readahead.py(31)_get_readahead()
-> _item.get_all_ratings()
  /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/itemdetails.py(133)get_all_ratings()
-> return self._parent.get_all_ratings(_listitem, self._itemdetails.tmdb_type, self._itemdetails.tmdb_id, self._season, self._episode) or {}
  /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/common.py(158)get_all_ratings()
-> item = self.get_omdb_ratings(item)
  /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/common.py(127)get_omdb_ratings()
-> return self.omdb_api.get_item_ratings(item, cache_only=cache_only)
  /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/omdb/api.py(54)get_item_ratings()
-> ratings = self.get_ratings_awards(imdb_id=imdb_id, cache_only=cache_only)
  /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/omdb/api.py(31)get_ratings_awards()
-> request = self.get_request_item(imdb_id=imdb_id, title=title, year=year, cache_only=cache_only)
  /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/omdb/api.py(23)get_request_item()
-> request = self.get_request_lc(is_xml=True, cache_only=cache_only, r='xml', **kwparams)
  /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/request.py(218)get_request_lc()
-> return self.get_request(*args, **kwargs)
  /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/request.py(228)get_request()
-> return self._cache.use_cache(
  /home/hyzor/.kodi/addons/script.module.tmdbhelper/resources/modules/tmdbhelper/logger.py(51)wrapper()
-> return func(*args, **kwargs)
  /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/files/bcache.py(71)use_cache()
-> my_object = func(*args, **kwargs)
  /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/request.py(78)get_api_request_json()
-> response = translate_xml(request) if is_xml else request.json()
  /home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/request.py(37)translate_xml()
-> request = ET.fromstring(request.content)
> /usr/lib/python3.11/xml/etree/ElementTree.py(1337)XML()
-> parser = XMLParser(target=TreeBuilder())

MoojMidge commented 1 year ago

I'm having a hard time understanding what the issue is. As already said in this thread the issue boils down to ElementTree being the culprit. I've experimented with different Python versions 3.9, 3.10 and 3.11, and the only one not crashing is 3.9. I'm building the exact same Kodi Nexus RC2 version on Ubuntu 22.04, changing only the Python library used by cmake. So it must be an issue related to Python?

The issue does not appear to be with ElementTree.py, but rather in _elementtree.c, the C module accelerator that shadows it, so pdb is not going to be that useful. What is interesting is that there do not appear to be any changes to this module between Python 3.9 and Python 3.10, so if it is not crashing in Python 3.9 then it appears the problem is some other change in Python 3.10 that has caused this regression in _elementtree.c
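A quick way to confirm whether the C accelerator is actually shadowing the pure-Python classes on a given box (assumes a standard CPython build where the accelerator is available):

```python
import importlib
import xml.etree.ElementTree as ET

# ElementTree.py ends with "from _elementtree import *", so when the C
# accelerator loads it rebinds names like XMLParser in the module namespace.
# Comparing the two tells you which implementation is really in use.
try:
    accel = importlib.import_module('_elementtree')
    print('C accelerator active:', ET.XMLParser is accel.XMLParser)
except ImportError:
    print('pure-Python ElementTree in use')
```

Running this inside Kodi (e.g. via the log) would confirm whether a given installation is exercising the crashing C path at all.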

Are you able to install a debug build of Python 3.10 or 3.11 and get a stacktrace using gdb?

hyzor commented 1 year ago

So I've built Kodi against a debug build of Python 3.11.1 and managed to get a stack trace of the crash with GDB. I'm afraid we're way beyond the scope of this issue, but it's worth posting here anyway; I also see @jurialmunkey has referenced this thread in xbmc, so I hope they will find this valuable too. I'm not sure I'm making much sense of this myself.

Thread 521 "LanguageInvoker" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f8e0f7fe640 (LWP 238450)]
0x00007f8e440edde2 in _elementtree_XMLParser___init___impl (self=self@entry=0x7f8e61dff1b0, target=target@entry=0x7f8df785be10, encoding=encoding@entry=0x0) at /home/hyzor/Python-3.11.1/Modules/_elementtree.c:3647
3647        self->parser = EXPAT(ParserCreate_MM)(encoding, &ExpatMemoryHandler, "}");
(gdb) bt
#0  0x00007f8e440edde2 in _elementtree_XMLParser___init___impl (
    self=self@entry=0x7f8e61dff1b0, target=target@entry=0x7f8df785be10, 
    encoding=encoding@entry=0x0)
    at /home/hyzor/Python-3.11.1/Modules/_elementtree.c:3647
#1  0x00007f8e440ee38a in _elementtree_XMLParser___init__ (
    self=0x7f8e61dff1b0, args=<optimized out>, kwargs=<optimized out>)
    at /home/hyzor/Python-3.11.1/Modules/clinic/_elementtree.c.h:845
#2  0x000055c06e9e14ac in type_call (type=<optimized out>, 
    args=0x55c070757258 <_PyRuntime+58904>, kwds=0x7f8e2818f2f0)
    at ../Objects/typeobject.c:1112
#3  0x000055c06e96ddd5 in _PyObject_MakeTpCall (
    tstate=tstate@entry=0x7f8e040828c0, 
    callable=callable@entry=0x7f8e440f5e20 <XMLParser_Type>, 
    args=args@entry=0x7f8e80d7eb18, nargs=<optimized out>, 
    keywords=keywords@entry=0x7f8e29bfc230) at ../Objects/call.c:214
#4  0x000055c06e96e191 in _PyObject_VectorcallTstate (tstate=0x7f8e040828c0, 
    callable=0x7f8e440f5e20 <XMLParser_Type>, args=0x7f8e80d7eb18, 
    nargsf=<optimized out>, kwnames=0x7f8e29bfc230)
    at ../Include/internal/pycore_call.h:90
#5  0x000055c06e96e1b9 in PyObject_Vectorcall (
    callable=callable@entry=0x7f8e440f5e20 <XMLParser_Type>, 
    args=args@entry=0x7f8e80d7eb18, nargsf=<optimized out>, 
    kwnames=kwnames@entry=0x7f8e29bfc230) at ../Objects/call.c:299
--Type <RET> for more, q to quit, c to continue without paging--
#6  0x000055c06ea5e11d in _PyEval_EvalFrameDefault (
    tstate=tstate@entry=0x7f8e040828c0, frame=0x7f8e80d7eab0, 
    frame@entry=0x7f8e80d7e980, throwflag=throwflag@entry=0)
    at ../Python/ceval.c:4772
#7  0x000055c06ea639e4 in _PyEval_EvalFrame (throwflag=0, frame=0x7f8e80d7e980, 
    tstate=0x7f8e040828c0) at ../Include/internal/pycore_ceval.h:73
#8  _PyEval_Vector (tstate=0x7f8e040828c0, func=<optimized out>, 
    locals=locals@entry=0x0, args=<optimized out>, argcount=<optimized out>, 
    kwnames=<optimized out>) at ../Python/ceval.c:6435
#9  0x000055c06e96dc02 in _PyFunction_Vectorcall (func=<optimized out>, 
    stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>)
    at ../Objects/call.c:393
#10 0x000055c06ebdd3ca in _PyObject_VectorcallTstate (tstate=0x7f8e040828c0, 
    callable=0x7f8e442d1700, args=0x7f8e29712ec0, nargsf=2, kwnames=0x7f8e29af3a70)
    at ../Include/internal/pycore_call.h:92
#11 0x000055c06ebddc7d in method_vectorcall (method=<optimized out>, 
    args=0x7f8e29712ec8, nargsf=<optimized out>, kwnames=0x7f8e29af3a70)
    at ../Objects/classobject.c:59
#12 0x000055c06e96d742 in _PyVectorcall_Call (tstate=tstate@entry=0x7f8e040828c0, 
    func=0x55c06ebddb24 <method_vectorcall>, 
    callable=callable@entry=0x7f8e2811c710, tuple=tuple@entry=0x7f8e29916d50, 
    kwargs=kwargs@entry=0x7f8e2818e330) at ../Objects/call.c:257
#13 0x000055c06e96db17 in _PyObject_Call (tstate=0x7f8e040828c0, 
    callable=callable@entry=0x7f8e2811c710, args=args@entry=0x7f8e29916d50, 
    kwargs=kwargs@entry=0x7f8e2818e330) at ../Objects/call.c:328
#14 0x000055c06e96db84 in PyObject_Call (callable=callable@entry=0x7f8e2811c710, 
    args=args@entry=0x7f8e29916d50, kwargs=kwargs@entry=0x7f8e2818e330)
    at ../Objects/call.c:355
--Type <RET> for more, q to quit, c to continue without paging--
#15 0x000055c06ea4d5bd in do_call_core (tstate=tstate@entry=0x7f8e040828c0, 
    func=func@entry=0x7f8e2811c710, callargs=callargs@entry=0x7f8e29916d50, 
    kwdict=kwdict@entry=0x7f8e2818e330, use_tracing=0) at ../Python/ceval.c:7357
#16 0x000055c06ea61bcb in _PyEval_EvalFrameDefault (
    tstate=tstate@entry=0x7f8e040828c0, frame=frame@entry=0x7f8e80d7e878, 
    throwflag=throwflag@entry=0) at ../Python/ceval.c:5379
#17 0x000055c06ea639e4 in _PyEval_EvalFrame (throwflag=0, frame=0x7f8e80d7e878, 
    tstate=0x7f8e040828c0) at ../Include/internal/pycore_ceval.h:73
#18 _PyEval_Vector (tstate=0x7f8e040828c0, func=<optimized out>, 
    locals=locals@entry=0x0, args=<optimized out>, argcount=<optimized out>, 
    kwnames=<optimized out>) at ../Python/ceval.c:6435
#19 0x000055c06e96dc02 in _PyFunction_Vectorcall (func=<optimized out>, 
    stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>)
    at ../Objects/call.c:393
#20 0x000055c06e96d742 in _PyVectorcall_Call (tstate=tstate@entry=0x7f8e040828c0, 
    func=0x55c06e96dbae <_PyFunction_Vectorcall>, 
    callable=callable@entry=0x7f8e4416cd60, tuple=tuple@entry=0x7f8e4451f5f0, 
    kwargs=kwargs@entry=0x7f8e2811ce30) at ../Objects/call.c:257
#21 0x000055c06e96db17 in _PyObject_Call (tstate=0x7f8e040828c0, 
    callable=callable@entry=0x7f8e4416cd60, args=args@entry=0x7f8e4451f5f0, 
    kwargs=kwargs@entry=0x7f8e2811ce30) at ../Objects/call.c:328
#22 0x000055c06e96db84 in PyObject_Call (callable=callable@entry=0x7f8e4416cd60, 
    args=args@entry=0x7f8e4451f5f0, kwargs=kwargs@entry=0x7f8e2811ce30)
    at ../Objects/call.c:355
#23 0x000055c06ea4d5bd in do_call_core (tstate=tstate@entry=0x7f8e040828c0, 
    func=func@entry=0x7f8e4416cd60, callargs=callargs@entry=0x7f8e4451f5f0, 
    kwdict=kwdict@entry=0x7f8e2811ce30, use_tracing=0) at ../Python/ceval.c:7357
#24 0x000055c06ea61bcb in _PyEval_EvalFrameDefault (
    tstate=tstate@entry=0x7f8e040828c0, frame=0x7f8e80d7e7d0, 
    frame@entry=0x7f8e80d7e698, throwflag=throwflag@entry=0)
    at ../Python/ceval.c:5379
#25 0x000055c06ea639e4 in _PyEval_EvalFrame (throwflag=0, frame=0x7f8e80d7e698, 
    tstate=0x7f8e040828c0) at ../Include/internal/pycore_ceval.h:73
#26 _PyEval_Vector (tstate=0x7f8e040828c0, func=<optimized out>, 
    locals=locals@entry=0x0, args=<optimized out>, argcount=<optimized out>, 
    kwnames=<optimized out>) at ../Python/ceval.c:6435
#27 0x000055c06e96dc02 in _PyFunction_Vectorcall (func=<optimized out>, 
    stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>)
    at ../Objects/call.c:393
#28 0x000055c06ebdd3ca in _PyObject_VectorcallTstate (tstate=0x7f8e040828c0, 
    callable=0x7f8e442d2990, args=0x7f8e2811c460, nargsf=1, kwnames=0x7f8e294d9950)
    at ../Include/internal/pycore_call.h:92
#29 0x000055c06ebddc7d in method_vectorcall (method=<optimized out>, 
    args=0x7f8e2811c468, nargsf=<optimized out>, kwnames=0x7f8e294d9950)
    at ../Objects/classobject.c:59
#30 0x000055c06e96d742 in _PyVectorcall_Call (tstate=tstate@entry=0x7f8e040828c0, 
    func=0x55c06ebddb24 <method_vectorcall>, 
    callable=callable@entry=0x7f8e2811d730, 
    tuple=tuple@entry=0x55c070757258 <_PyRuntime+58904>, 
    kwargs=kwargs@entry=0x7f8e2811cdd0) at ../Objects/call.c:257
#31 0x000055c06e96db17 in _PyObject_Call (tstate=0x7f8e040828c0, 
    callable=callable@entry=0x7f8e2811d730, 
    args=args@entry=0x55c070757258 <_PyRuntime+58904>, 
    kwargs=kwargs@entry=0x7f8e2811cdd0) at ../Objects/call.c:328
#32 0x000055c06e96db84 in PyObject_Call (callable=callable@entry=0x7f8e2811d730, 
    args=args@entry=0x55c070757258 <_PyRuntime+58904>, 
    kwargs=kwargs@entry=0x7f8e2811cdd0) at ../Objects/call.c:355
#33 0x000055c06ea4d5bd in do_call_core (tstate=tstate@entry=0x7f8e040828c0, 
    func=func@entry=0x7f8e2811d730, 
    callargs=callargs@entry=0x55c070757258 <_PyRuntime+58904>, 
    kwdict=kwdict@entry=0x7f8e2811cdd0, use_tracing=0) at ../Python/ceval.c:7357
#34 0x000055c06ea61bcb in _PyEval_EvalFrameDefault (
    tstate=tstate@entry=0x7f8e040828c0, frame=frame@entry=0x7f8e80d7e610, 
    throwflag=throwflag@entry=0) at ../Python/ceval.c:5379
#35 0x000055c06ea639e4 in _PyEval_EvalFrame (throwflag=0, frame=0x7f8e80d7e610, 
    tstate=0x7f8e040828c0) at ../Include/internal/pycore_ceval.h:73
#36 _PyEval_Vector (tstate=0x7f8e040828c0, func=<optimized out>, 
    locals=locals@entry=0x0, args=<optimized out>, argcount=<optimized out>, 
    kwnames=<optimized out>) at ../Python/ceval.c:6435
#37 0x000055c06e96dc02 in _PyFunction_Vectorcall (func=<optimized out>, 
    stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>)
    at ../Objects/call.c:393
#38 0x000055c06ebdd3ca in _PyObject_VectorcallTstate (tstate=0x7f8e040828c0, 
    callable=0x7f8e442d10d0, args=0x7f8e29b924d0, nargsf=1, kwnames=0x7f8e446bd6a0)
    at ../Include/internal/pycore_call.h:92
#39 0x000055c06ebddc7d in method_vectorcall (method=<optimized out>, 
    args=0x7f8e29b924d8, nargsf=<optimized out>, kwnames=0x7f8e446bd6a0)
    at ../Objects/classobject.c:59
#40 0x000055c06e96d742 in _PyVectorcall_Call (tstate=tstate@entry=0x7f8e040828c0, 
    func=0x55c06ebddb24 <method_vectorcall>, 
    callable=callable@entry=0x7f8e293d5670, 
    tuple=tuple@entry=0x55c070757258 <_PyRuntime+58904>, 
    kwargs=kwargs@entry=0x7f8df72ec710) at ../Objects/call.c:257
#41 0x000055c06e96db17 in _PyObject_Call (tstate=0x7f8e040828c0, 
    callable=callable@entry=0x7f8e293d5670, 
    args=args@entry=0x55c070757258 <_PyRuntime+58904>, 
    kwargs=kwargs@entry=0x7f8df72ec710) at ../Objects/call.c:328
#42 0x000055c06e96db84 in PyObject_Call (callable=callable@entry=0x7f8e293d5670, 
    args=args@entry=0x55c070757258 <_PyRuntime+58904>, 
    kwargs=kwargs@entry=0x7f8df72ec710) at ../Objects/call.c:355
#43 0x000055c06ea4d5bd in do_call_core (tstate=tstate@entry=0x7f8e040828c0, 
    func=func@entry=0x7f8e293d5670, 
    callargs=callargs@entry=0x55c070757258 <_PyRuntime+58904>, 
    kwdict=kwdict@entry=0x7f8df72ec710, use_tracing=0) at ../Python/ceval.c:7357
#44 0x000055c06ea61bcb in _PyEval_EvalFrameDefault (
    tstate=tstate@entry=0x7f8e040828c0, frame=0x7f8e80d7e548, 
    frame@entry=0x7f8e80d7e188, throwflag=throwflag@entry=0)
    at ../Python/ceval.c:5379
#45 0x000055c06ea639e4 in _PyEval_EvalFrame (throwflag=0, frame=0x7f8e80d7e188, 
    tstate=0x7f8e040828c0) at ../Include/internal/pycore_ceval.h:73
#46 _PyEval_Vector (tstate=0x7f8e040828c0, func=<optimized out>, 
    locals=locals@entry=0x0, args=<optimized out>, argcount=<optimized out>, 
    kwnames=<optimized out>) at ../Python/ceval.c:6435
#47 0x000055c06e96dc02 in _PyFunction_Vectorcall (func=<optimized out>, 
    stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>)
    at ../Objects/call.c:393
#48 0x000055c06e96d663 in _PyVectorcall_Call (tstate=tstate@entry=0x7f8e040828c0, 
    func=0x55c06e96dbae <_PyFunction_Vectorcall>, 
    callable=callable@entry=0x7f8e443254f0, 
    tuple=tuple@entry=0x55c070757258 <_PyRuntime+58904>, 
    kwargs=kwargs@entry=0x7f8e2812cbf0) at ../Objects/call.c:245
#49 0x000055c06e96db17 in _PyObject_Call (tstate=0x7f8e040828c0, 
    callable=callable@entry=0x7f8e443254f0, 
    args=args@entry=0x55c070757258 <_PyRuntime+58904>, 
    kwargs=kwargs@entry=0x7f8e2812cbf0) at ../Objects/call.c:328
#50 0x000055c06e96db84 in PyObject_Call (callable=callable@entry=0x7f8e443254f0, 
    args=args@entry=0x55c070757258 <_PyRuntime+58904>, 
    kwargs=kwargs@entry=0x7f8e2812cbf0) at ../Objects/call.c:355
#51 0x000055c06ea4d5bd in do_call_core (tstate=tstate@entry=0x7f8e040828c0, 
    func=func@entry=0x7f8e443254f0, 
    callargs=callargs@entry=0x55c070757258 <_PyRuntime+58904>, 
    kwdict=kwdict@entry=0x7f8e2812cbf0, use_tracing=0) at ../Python/ceval.c:7357
#52 0x000055c06ea61bcb in _PyEval_EvalFrameDefault (
    tstate=tstate@entry=0x7f8e040828c0, frame=0x7f8e80d7e110, 
    frame@entry=0x7f8e80d7e020, throwflag=throwflag@entry=0)
    at ../Python/ceval.c:5379
#53 0x000055c06ea639e4 in _PyEval_EvalFrame (throwflag=0, frame=0x7f8e80d7e020, 
    tstate=0x7f8e040828c0) at ../Include/internal/pycore_ceval.h:73
#54 _PyEval_Vector (tstate=0x7f8e040828c0, func=<optimized out>, 
    locals=locals@entry=0x0, args=<optimized out>, argcount=<optimized out>, 
    kwnames=<optimized out>) at ../Python/ceval.c:6435
#55 0x000055c06e96dc02 in _PyFunction_Vectorcall (func=<optimized out>, 
    stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>)
    at ../Objects/call.c:393
#56 0x000055c06ebddcfc in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=1, 
    args=0x7f8e0f7fd958, callable=0x7f8e44133b70, tstate=0x7f8e040828c0)
    at ../Include/internal/pycore_call.h:92
#57 method_vectorcall (method=<optimized out>, 
    args=0x55c070757270 <_PyRuntime+58928>, nargsf=<optimized out>, kwnames=0x0)
    at ../Objects/classobject.c:67
#58 0x000055c06e96d663 in _PyVectorcall_Call (tstate=tstate@entry=0x7f8e040828c0, 
    func=0x55c06ebddb24 <method_vectorcall>, 
    callable=callable@entry=0x7f8e2811d4f0, 
    tuple=tuple@entry=0x55c070757258 <_PyRuntime+58904>, kwargs=kwargs@entry=0x0)
    at ../Objects/call.c:245
#59 0x000055c06e96db17 in _PyObject_Call (tstate=0x7f8e040828c0, 
    callable=0x7f8e2811d4f0, args=0x55c070757258 <_PyRuntime+58904>, kwargs=0x0)
    at ../Objects/call.c:328
#60 0x000055c06e96db84 in PyObject_Call (callable=<optimized out>, 
    args=<optimized out>, kwargs=<optimized out>) at ../Objects/call.c:355
#61 0x000055c06eb43e96 in thread_run (boot_raw=boot_raw@entry=0x7f8e29b90b80)
    at ../Modules/_threadmodule.c:1082
#62 0x000055c06eac9c38 in pythread_wrapper (arg=<optimized out>)
    at ../Python/thread_pthread.h:241
#63 0x00007f8eae772b43 in start_thread (arg=<optimized out>)
    at ./nptl/pthread_create.c:442
#64 0x00007f8eae804a00 in clone3 ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

jurialmunkey commented 1 year ago

@hyzor - Yeah CPython is definitely out of my wheelhouse but hopefully the above will be useful to someone more experienced in that area. Really appreciate the extended effort everyone is putting in.

One question: I was wondering if you get crashes with the same item every time or if it appears to be random?

jurialmunkey commented 1 year ago

@SamDW96 @ShibajiCh - Thanks for testing on Android and confirming that the steps resolved the crashes for you. Really appreciated since I can't test myself.

In a roundabout way it's good news that disabling OMDb resolved the crashes. Obviously annoying not to have ratings for the time being but at least it means we're very likely dealing with the same issue on Android as the one on Linux/ELECs and any upstream fix should solve the issue for all platforms.

> I am using an NVIDIA Shield Pro (2019) and have been experiencing very frequent crashes for about two months - which could be mitigated by rolling back to a build from Oct 15th (according to comments in various places), but presumably anything up until Oct 31st.

Yeah, I really wish I had noticed that the PR from 01-Nov for the Python 3.11 update also included Android but not Windows. In hindsight it's very obvious, based upon the dates of the nightlies and the fact that it continued to work on Windows.

> Glad to see this getting figured out slowly. Once the actual bug gets reported, let's hope for a quick fix. I do miss the additional ratings that I've so gotten used to from this wonderful skin.

Yeah, we'll see what happens now that I've reported the bug. Hoping that it can be fixed in Kodi and isn't something waiting on an upstream fix in Python itself.

If it is looking like a "stable" version of Nexus is going to be released without a fix then I'll look at alternative methods for parsing the metadata but ideally it will be fixed beforehand.

Really hoping it doesn't come to that though. ElementTree is an important library in the context of Kodi skins using XML formatting and is used in a lot of other important addons for skin customisation like SkinShortcuts and SkinVariables.
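If the unsafe step really is parser construction (an assumption based on the crash site in the traces below, not a confirmed diagnosis), one possible stopgap would be to keep ElementTree but serialize parser creation behind a lock. A rough sketch only; `fromstring_locked` and `_PARSER_LOCK` are hypothetical names, not part of TMDBHelper:

```python
import threading
import xml.etree.ElementTree as ET

# Hypothetical stopgap (not TMDBHelper code): the segfault occurs inside
# XMLParser.__init__ (ParserCreate_MM), so serialize parser construction
# behind a module-level lock. Feeding and parsing still run concurrently
# once each thread owns its own parser instance.
_PARSER_LOCK = threading.Lock()

def fromstring_locked(text):
    """Drop-in replacement for ET.fromstring with guarded parser creation."""
    with _PARSER_LOCK:
        parser = ET.XMLParser(target=ET.TreeBuilder())
    parser.feed(text)
    return parser.close()  # TreeBuilder.close() returns the root element
```

Whether this actually avoids the crash would need testing on an affected build; it only helps if the race is confined to parser construction.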

AakashC2020 commented 1 year ago

Hi @jurialmunkey,

You're welcome! Always glad to help. 🙂 I'm following this thread and hope to see the issues resolved soon. Thanks for all your effort! Also, thanks to everyone else who is testing and trying to resolve the issues on Nexus! Please let me know if you need any other testing on the Nvidia Shield TV Pro 2019 and I can do it for you.

Thanks and Regards, ShibajiCh.

hyzor commented 1 year ago

> @hyzor - Yeah CPython is definitely out of my wheelhouse but hopefully the above will be useful to someone more experienced in that area. Really appreciate the extended effort everyone is putting in.
>
> One question: I was wondering if you get crashes with the same item every time or if it appears to be random?

The crash doesn't happen once an item's XML data has been cached. After some further testing (thank god I'm not working as a tester professionally), it seems that entering a season of a TV show is a major trigger. It could be that so many threads are spawned at once that the bug is quickly provoked.

I've been able to fetch information about multiple movies, but as soon as I browse TV shows and seasons I crash (if the data isn't cached).
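For reference, the access pattern that seems to trigger it can be sketched like this. The sample XML and function names are illustrative only; the real code paths are in `request.py` and the ratings monitor:

```python
import threading
import xml.etree.ElementTree as ET

# Illustrative only: the shape matches what happens when a whole season of
# episodes fetches OMDb XML at once - many short-lived threads, each
# constructing a fresh XMLParser via ET.fromstring().
XML = "<root><Rating Source='IMDb' Value='8.5'/></root>"

def parse_once(results, i):
    root = ET.fromstring(XML)  # creates a new XMLParser internally
    results[i] = root.find('Rating').get('Value')

def hammer(n=50):
    results = [None] * n
    threads = [threading.Thread(target=parse_once, args=(results, i))
               for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

On a healthy interpreter this just returns `n` copies of the rating; on an affected Python 3.11 build, hammering parser creation from many threads like this seems to be what eventually hits the `ParserCreate_MM` crash.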

Here's both a Python stack trace and a C stack trace that expose more data compared to my previous traces. Not sure why it was omitted then. Managing to build Kodi with a debug version of Python is a lot harder than it needs to be...

Thread 587 "LanguageInvoker" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f2007fff640 (LWP 199693)]
0x00007f1ff01ecde2 in _elementtree_XMLParser___init___impl (self=self@entry=0x7f2018115810, target=target@entry=<xml.etree.ElementTree.TreeBuilder at remote 0x7f1fbc38fa00>, encoding=encoding@entry=0x0) at /home/hyzor/Python-3.11.1/Modules/_elementtree.c:3647
3647        self->parser = EXPAT(ParserCreate_MM)(encoding, &ExpatMemoryHandler, "}");
(gdb) py-bt
Traceback (most recent call first):
  File "/usr/local/lib/python3.11/xml/etree/ElementTree.py", line 1337, in XML
    parser = XMLParser(target=TreeBuilder())
  File "/home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/request.py", line 35, in translate_xml
    request = ET.fromstring(request.content)
  File "/home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/request.py", line 76, in get_api_request_json
    response = translate_xml(request) if is_xml else request.json()
  File "/home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/files/bcache.py", line 71, in use_cache
    my_object = func(*args, **kwargs)
  File "/home/hyzor/.kodi/addons/script.module.tmdbhelper/resources/modules/tmdbhelper/logger.py", line 51, in wrapper
    return func(*args, **kwargs)
  File "/home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/request.py", line 226, in get_request
    return self._cache.use_cache(
  File "/home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/request.py", line 216, in get_request_lc
    return self.get_request(*args, **kwargs)
  File "/home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/omdb/api.py", line 23, in get_request_item
    request = self.get_request_lc(is_xml=True, cache_only=cache_only, r='xml', **kwparams)
  File "/home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/omdb/api.py", line 31, in get_ratings_awards
    request = self.get_request_item(imdb_id=imdb_id, title=title, year=year, cache_only=cache_only)
  File "/home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/api/omdb/api.py", line 54, in get_item_ratings
    ratings = self.get_ratings_awards(imdb_id=imdb_id, cache_only=cache_only)
  File "/home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/common.py", line 127, in get_omdb_ratings
    return self.omdb_api.get_item_ratings(item, cache_only=cache_only)
  File "/home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/common.py", line 158, in get_all_ratings
    item = self.get_omdb_ratings(item)
  File "/home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/itemdetails.py", line 133, in get_all_ratings
    return self._parent.get_all_ratings(_listitem, self._itemdetails.tmdb_type, self._itemdetails.tmdb_id, self._season, self._episode) or {}
  File "/home/hyzor/.kodi/addons/plugin.video.themoviedb.helper/resources/lib/monitor/listitem.py", line 215, in _process_ratings
    _details = _item.get_all_ratings() or {}
  File "/usr/local/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.11/threading.py", line 995, in _bootstrap
    self._bootstrap_inner()
(gdb) bt
#0  0x00007f1ff01ecde2 in _elementtree_XMLParser___init___impl (self=self@entry=0x7f2018115810, 
    target=target@entry=<xml.etree.ElementTree.TreeBuilder at remote 0x7f1fbc38fa00>, encoding=encoding@entry=0x0)
    at /home/hyzor/Python-3.11.1/Modules/_elementtree.c:3647
#1  0x00007f1ff01ed38a in _elementtree_XMLParser___init__ (self=<xml.etree.ElementTree.XMLParser at remote 0x7f2018115810>, args=<optimized out>, 
    kwargs=<optimized out>) at /home/hyzor/Python-3.11.1/Modules/clinic/_elementtree.c.h:845
#2  0x0000563efb1134ac in type_call (type=<optimized out>, args=(), kwds={'target': <xml.etree.ElementTree.TreeBuilder at remote 0x7f1fbc38fa00>})
    at ../Objects/typeobject.c:1112
#3  0x0000563efb09fdd5 in _PyObject_MakeTpCall (tstate=tstate@entry=0x7f1f981023f0, callable=callable@entry=<type
   at remote 0x7f1ff01f4e20>, 
    args=args@entry=0x7f2040891b18, nargs=<optimized out>, keywords=keywords@entry=('target',)) at ../Objects/call.c:214
#4  0x0000563efb0a0191 in _PyObject_VectorcallTstate (tstate=0x7f1f981023f0, callable=<type
   at remote 0x7f1ff01f4e20>, args=0x7f2040891b18, 
    nargsf=<optimized out>, kwnames=('target',)) at ../Include/internal/pycore_call.h:90
#5  0x0000563efb0a01b9 in PyObject_Vectorcall (callable=callable@entry=<type
   at remote 0x7f1ff01f4e20>, args=args@entry=0x7f2040891b18, 
    nargsf=<optimized out>, kwnames=kwnames@entry=('target',)) at ../Objects/call.c:299
#6  0x0000563efb19011d in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x7f1f981023f0, frame=0x7f2040891ab0, frame@entry=0x7f2040891980, 
    throwflag=throwflag@entry=0) at ../Python/ceval.c:4772
#7  0x0000563efb1959e4 in _PyEval_EvalFrame (throwflag=0, frame=0x7f2040891980, tstate=0x7f1f981023f0) at ../Include/internal/pycore_ceval.h:73
#8  _PyEval_Vector (tstate=0x7f1f981023f0, func=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=<optimized out>, 
    kwnames=<optimized out>) at ../Python/ceval.c:6435
#9  0x0000563efb09fc02 in _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>)
    at ../Objects/call.c:393
#10 0x0000563efb30f3ca in _PyObject_VectorcallTstate (tstate=0x7f1f981023f0, callable=<function
   at remote 0x7f20182d5650>, args=0x7f20180da340, 
    nargsf=2, kwnames=('postdata', 'is_xml')) at ../Include/internal/pycore_call.h:92
#11 0x0000563efb30fc7d in method_vectorcall (method=<optimized out>, args=0x7f20180da348, nargsf=<optimized out>, kwnames=('postdata', 'is_xml'))
    at ../Objects/classobject.c:59
#12 0x0000563efb09f742 in _PyVectorcall_Call (tstate=tstate@entry=0x7f1f981023f0, func=0x563efb30fb24 <method_vectorcall>, 
    callable=callable@entry=<method
   at remote 0x7f1fa1ea7170>, 
    tuple=tuple@entry=('https://www.omdbapi.com//?apikey=7a776341&r=xml&i=tt13850522&plot=full&tomatoes=True',), 
    kwargs=kwargs@entry={'postdata': None, 'is_xml': True}) at ../Objects/call.c:257
#13 0x0000563efb09fb17 in _PyObject_Call (tstate=0x7f1f981023f0, callable=callable@entry=<method
   at remote 0x7f1fa1ea7170>, 
    args=args@entry=('https://www.omdbapi.com//?apikey=7a776341&r=xml&i=tt13850522&plot=full&tomatoes=True',), 
    kwargs=kwargs@entry={'postdata': None, 'is_xml': True}) at ../Objects/call.c:328
#14 0x0000563efb09fb84 in PyObject_Call (callable=callable@entry=<method
   at remote 0x7f1fa1ea7170>, 
    args=args@entry=('https://www.omdbapi.com//?apikey=7a776341&r=xml&i=tt13850522&plot=full&tomatoes=True',), 
    kwargs=kwargs@entry={'postdata': None, 'is_xml': True}) at ../Objects/call.c:355
#15 0x0000563efb17f5bd in do_call_core (tstate=tstate@entry=0x7f1f981023f0, func=func@entry=<method
   at remote 0x7f1fa1ea7170>, 
    callargs=callargs@entry=('https://www.omdbapi.com//?apikey=7a776341&r=xml&i=tt13850522&plot=full&tomatoes=True',), 
    kwdict=kwdict@entry={'postdata': None, 'is_xml': True}, use_tracing=0) at ../Python/ceval.c:7357
#16 0x0000563efb193bcb in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x7f1f981023f0, frame=frame@entry=0x7f2040891878, throwflag=throwflag@entry=0)
    at ../Python/ceval.c:5379
#17 0x0000563efb1959e4 in _PyEval_EvalFrame (throwflag=0, frame=0x7f2040891878, tstate=0x7f1f981023f0) at ../Include/internal/pycore_ceval.h:73
#18 _PyEval_Vector (tstate=0x7f1f981023f0, func=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=<optimized out>, 
    kwnames=<optimized out>) at ../Python/ceval.c:6435
#19 0x0000563efb09fc02 in _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>)
    at ../Objects/call.c:393
#20 0x0000563efb09f742 in _PyVectorcall_Call (tstate=tstate@entry=0x7f1f981023f0, func=0x563efb09fbae <_PyFunction_Vectorcall>, 
    callable=callable@entry=<function at remote 0x7f20182d6830>, 
    tuple=tuple@entry=(<BasicCache(_filename='OMDb.db', _cache=<SimpleCache(_win=<xbmcgui.Window at remote 0x7f1fbc29af40>, _monitor=<xbmc.Monitor at remote 0x7f1fbc29af00>, _db_file='/home/hyzor/.kodi/userdata/addon_data/plugin.video.themoviedb.helper/database_v6/OMDb.db', _sc_name='database_v6_OMDb.db_simplecache', _queue=[], _re_use_con=True, _connection=<sqlite3.Connection at remote 0x7f1fbc3df650>, _memcache=False) at remote 0x7f1fa1b473f0>) at remote 0x7f1ff027d7d0>, <method at remote 0x7f1fa1ea7170>, 'https://www.omdbapi.com//?apikey=7a776341&r=xml&i=tt13850522&plot=full&tomatoes=True'), 
    kwargs=kwargs@entry={'headers': None, 'postdata': None, 'is_xml': True, 'cache_refresh': False, 'cache_days': 14, 'cache_name': '', 'cache_only': False, 'cache_force': False, 'cache_fallback': False, 'cache_combine_name': False, 'cache_strip': [('https://www.omdbapi.com/', 'OMDb'), ('apikey=7a776341', ''), ('is_xml=False', ''), ('is_xml=True', '')]}) at ../Objects/call.c:257
#21 0x0000563efb09fb17 in _PyObject_Call (tstate=0x7f1f981023f0, callable=callable@entry=<function
   at remote 0x7f20182d6830>, 
    args=args@entry=(<BasicCache(_filename='OMDb.db', _cache=<SimpleCache(_win=<xbmcgui.Window at remote 0x7f1fbc29af40>, _monitor=<xbmc.Monitor at remote 0x7f1fbc29af00>, _db_file='/home/hyzor/.kodi/userdata/addon_data/plugin.video.themoviedb.helper/database_v6/OMDb.db', _sc_name='database_v6_OMDb.db_simplecache', _queue=[], _re_use_con=True, _connection=<sqlite3.Connection at remote 0x7f1fbc3df650>, _memcache=False) at remote 0x7f1fa1b473f0>) at remote 0x7f1ff027d7d0>, <method at remote 0x7f1fa1ea7170>, 'https://www.omdbapi.com//?apikey=7a776341&r=xml&i=tt13850522&plot=full&tomatoes=True'), 
    kwargs=kwargs@entry={'headers': None, 'postdata': None, 'is_xml': True, 'cache_refresh': False, 'cache_days': 14, 'cache_name': '', 'cache_only': False, 'cache_force': False, 'cache_fallback': False, 'cache_combine_name': False, 'cache_strip': [('https://www.omdbapi.com/', 'OMDb'), ('apikey=7a776341', ''), ('is_xml=False', ''), ('is_xml=True', '')]}) at ../Objects/call.c:328
#22 0x0000563efb09fb84 in PyObject_Call (callable=callable@entry=<function
   at remote 0x7f20182d6830>, 
    args=args@entry=(<BasicCache(_filename='OMDb.db', _cache=<SimpleCache(_win=<xbmcgui.Window at remote 0x7f1fbc29af40>, _monitor=<xbmc.Monitor at remote 0x7f1fbc29af00>, _db_file='/home/hyzor/.kodi/userdata/addon_data/plugin.video.themoviedb.helper/database_v6/OMDb.db', _sc_name='database_v6_OMDb.db_simplecache', _queue=[], _re_use_con=True, _connection=<sqlite3.Connection at remote 0x7f1fbc3df650>, _memcache=False) at remote 0x7f1fa1b473f0>) at remote 0x7f1ff027d7d0>, <method at remote 0x7f1fa1ea7170>, 'https://www.omdbapi.com//?apikey=7a776341&r=xml&i=tt13850522&plot=full&tomatoes=True'), 
    kwargs=kwargs@entry={'headers': None, 'postdata': None, 'is_xml': True, 'cache_refresh': False, 'cache_days': 14, 'cache_name': '', 'cache_only': False, 'cache_force': False, 'cache_fallback': False, 'cache_combine_name': False, 'cache_strip': [('https://www.omdbapi.com/', 'OMDb'), ('apikey=7a776341', ''), ('is_xml=False', ''), ('is_xml=True', '')]}) at ../Objects/call.c:355
#23 0x0000563efb17f5bd in do_call_core (tstate=tstate@entry=0x7f1f981023f0, func=func@entry=<function
   at remote 0x7f20182d6830>, 
    callargs=callargs@entry=(<BasicCache(_filename='OMDb.db', _cache=<SimpleCache(_win=<xbmcgui.Window at remote 0x7f1fbc29af40>, _monitor=<xbmc.Monitor at remote 0x7f1fbc29af00>, _db_file='/home/hyzor/.kodi/userdata/addon_data/plugin.video.themoviedb.helper/database_v6/OMDb.db', _sc_name='database_v6_OMDb.db_simplecache', _queue=[], _re_use_con=True, _connection=<sqlite3.Connection at remote 0x7f1fbc3df650>, _memcache=False) at remote 0x7f1fa1b473f0>) at remote 0x7f1ff027d7d0>, <method at remote 0x7f1fa1ea7170>, 'https://www.omdbapi.com//?apikey=7a776341&r=xml&i=tt13850522&plot=full&tomatoes=True'), 
    kwdict=kwdict@entry={'headers': None, 'postdata': None, 'is_xml': True, 'cache_refresh': False, 'cache_days': 14, 'cache_name': '', 'cache_only': False, 'cache_force': False, 'cache_fallback': False, 'cache_combine_name': False, 'cache_strip': [('https://www.omdbapi.com/', 'OMDb'), ('apikey=7a776341', ''), ('is_xml=False', ''), ('is_xml=True', '')]}, use_tracing=0) at ../Python/ceval.c:7357
#24 0x0000563efb193bcb in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x7f1f981023f0, frame=0x7f20408917d0, frame@entry=0x7f2040891698, 
    throwflag=throwflag@entry=0) at ../Python/ceval.c:5379
#25 0x0000563efb1959e4 in _PyEval_EvalFrame (throwflag=0, frame=0x7f2040891698, tstate=0x7f1f981023f0) at ../Include/internal/pycore_ceval.h:73
#26 _PyEval_Vector (tstate=0x7f1f981023f0, func=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=<optimized out>, 
    kwnames=<optimized out>) at ../Python/ceval.c:6435
#27 0x0000563efb09fc02 in _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>)
    at ../Objects/call.c:393
#28 0x0000563efb30f3ca in _PyObject_VectorcallTstate (tstate=0x7f1f981023f0, callable=<function
   at remote 0x7f20182d6150>, args=0x7f1fa1ea5300, 
    nargsf=1, kwnames=('is_xml', 'cache_only', 'r', 'i', 'plot', 'tomatoes', 'cache_days')) at ../Include/internal/pycore_call.h:92
#29 0x0000563efb30fc7d in method_vectorcall (method=<optimized out>, args=0x7f1fa1ea5308, nargsf=<optimized out>, 
    kwnames=('is_xml', 'cache_only', 'r', 'i', 'plot', 'tomatoes', 'cache_days')) at ../Objects/classobject.c:59
#30 0x0000563efb09f742 in _PyVectorcall_Call (tstate=tstate@entry=0x7f1f981023f0, func=0x563efb30fb24 <method_vectorcall>, 
    callable=callable@entry=<method
   at remote 0x7f1fa1ea4fb0>, tuple=tuple@entry=(), 
    kwargs=kwargs@entry={'is_xml': True, 'cache_only': False, 'r': 'xml', 'i': 'tt13850522', 'plot': 'full', 'tomatoes': 'True', 'cache_days': 14})
    at ../Objects/call.c:257
#31 0x0000563efb09fb17 in _PyObject_Call (tstate=0x7f1f981023f0, callable=callable@entry=<method
   at remote 0x7f1fa1ea4fb0>, args=args@entry=(), 
    kwargs=kwargs@entry={'is_xml': True, 'cache_only': False, 'r': 'xml', 'i': 'tt13850522', 'plot': 'full', 'tomatoes': 'True', 'cache_days': 14})
    at ../Objects/call.c:328
#32 0x0000563efb09fb84 in PyObject_Call (callable=callable@entry=<method
   at remote 0x7f1fa1ea4fb0>, args=args@entry=(), 
    kwargs=kwargs@entry={'is_xml': True, 'cache_only': False, 'r': 'xml', 'i': 'tt13850522', 'plot': 'full', 'tomatoes': 'True', 'cache_days': 14})
    at ../Objects/call.c:355
#33 0x0000563efb17f5bd in do_call_core (tstate=tstate@entry=0x7f1f981023f0, func=func@entry=<method
   at remote 0x7f1fa1ea4fb0>, 
    callargs=callargs@entry=(), 
    kwdict=kwdict@entry={'is_xml': True, 'cache_only': False, 'r': 'xml', 'i': 'tt13850522', 'plot': 'full', 'tomatoes': 'True', 'cache_days': 14}, 
    use_tracing=0) at ../Python/ceval.c:7357
#34 0x0000563efb193bcb in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x7f1f981023f0, frame=frame@entry=0x7f2040891610, throwflag=throwflag@entry=0)
    at ../Python/ceval.c:5379
#35 0x0000563efb1959e4 in _PyEval_EvalFrame (throwflag=0, frame=0x7f2040891610, tstate=0x7f1f981023f0) at ../Include/internal/pycore_ceval.h:73
#36 _PyEval_Vector (tstate=0x7f1f981023f0, func=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=<optimized out>, 
    kwnames=<optimized out>) at ../Python/ceval.c:6435
#37 0x0000563efb09fc02 in _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>)
    at ../Objects/call.c:393
#38 0x0000563efb30f3ca in _PyObject_VectorcallTstate (tstate=0x7f1f981023f0, callable=<function
   at remote 0x7f20182d6200>, args=0x7f2018454810, 
    nargsf=1, kwnames=('is_xml', 'cache_only', 'r', 'i', 'plot', 'tomatoes')) at ../Include/internal/pycore_call.h:92
#39 0x0000563efb30fc7d in method_vectorcall (method=<optimized out>, args=0x7f2018454818, nargsf=<optimized out>, 
    kwnames=('is_xml', 'cache_only', 'r', 'i', 'plot', 'tomatoes')) at ../Objects/classobject.c:59
#40 0x0000563efb09f742 in _PyVectorcall_Call (tstate=tstate@entry=0x7f1f981023f0, func=0x563efb30fb24 <method_vectorcall>, 
    callable=callable@entry=<method
   at remote 0x7f1fa1e34770>, tuple=tuple@entry=(), 
    kwargs=kwargs@entry={'is_xml': True, 'cache_only': False, 'r': 'xml', 'i': 'tt13850522', 'plot': 'full', 'tomatoes': 'True'})
    at ../Objects/call.c:257
#41 0x0000563efb09fb17 in _PyObject_Call (tstate=0x7f1f981023f0, callable=callable@entry=<method
   at remote 0x7f1fa1e34770>, args=args@entry=(), 
    kwargs=kwargs@entry={'is_xml': True, 'cache_only': False, 'r': 'xml', 'i': 'tt13850522', 'plot': 'full', 'tomatoes': 'True'})
    at ../Objects/call.c:328
#42 0x0000563efb09fb84 in PyObject_Call (callable=callable@entry=<method
   at remote 0x7f1fa1e34770>, args=args@entry=(), 
    kwargs=kwargs@entry={'is_xml': True, 'cache_only': False, 'r': 'xml', 'i': 'tt13850522', 'plot': 'full', 'tomatoes': 'True'})
    at ../Objects/call.c:355
#43 0x0000563efb17f5bd in do_call_core (tstate=tstate@entry=0x7f1f981023f0, func=func@entry=<method
   at remote 0x7f1fa1e34770>, 
    callargs=callargs@entry=(), 
    kwdict=kwdict@entry={'is_xml': True, 'cache_only': False, 'r': 'xml', 'i': 'tt13850522', 'plot': 'full', 'tomatoes': 'True'}, use_tracing=0)
    at ../Python/ceval.c:7357
#44 0x0000563efb193bcb in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x7f1f981023f0, frame=0x7f2040891548, frame@entry=0x7f2040891188, 
    throwflag=throwflag@entry=0) at ../Python/ceval.c:5379
#45 0x0000563efb1959e4 in _PyEval_EvalFrame (throwflag=0, frame=0x7f2040891188, tstate=0x7f1f981023f0) at ../Include/internal/pycore_ceval.h:73
#46 _PyEval_Vector (tstate=0x7f1f981023f0, func=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=<optimized out>, 
    kwnames=<optimized out>) at ../Python/ceval.c:6435
#47 0x0000563efb09fc02 in _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>)
    at ../Objects/call.c:393
#48 0x0000563efb09f663 in _PyVectorcall_Call (tstate=tstate@entry=0x7f1f981023f0, func=0x563efb09fbae <_PyFunction_Vectorcall>, 
    callable=callable@entry=<function
   at remote 0x7f1ff033d7b0>, tuple=tuple@entry=(), kwargs=kwargs@entry={}) at ../Objects/call.c:245
#49 0x0000563efb09fb17 in _PyObject_Call (tstate=0x7f1f981023f0, callable=callable@entry=<function
   at remote 0x7f1ff033d7b0>, args=args@entry=(), 
    kwargs=kwargs@entry={}) at ../Objects/call.c:328
#50 0x0000563efb09fb84 in PyObject_Call (callable=callable@entry=<function
   at remote 0x7f1ff033d7b0>, args=args@entry=(), kwargs=kwargs@entry={})
    at ../Objects/call.c:355
#51 0x0000563efb17f5bd in do_call_core (tstate=tstate@entry=0x7f1f981023f0, func=func@entry=<function
   at remote 0x7f1ff033d7b0>, 
    callargs=callargs@entry=(), kwdict=kwdict@entry={}, use_tracing=0) at ../Python/ceval.c:7357
#52 0x0000563efb193bcb in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x7f1f981023f0, frame=0x7f2040891110, frame@entry=0x7f2040891020, 
    throwflag=throwflag@entry=0) at ../Python/ceval.c:5379
#53 0x0000563efb1959e4 in _PyEval_EvalFrame (throwflag=0, frame=0x7f2040891020, tstate=0x7f1f981023f0) at ../Include/internal/pycore_ceval.h:73
#54 _PyEval_Vector (tstate=0x7f1f981023f0, func=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=<optimized out>, 
    kwnames=<optimized out>) at ../Python/ceval.c:6435
#55 0x0000563efb09fc02 in _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>)
    at ../Objects/call.c:393
#56 0x0000563efb30fcfc in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=1, args=0x7f2007ffe958, callable=<function at remote 0x7f20184a05d0>, 
    tstate=0x7f1f981023f0) at ../Include/internal/pycore_call.h:92
#57 method_vectorcall (method=<optimized out>, args=0x563efce89270 <_PyRuntime+58928>, nargsf=<optimized out>, kwnames=0x0)
    at ../Objects/classobject.c:67
#58 0x0000563efb09f663 in _PyVectorcall_Call (tstate=tstate@entry=0x7f1f981023f0, func=0x563efb30fb24 <method_vectorcall>, 
    callable=callable@entry=<method
   at remote 0x7f1fa1ea72f0>, tuple=tuple@entry=(), kwargs=kwargs@entry=0x0) at ../Objects/call.c:245
#59 0x0000563efb09fb17 in _PyObject_Call (tstate=0x7f1f981023f0, callable=<method
   at remote 0x7f1fa1ea72f0>, args=(), kwargs=0x0)
    at ../Objects/call.c:328
#60 0x0000563efb09fb84 in PyObject_Call (callable=<optimized out>, args=<optimized out>, kwargs=<optimized out>) at ../Objects/call.c:355
#61 0x0000563efb275e96 in thread_run (boot_raw=boot_raw@entry=0x7f2018456430) at ../Modules/_threadmodule.c:1082
#62 0x0000563efb1fbc38 in pythread_wrapper (arg=<optimized out>) at ../Python/thread_pthread.h:241
#63 0x00007f2047eaeb43 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#64 0x00007f2047f40a00 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

tekker commented 1 year ago

@jurialmunkey A workaround isn't a solution to the `_elementtree` crash, however...

Is there a reason we can't use JSON instead of XML for the OMDb ratings API call? Timing the call, I couldn't see a major performance difference between JSON and XML.

plugin.video.themoviedb.helper/resources/lib/api/omdb/api.py (lines 15-28)

def get_request_item(self, imdb_id=None, title=None, year=None, tomatoes=True, fullplot=True, cache_only=False):
    kwparams = {}
    kwparams['i'] = imdb_id
    kwparams['t'] = title
    kwparams['y'] = year
    kwparams['plot'] = 'full' if fullplot else 'short'
    kwparams['tomatoes'] = 'True' if tomatoes else None
    kwparams = del_empty_keys(kwparams)
    request = self.get_request_lc(is_xml=False, cache_only=cache_only, r='json', **kwparams)
    # Previous XML request and the unwrapping it needed, kept for reference:
    # request = self.get_request_lc(is_xml=True, cache_only=cache_only, r='xml', **kwparams)
    # try:
    #     request = request['root']['movie'][0]
    # except (KeyError, TypeError, AttributeError):
    #     request = {}
    return request

Especially since this may not be fixed upstream for a while, as per your recent Kodi issue:

https://github.com/xbmc/xbmc/issues/22344#top
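For context on why the `['root']['movie'][0]` fallback can go away with JSON: the two endpoints return differently shaped payloads. A rough sketch (field names follow the public OMDb API; the values are made up for illustration):

```python
# XML endpoint (r='xml'): after conversion to a dict the payload is nested,
# hence the old request['root']['movie'][0] access and its try/except.
xml_shape = {'root': {'movie': [{'title': 'Alien', 'imdbRating': '8.5'}]}}
movie = xml_shape['root']['movie'][0]

# JSON endpoint (r='json'): the payload is the top-level object itself, and
# errors arrive in-band via the 'Response' field rather than as missing keys.
json_ok = {'Title': 'Alien', 'imdbRating': '8.5', 'Response': 'True'}
json_err = {'Response': 'False', 'Error': 'Movie not found!'}

print(movie['imdbRating'])        # nested access needed for XML
print(json_ok['imdbRating'])      # flat access for JSON
print(json_err.get('Error'))      # error text, no exception raised
```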

MoojMidge commented 1 year ago

Haven't had an opportunity to look into the info @hyzor provided, but in the short term, aside from blocking the C module, the suggestion from @tekker to use the JSON API seems like a good option. I think there is a problem with the JSON API in that it sometimes produces an invalid JSON response, but it is better than nothing.
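If the occasional invalid JSON response is the main worry, a thin defensive wrapper around the parse is cheap. A minimal sketch only (the function name and `raw_text` parameter are hypothetical, not TMDbHelper's actual request layer; the `Response` flag is part of the public OMDb API):

```python
import json

def parse_omdb_json(raw_text):
    """Return the OMDb payload as a dict, or {} if anything is off.

    Collapses three failure modes into one harmless result: malformed
    JSON, a non-dict payload, and an API-level error (Response: "False").
    """
    try:
        data = json.loads(raw_text)
    except (TypeError, ValueError):  # JSONDecodeError subclasses ValueError
        return {}
    if not isinstance(data, dict) or data.get('Response') != 'True':
        return {}
    return data

print(parse_omdb_json('not valid json'))                                      # {}
print(parse_omdb_json('{"Response": "False", "Error": "Movie not found!"}'))  # {}
print(parse_omdb_json('{"Title": "Alien", "Response": "True"}'))
```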