pilotso11 opened this issue 2 years ago
Latest source code includes Ctrl-F2 to capture the first set of images. Not yet packaged as an installed exe.
I've had some success with the hard-to-read text by using location = pyautogui.locateOnScreen(picture, confidence=0.99), printing location, then using left - 20, top - 20, width + 40, height + 40 as the search region. For example: if pyautogui.locateOnScreen('cargo_full-0.png', region=(2237,480,143,192), confidence=0.99) is not None: ... I can then search with high confidence without taxing the processor. So far it's been working well. I just wanted to give it a try before looking into building a lookup table for the regions. The example I gave was for finding 100% Fuel on the Carrier Services page; anything less than 0.96 gave me a false positive. Maybe a variable confidence? If we know we're on the right page and the text should be there, maybe step the confidence level?
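A minimal sketch of that region-narrowing plus stepped-confidence idea (the function names are mine, and the 0.96 floor is taken from the false-positive observation above):

```python
def expand_region(region, margin=20):
    """Grow a (left, top, width, height) box by `margin` pixels on every side."""
    left, top, width, height = region
    return (left - margin, top - margin, width + 2 * margin, height + 2 * margin)


def locate_with_stepped_confidence(image, region, start=0.99, floor=0.96, step=0.01):
    """Retry the search at decreasing confidence until a match or the floor is hit."""
    import pyautogui  # pip install pyautogui pillow opencv-python

    confidence = round(start, 2)
    while confidence >= floor:
        box = pyautogui.locateOnScreen(image, region=region, confidence=confidence)
        if box is not None:
            return box, confidence
        confidence = round(confidence - step, 2)
    return None, None
```

The search region from the example would come out of expand_region as (2217, 460, 183, 232), so only a small patch of screen is scanned on each retry.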
OCR might work for this too. It does seem able to recognise the "Max Capacity" text. These are both at > 90% confidence.
If you compare the image below to the one above, you can see an example of how the OCR has a hard time with the "Selected" menu text.
Code is now working without saving any images. It's using a combination of known image locations (1080p, windowed mode only) and text recognition.
Release 0.2.1, ran Main from PyCharm.
Tesseract-OCR.zip did not unpack as in the other version; manually extracted.
Set secondary monitor to 1920x1080. Set Elite to the secondary monitor, 1920x1080, windowed, field of view minimum. Screen shape is reporting the main screen's resolution: Screen shape: (1440, 3440, 3). Average color is reporting from a location of 635, 0, 100, 10. That location data doesn't look right.
Running on the main screen at 1920x1080: Set Elite to the primary monitor, 1920x1080, windowed, field of view minimum. It reports shape: (1080, 1920, 3), so that's good, but it's still looking at average color at 635, 0, 100, 10. Carrier services is somewhere around (x=778, y=913).
It's finding carrier services using text recognition. It only recognizes the text when it's not selected which is why at the beginning it goes up and down the hud to find it. There is some additional debug that may help pin this down.
Version 0.2.1, run from PyCharm main.
Set my secondary monitor to Primary @ 1920x1080. Set Elite to 1920x1080, field of view 50%, interface brightness 80%. Reporting Screen shape: (1080, 1920, 3).
Program unpacked with no problems. Debug images were dead on locations (very nice). Executed 7 jumps. Donate Tritium and Gal map were flawless. Transfer tritium had one failure to find cancel. Almost all problems were failures to find the HUD. If you're on the main HUD page, pressing backspace provides no change to the page.
2022-03-14 18:10:04 - INFO - Filling ship with tritium
2022-03-14 18:10:05 - DEBUG - Press: backspace
2022-03-14 18:10:08 - DEBUG - Press: backspace
2022-03-14 18:10:10 - DEBUG - found CARRIER at 822,317 94x15
2022-03-14 18:10:10 - DEBUG - Looking for SERVICES
2022-03-14 18:10:10 - DEBUG - Press: backspace
2022-03-14 18:10:12 - DEBUG - Press: backspace
2022-03-14 18:10:15 - DEBUG - found CARRIER at 822,317 94x15
2022-03-14 18:10:15 - DEBUG - Looking for SERVICES
2022-03-14 18:10:15 - DEBUG - Press: backspace
2022-03-14 18:10:17 - DEBUG - Press: backspace
2022-03-14 18:10:20 - DEBUG - Press: backspace
2022-03-14 18:10:22 - DEBUG - Press: backspace
2022-03-14 18:10:24 - DEBUG - Press: backspace
2022-03-14 18:10:27 - DEBUG - Press: backspace
2022-03-14 18:10:28 - INFO - Unable to find main HUD
"Pressing backspace provides no change to the page" - meaning that it's not actually going back to the main HUD screen at all, or that it's already on the main HUD and not detecting it?
Did you change ships? It only sets up the HUD location on first run. It needs to detect a ship change and reset.
Does your ship have an advanced auto dock installed? (I realize the other text it's looking for is AUTO LAUNCH, and with no advanced auto dock that won't be found - needs some other options.)
The fleet carrier interiors (or the disembark option) broke the macros, so please update to 0.2.2. This has also switched to using known locations for CARRIER SERVICES instead of text recognition - for at least all the ships I own. It is also looking only for LAUNCH, not AUTO LAUNCH, when needed. It will detect a change of ship and reset the CARRIER SERVICES image as well. Fingers crossed this helps.
Thanks so much! Had to change sleep to 5 at line 422 - the GalMap is taking forever to load. After that it ran perfectly. It's now set to jump with 8 waypoints. I've probably made 50+ jumps in a circle playing with all the versions. If you plotted them you'd think I'm crazy. Doing donuts in a Fleet Carrier! Edit: 8 jumps complete, no errors.
Yeah! Glad you could get jumps in. European evening here, and it was 50 mins between jumps last night - everyone playing with their new interiors. I got in just two while testing the update.
I will increase the wait time for the galmap.
I got 13 unattended jumps done today! 0.2.3 has the increased wait for the galmap.
V 0.2.3: Ten more unattended jumps at 1080. Remapped locations.py to my Cutter at 3440x1440. Performed ten more unattended jumps. I think we have a winner!
Is there a way to script the writing of the locations.py file? Show the image and have the user try to match the top-left and bottom-right points on their screen?
Think I might head out somewhere random and do some exploring, because now I can be back from anywhere by morning! Will the Fuel Rats refuel my Carrier? : )
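One possible way to script that capture (this is just a sketch; the prompts and function names are hypothetical, not part of the project): have the user hover the mouse over each corner and record pyautogui.position() into the (x, y, w, h) form locations.py uses.

```python
def box_from_corners(top_left, bottom_right):
    """Convert two corner points into an (x, y, w, h) region."""
    (x1, y1), (x2, y2) = top_left, bottom_right
    return (x1, y1, x2 - x1, y2 - y1)


def capture_box(name):
    """Ask the user to hover each corner of a UI element and record the box."""
    import pyautogui  # pip install pyautogui

    input(f"Hover the TOP-LEFT corner of '{name}' and press Enter... ")
    top_left = tuple(pyautogui.position())
    input(f"Hover the BOTTOM-RIGHT corner of '{name}' and press Enter... ")
    bottom_right = tuple(pyautogui.position())
    return name, box_from_corners(top_left, bottom_right)
```

Looping capture_box over every entry name and dumping the results would regenerate a locations.py for any resolution in a few minutes instead of an evening of screenshot hunting.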
In the same way it’s adjusting for windowed vs full screen mode it could absolutely adjust for running on a second monitor.
Quick option: inputs on the GUI to specify the offset. I guess that for your second monitor everything is shifted to the right by 1920 pixels?
More sophisticated: work out which monitor it's running on and find the top-left corner, then adjust. I already check if E:D is running; it should be possible to get the window coordinates for it from its window handle.
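That adjustment could look something like this sketch - the window title string is a guess, and the helper names are mine, so treat this as an illustration rather than the project's code:

```python
def offset_location(location, window_left, window_top):
    """Shift a windowed-1080p (x, y, w, h) location by the game window's origin."""
    x, y, w, h = location
    return (x + window_left, y + window_top, w, h)


def game_window_origin(title="Elite - Dangerous"):
    """Find the top-left corner of the game window, or None if not running."""
    import pygetwindow  # pip install pygetwindow (Windows); title text is a guess

    windows = pygetwindow.getWindowsWithTitle(title)
    if not windows:
        return None
    win = windows[0]
    return win.left, win.top
```

With that, a second monitor sitting to the right of a 1920-wide primary would just shift every location by (1920, 0) before searching.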
I've been googling, looking for a formula or script to resize a screen location (x, y, w, h) from 1080 to other resolutions. I came across a post by the author of pyautogui saying that it was not possible for various reasons (I'll see if I can find the post). I had to manually find all the images from the 1080 locations.py on my 3440 x 1440 screen, recreate them, find them, and edit locations.py with the new (x, y, w, h) data. locations.txt
I think that’s because E:D doesn’t just scale it up. It changes the field of view and paints the scene at a higher resolution.
Not that I have a big monitor to try it on myself :-).
Maybe another option: give you the ability to maintain an alternate set of locations in a config file, with a checkbox on the UI to swap them in or out. It's still a one-off setup activity, but after that it's just a checkbox to use. That would be simple to code up.
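The config-file swap could be as small as named profiles in one JSON file, e.g. {"1920x1080": {...}, "3440x1440": {...}} - the file format here is my assumption, not the project's:

```python
import json


def load_locations(path, profile):
    """Load one named set of (x, y, w, h) boxes from a JSON profiles file."""
    with open(path) as f:
        profiles = json.load(f)
    # JSON has no tuple type, so convert each [x, y, w, h] list back to a tuple.
    return {name: tuple(box) for name, box in profiles[profile].items()}
```

The UI checkbox would then just pick which profile name gets passed in, leaving the default 1080p set untouched.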
Automatically generating the screenshots, and/or combining them with OCR, would improve the setup for different HUDs, screens, graphics cards, brightness settings, etc.
This will be a bit of a dev blog of my attempts to make this work.
Having played with pytesseract this weekend, OCR is not going to get us all the way. It struggles with some text written on a slant - the left/right HUD windows - and, oddly, it struggles with some of the dark text on light background images. For example, on my setup I can't get it to read "CARRIER SERVICES" when selected, or "INVENTORY" when selected, but it is able to read "TRITIUM DEPOT" and "CARRIER MANAGEMENT" when selected.
Inverting the text doesn't help. Selecting edges doesn't help. Normally it can recognise "INVENTORY" with 90% confidence, but mist rising behind the HUD when docked can reduce that. It seems to only get the text for "TRANSFER" with about 65% confidence.
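Those per-word confidence numbers can be pulled out of pytesseract's image_to_data output; a small sketch (the helper names are mine):

```python
def confident_words(data, min_conf=65.0):
    """Keep only the words Tesseract reported at or above a confidence floor.

    `data` is the dict form of image_to_data: parallel lists under the
    "text" and "conf" keys (conf is -1 for non-word structural rows).
    """
    words = []
    for word, conf in zip(data["text"], data["conf"]):
        if word.strip() and float(conf) >= min_conf:
            words.append((word, float(conf)))
    return words


def read_hud_words(image, min_conf=65.0):
    """Run Tesseract on an image and return (word, confidence) pairs."""
    import pytesseract  # pip install pytesseract; needs the Tesseract binary on PATH

    data = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)
    return confident_words(data, min_conf)
```

Logging these pairs for each HUD crop would make it easy to see which labels ("INVENTORY" at ~90, "TRANSFER" at ~65) need a different threshold or a screenshot fallback.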
All of these tests are on grayscale images. I've not had better luck with thresholding the image either, using OTSU or Gaussian. Gaussian should be better but, with the default settings at least, it turns the text into outlines - maybe because of the stroke width. There is some promise there. OpenCV Thresholding
Overall I think OCR may have a place in helping to build the screenshots, but the screenshots will be better for generally navigating the UI - unless I can come up with a better way of detecting the selected buttons' text.
But, step 1:
For the ICON images, hopefully either they work or they are in known positions on the screen.
This is in test code; I will look at adding a UI function for these that saves the images as "image/name99.png" so they wind up as part of the image search, and continue refining additional UI elements.