Closed: OvermindDL1 closed this issue 5 years ago
As a documentation note, the only values that psutil exposes are:
```
╰─➤ python -i
Python 2.7.12 (default, Nov 12 2018, 14:36:49)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import psutil
>>> dir(psutil)
['AF_LINK', 'AccessDenied', 'CONN_CLOSE', 'CONN_CLOSE_WAIT', 'CONN_CLOSING', 'CONN_ESTABLISHED', 'CONN_FIN_WAIT1', 'CONN_FIN_WAIT2', 'CONN_LAST_ACK', 'CONN_LISTEN', 'CONN_NONE', 'CONN_SYN_RECV', 'CONN_SYN_SENT', 'CONN_TIME_WAIT', 'Error', 'IOPRIO_CLASS_BE', 'IOPRIO_CLASS_IDLE', 'IOPRIO_CLASS_NONE', 'IOPRIO_CLASS_RT', 'NIC_DUPLEX_FULL', 'NIC_DUPLEX_HALF', 'NIC_DUPLEX_UNKNOWN', 'NoSuchProcess', 'PROCFS_PATH', 'Popen', 'Process', 'RLIMIT_AS', 'RLIMIT_CORE', 'RLIMIT_CPU', 'RLIMIT_DATA', 'RLIMIT_FSIZE', 'RLIMIT_LOCKS', 'RLIMIT_MEMLOCK', 'RLIMIT_MSGQUEUE', 'RLIMIT_NICE', 'RLIMIT_NOFILE', 'RLIMIT_NPROC', 'RLIMIT_RSS', 'RLIMIT_RTPRIO', 'RLIMIT_RTTIME', 'RLIMIT_SIGPENDING', 'RLIMIT_STACK', 'RLIM_INFINITY', 'STATUS_DEAD', 'STATUS_DISK_SLEEP', 'STATUS_IDLE', 'STATUS_LOCKED', 'STATUS_RUNNING', 'STATUS_SLEEPING', 'STATUS_STOPPED', 'STATUS_TRACING_STOP', 'STATUS_WAITING', 'STATUS_WAKING', 'STATUS_ZOMBIE', 'TimeoutExpired', 'ZombieProcess', '_OPENBSD', '_POSIX', '_PY3', '_TOTAL_PHYMEM', '_WINDOWS', '__all__', '__author__', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', '__version__', '_assert_pid_not_reused', '_common', '_compat', '_last_cpu_times', '_last_cpu_times_2', '_last_per_cpu_times', '_last_per_cpu_times_2', '_pmap', '_pslinux', '_psplatform', '_psposix', '_psutil_linux', '_psutil_posix', '_timer', 'boot_time', 'callable', 'collections', 'cpu_count', 'cpu_percent', 'cpu_times', 'cpu_times_percent', 'disk_io_counters', 'disk_partitions', 'disk_usage', 'errno', 'functools', 'long', 'net_connections', 'net_if_addrs', 'net_if_stats', 'net_io_counters', 'os', 'pid_exists', 'pids', 'process_iter', 'pwd', 'signal', 'subprocess', 'swap_memory', 'sys', 'test', 'time', 'traceback', 'users', 'version_info', 'virtual_memory', 'wait_procs']
```
Also as a note, I have to run it via `python auto.py --factorio=/mnt/storage0/Factorio/bin/x64/factorio` because `python auto.py --factorio=../../bin/x64/factorio` doesn't work; if it `abspath`'d the path first, then that would fix it, I'd imagine.
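Normalizing the argument right after parsing would make relative paths like `../../bin/x64/factorio` work from any working directory. A minimal sketch (the `--factorio` flag mirrors the script's; everything else here is illustrative):

```python
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--factorio", help="path to the factorio executable")
# Simulate passing the relative path from the *.tar.gz layout:
args = parser.parse_args(["--factorio", "../../bin/x64/factorio"])

# Normalize once, immediately after parsing; everything downstream can
# then compare, chdir, or reuse the path without caring how it was given.
factorio_path = os.path.abspath(args.factorio)
print(factorio_path)
```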
Oh boy.
I assume you're using linux, which is why you were getting a whole list of problems, nice that you figured out what it took to get it to work at all :p
The taskkill is only there as a fallback: when Factorio is started through Steam, Steam spawns a separate process and didn't let me control Factorio.
You raise some good points, I'll have a more thorough look at everything when I find some time. :) Thanks for the efforts!
Technically not just Linux, but also Mac and various Windows setups as well (like running with an enforced-standards Python). :-)
Can you expand a bit on where it hangs with the multithreading? The queues are thread safe and I never had it hang.
Can you please explain what you meant with "Doesn't work via a `file://` schema"? I've had no problem like that.
It's likely that you are using some other cjpeg on your system to convert from bitmaps to compressed JPEGs than the one included, since I only included the Windows binaries for mozjpeg. Is that correct?
I've made some commits on the dev branch, that should include fixes for hopefully all your other issues without creating new ones. But beware, I've changed the environment on the dev branch to python 3.7.1.
> Can you expand a bit on where it hangs with the multithreading? The queues are thread safe and I never had it hang.
It hangs on the join calls; I didn't look into the other multiprocess states to see where they themselves were frozen, but they didn't join after almost 2 hours of waiting, whereas when I set the count to 1 it processed and joined in <3 seconds.
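A common cause of exactly this symptom, assuming the script joins its worker processes before draining their result queue: a child that has put data on a `multiprocessing.Queue` blocks at exit until that data has been flushed to the pipe, so `join()` can deadlock. A sketch of the safe ordering (drain first, then join); the worker payload is illustrative:

```python
import multiprocessing as mp

def worker(out_q):
    # Each worker pushes a sizeable result; the queue's feeder thread
    # keeps the child alive until that data is flushed to the pipe.
    out_q.put("x" * 1_000_000)

results = []
if __name__ == "__main__":
    q = mp.Queue()
    procs = [mp.Process(target=worker, args=(q,)) for _ in range(4)]
    for p in procs:
        p.start()
    # Drain the queue BEFORE joining: joining first can deadlock,
    # because each child blocks until its queued data is consumed.
    results = [q.get() for _ in procs]
    for p in procs:
        p.join()
    print(len(results))
```

With only one worker the buffered data often fits in the pipe, which would explain why forcing the count to 1 made it "work".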
> Can you please explain what you meant with "Doesn't work via a file:// schema"? I've had no problem like that.
I tried it last night and the images weren't loading, but tried it again just now and it worked, no clue what's up... >.>
> Its likely that you are using some other cjpeg on your system to convert from bitmaps to compressed jpegs than the one included since I only included the windows binaries for mozjpeg, is that correct?
```
╰─➤ cjpeg -version
libjpeg-turbo version 1.4.2 (build 20180705)
```
A very recent version, as it's updated with the system itself. This is a pretty common program on Linux, so I'd generally expect it to be available, but it would be useful to add it to the README as a requirement: say that `libjpeg-progs` is required for apt-based systems, or otherwise that libjpeg's programs are required to run. Or you could make it optional and test whether it exists by calling `cjpeg -version` and checking for a successful return result (if not, just write out the unoptimized file).
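That optional check could look something like the following sketch; this assumes (as the thread suggests) that libjpeg-turbo/mozjpeg builds of cjpeg exit 0 on `-version`, which may vary across builds:

```python
import shutil
import subprocess

def cjpeg_available():
    """Best-effort check for a usable cjpeg on PATH."""
    if shutil.which("cjpeg") is None:
        return False
    try:
        # libjpeg-turbo/mozjpeg builds print a version banner for
        # `cjpeg -version`; a zero exit code is treated as "usable".
        result = subprocess.run(["cjpeg", "-version"],
                                capture_output=True, timeout=5)
        return result.returncode == 0
    except (OSError, subprocess.TimeoutExpired):
        return False

print(cjpeg_available())
```

If this returns False, the script could fall back to Pillow's `img.save(...)` as suggested below.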
Also, you really, really shouldn't include platform-specific executable binaries in your git repository: it will throw a lot of check failures in some cases, and your repository size is now permanently bloated with non-diffable data.
For 3:
I wouldn't; cjpeg is super common as a pre-installed thing on Linux systems. Just try running the command: if it works then good, if it fails then fall back to just `img.save(...)`.
The included cjpeg is actually mozjpeg's (based on turbojpeg but with better compression), and its cjpeg is supposed to be a drop-in replacement for the common cjpeg. I'm only using this to gain better compression at the cost of more processing time, as this mod is more geared towards actually hosting these maps online. However, I'm currently not 100% happy with the implementation, as there is a lot of overhead (spawning one process for every image). Actually compiling a better version of mozjpeg myself is not an easy task, so I think I'll just leave it like this for now, as img.save basically uses libjpeg-turbo already.
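One process per image is hard to avoid with the cjpeg CLI, but the spawn overhead and CPU load can at least be bounded with a small worker pool. A sketch, assuming cjpeg is on PATH; the quality setting and path names are illustrative:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def compress(bmp_path, jpg_path):
    # Still one cjpeg process per image, but callers go through a
    # bounded pool so only a few run at any one time.
    subprocess.run(
        ["cjpeg", "-quality", "80", "-outfile", jpg_path, bmp_path],
        check=True,
    )

def compress_all(pairs, workers=4):
    # pairs: iterable of (source_bitmap, destination_jpeg) tuples.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(compress, src, dst) for src, dst in pairs]
        for f in futures:
            f.result()  # re-raise any cjpeg failure
```

Threads suffice here because the heavy lifting happens in the external cjpeg processes, not in Python.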
I hope everything is solved now, let me know if there are any other problems :)
Feel free to reply to this if you still have issues related to this.
I'll try to remember to update to the official branch again, I'm still using my modified fork. I had to add some throttling in it as my CPU was overheating a bit on the image work... ^.^;
/me really needs to upgrade...
The most efficient way to reduce heat is to leave the program untouched but select a different power plan :) Also, clean out the device if you haven't done that in a while, especially if it's a laptop :D
Heh, I need to clean out my heatsink is what the issue is, but it's a... larger one than most and compressed air just kind of compacts it in more in the tiny slots among it. Doesn't help that it's 11 years old either... ^.^;
Had a number of issues related to the python scripts and the final generated webpage.
Python script issues
- It appears that the python script used is out of date (I am using Python 2.7.12). I followed the instructions to `pip install Pillow psutil`, and it seems the latest version of `psutil` does not contain a `psutil.BELOW_NORMAL_PRIORITY_CLASS` value, thus the calls to `psutil.Process(os.getpid()).nice(psutil.BELOW_NORMAL_PRIORITY_CLASS or -10)` are failing. I first had to comment all those out, as I didn't really care about fixing it to the current methods since I'm just trying to get it working.
- The path separators appear to have `\\` used, which is invalid in Python programs and will only incidentally/accidentally work on some limited systems. Either a path separator of `/` should be used, which 'works' on all systems Python supports but is still not the recommended way, or using `os.path.join(...)` (which is indeed used in 'most' places) is the proper way to join paths in all cases. 'Fixed' all those via `/` as there were a number of constants around...
- There really should be more information about what the subprocess queues' work status is, as they seemed to hang on quite a number of occasions; even waiting over an hour did not see them complete. I had to force the subprocess count to just `1` to get them to work, at which point they worked properly. Did not look into what was happening since forcing the multiprocess count to 1 fixed it; perhaps a race condition or so.
- Although there is a commandline option of `--factorio=path/to/factorio`, it seems to be defaulting checks to a variety of locations, none of which would just be `../../bin/x64/factorio`, which is the default executable location for the `*.tar.gz` download, thus forcing me to put in its full path every time.
- It appears to try killing Factorio with a `taskkill` command, which is not a valid program command outside Windows; it should instead send an appropriate signal to the process, which will work everywhere that Factorio runs.
- The `Popen` command has invalid argument types passed to it; it is not a shell-style command, rather it needs to be passed a list where the first element is the program and each additional element is a command line argument (sans quotes, as this is, again, not a shell expansion).
- The `subprocess.call` calls are being used with shell expansion, but shell expansion mode is not enabled via `shell=True`.

A Quick and Dirty patch file to fix the above issues is:
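This is not the actual patch, but a minimal illustrative sketch of the cross-platform fixes described above: a plain POSIX nice value instead of the Windows-only priority class, a signal instead of `taskkill`, and `Popen` with an argument list. All names here are illustrative:

```python
import os
import subprocess
import sys

def lower_priority():
    # psutil.BELOW_NORMAL_PRIORITY_CLASS only exists on Windows builds
    # of psutil; on POSIX a plain niceness increment does the same job.
    if sys.platform == "win32":
        import psutil
        psutil.Process(os.getpid()).nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)
    else:
        os.nice(10)  # higher niceness = lower scheduling priority

def stop(proc):
    # Instead of the Windows-only `taskkill`, ask the process to exit:
    # Popen.terminate() sends SIGTERM on POSIX and calls
    # TerminateProcess on Windows.
    proc.terminate()
    proc.wait()

# Popen wants an argument list, not a shell string (no shell quoting):
#   subprocess.Popen([factorio_path, "--start-server", save_path])
```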
Webpage issues
- The index.html has a weird `<script src="/cdn-cgi/apps/head/ThhfSzyp6GI_EE-HfsckZnD6ThE.js">` tag at the top, which is a usual Cloudflare tag that won't work anywhere else and is causing errors in the generated page.
- Doesn't work via a `file://` schema but works via `http://`/`https://`; this appears to be because of some improper scripting in some third-party library.

Notes
Once I worked around these issues it seems to work quite well; fantastic idea. :-)