Hey @ali-oem!
That's odd, but one thing is evident: you forgot to call `bar()` inside your loop. Fix that before anything else.
And please, try to simulate it with a minimal code example, like my example in the README. Does it work for you?
Hello? Did you have the chance to test it again? That's very odd; if you're on Linux, everything should work flawlessly! Please report back.
I may have misunderstood the concept of the package, but I have four functions that should execute one after the other, and after each one completes, 25% should be shown. Is that possible with this package, or does it only work with loops?
Yes, it is surely possible. It's even in the README; I'll cite it here:
It'll work for sure! Here is an example, albeit a naive approach:
```python
from alive_progress import alive_bar

with alive_bar(4) as bar:
    corpus = read_file(file)
    bar()  # file was read, tokenizing
    tokens = tokenize(corpus)
    bar()  # tokens generated, processing
    data = process(tokens)
    bar()  # process finished, sending response
    resp = send(data)
    bar()  # we're done! four bar calls with `total=4`
```
It's naive because it considers all steps to be equal, but actually each one may take a very different time to complete. Think of the `read_file` and `tokenize` steps being extremely fast, making the percentage skyrocket to 50%, then stopping for a long time in the `process` step. You get the point: it can ruin the user experience and create a very misleading ETA.

What you need to do is distribute the steps accordingly! Since you told `alive_bar` there were four steps, when the first one completed it understood that 1/4 or 25% of the whole processing was complete, which as we've seen may not be the case. Thus, you need to measure how long your steps actually take, and use the manual mode to increase the bar percentage by a different amount at each step!

You can use my other open source project about-time to easily measure these durations! Just try to simulate with some representative inputs to get better results. Something like:
```python
from about_time import about_time

with about_time() as t_total:      # this about_time will measure the whole time of the block.
    with about_time() as t1:       # the other four will get the relative timings within the whole.
        corpus = read_file(file)   # `about_time` supports several calling conventions, including one-liners.
    with about_time() as t2:       # see its documentation for more details.
        tokens = tokenize(corpus)
    with about_time() as t3:
        data = process(tokens)
    with about_time() as t4:
        resp = send(data)

print(f'percentage1 = {t1.duration / t_total.duration}')
print(f'percentage2 = {t2.duration / t_total.duration}')
print(f'percentage3 = {t3.duration / t_total.duration}')
print(f'percentage4 = {t4.duration / t_total.duration}')
```
There you go! Now you know the relative timings of all the steps, and can use them to improve your original code! Just compute the cumulative timings and put them in a manual-mode `alive_bar`!

For example, if the timings you found were 10%, 30%, 20% and 40%, you'd use 0.1, 0.4, 0.6 and 1. (the last one should always be 1.):
```python
with alive_bar(4, manual=True) as bar:
    corpus = read_big_file()
    bar(0.1)  # 10%
    tokens = tokenize(corpus)
    bar(0.4)  # 30% + 10% from previous steps
    data = process(tokens)
    bar(0.6)  # 20% + 40% from previous steps
    resp = send(data)
    bar(1.)   # always 1. in the last step
```
That's it! Your user experience and ETA should be greatly improved now.
I think I have put `bar()` after each function, so I don't know where the repetition comes from.
Yes, that seems alright.
But in your first image, all lines show `0/4`, which means the counter wasn't going up.
Can you make a video of it?
I really do appreciate your patience, thank you!
Ahhh, ok, all those outputs were really within the `0/4`, no problem then.
In the video we can see your processing gets to `2/4`, great.
The weird problem then is: why would a Linux box not support ANSI Escape Codes? That does not make sense.
Is this the built-in terminal? Which terminal is that?
A tip: you could improve your display. Since you seem to have several "internal" steps within each step, like Converting, Parsing content, Transforming, Merging, Flattening, etc., you could pass the `bar` object to those methods and call `bar()` there after each sub-step. With more granular steps, you would give more feedback to the user, with a much nicer effect. Something like the sketch below.
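A minimal, self-contained sketch of that idea (the functions and sub-steps here are placeholders I made up, not your actual code): the `bar` callable is passed down so each sub-step can advance the count.

```python
import time
from alive_progress import alive_bar

def convert(data, bar):
    time.sleep(.5)   # pretend: parsing content
    bar()            # sub-step done
    time.sleep(.5)   # pretend: transforming
    bar()            # sub-step done
    return data

def merge_and_flatten(data, bar):
    time.sleep(.5)   # pretend: merging
    bar()            # sub-step done
    time.sleep(.5)   # pretend: flattening
    bar()            # sub-step done
    return data

# total = the number of bar() calls across all sub-steps (4 here)
with alive_bar(4) as bar:
    data = convert("raw", bar)
    result = merge_and_flatten(data, bar)
```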
Humm, it looks like this terminal doesn't support ANSI Escape Codes...
Please enter a `python` session and try this:

```python
print('aaaaaaaaaa\rbbbbb\x1b[K')
```

What does it print?
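Just to spell out what that one-liner tests (a note of mine, not part of the original exchange): `\r` returns the cursor to the start of the line and `\x1b[K` is the ANSI "erase in line" sequence, so a terminal that honors ANSI escape codes should end up showing only the overwriting text.

```python
print('aaaaaaaaaa\rbbbbb\x1b[K')
# ten 'a's are printed, \r moves the cursor back to column 0,
# 'bbbbb' overwrites the first five 'a's, and \x1b[K erases the remaining five,
# so an ANSI-capable terminal shows just: bbbbb
```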
Ok, so your terminal does support ANSI escape codes. And this one?

```python
import sys
sys.stdout.isatty()
```
The output was `True`.
That's... really confusing.
I'm lost. It doesn't make sense. So, your terminal does support ANSI Escape Codes and it is interactive, although it doesn't clear the lines at all... 🤔
Could you try a simpler code? Like this one, does it work?
```python
from alive_progress import alive_bar
import time

for x in 1000, 1500, 700, 0:
    with alive_bar(x) as bar:
        for i in range(1000):
            time.sleep(.005)
            bar()
```
Please also try the new demo mode of 2.0:
❯ python -m alive_progress.tools.demo
Hummm, I think I know what is happening....
The subprocesses you're using must be outputting to stdout, right?
I don't know whether they are other Python processes or not, but it actually doesn't matter: if stdout is accessed from another process, that one obviously won't have the `alive_bar` print hook. And even if it prints to stderr, the same would happen.
Well, so for this to work, you should receive the other processes' output and print it from the main process.
I've simulated three scenarios where another process prints to stdout and stderr, directly or with a leading `\r` or `\n`:
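The screenshots of those simulations aren't reproduced here, but a rough, hypothetical sketch of that kind of scenario could look like this: a child process writes straight to the inherited stdout, bypassing the `alive_bar` print hook in the main process.

```python
import subprocess
import sys
import time
from alive_progress import alive_bar

with alive_bar(100) as bar:
    for i in range(100):
        time.sleep(.05)
        if i == 50:
            # the child inherits the parent's stdout and writes to it directly,
            # so its output is NOT routed through the alive_bar print hook
            # and can land on top of the bar (especially with a leading \r).
            subprocess.run([sys.executable, '-c', r"print('\rhello from another process')"])
        bar()
```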
Since I maintain in `alive_progress` a stable cursor position at the end (i.e. I park the cursor at the end of the line, and print a `\r` only when a refresh is about to occur), a direct print from another process correctly appends to the end of the line.
Only by forcefully printing a `\r` could I print over the bar, like in your case...
Well @ali-oem, it seems this is not related to `alive-progress` at all, so I'm going to close it.
Like I explained above, other processes should not use stdout concurrently with the main process, which has `alive-progress` and its stdout hook.
There are a lot of ways to make this work; it will depend on your other processes. You could capture their outputs and only print them when they finish. That is the easiest one, where you don't even need to modify any other code, as in the sketch below.
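A minimal sketch of that easiest approach, assuming the other work is launched with `subprocess` (the commands here are just stand-ins): capture each child's output instead of letting it hit the shared stdout, then print it from the main process, where the `alive_bar` print hook can place it above the bar.

```python
import subprocess
import sys
from alive_progress import alive_bar

commands = [
    [sys.executable, '-c', "print('step one done')"],   # stand-ins for your real subprocesses
    [sys.executable, '-c', "print('step two done')"],
]

with alive_bar(len(commands)) as bar:
    for cmd in commands:
        # capture_output=True keeps the child away from the shared stdout...
        result = subprocess.run(cmd, capture_output=True, text=True)
        # ...and printing from the main process goes through the alive_bar hook,
        # so it doesn't clobber the bar.
        print(result.stdout, end='')
        bar()
```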
This seems to be a good search to give you ideas: https://www.google.com/search?q=python+multiprocess+synchronize+stdout
Good luck!
I may have messed something up, but this is the output I get from my script (Ubuntu 20.04)