Hi arfghie,
I understand the nature of your request and I'm wondering if there's perhaps a way of making interfacing with TPC more elegant than parsing the output.
It seems to me you're working on code that uses TPC to do stuff (a front-end of sorts, perhaps?). If you can, could you elaborate a bit more on what your code does and what language you are using?
I'm wondering whether something like...
(1) isolating presentation from "mechanics" and creating, say, libtpc (a TPC library), or
(2) switching to a sort of client-server model
would be the way to go here...
Original comment by kszy...@gmail.com
on 18 Jul 2014 at 8:43
I've got an idea on how to implement the single-snapshot feature without introducing a new command-line switch. This should get you going for the time being.
I'll let you know when I've got something to test.
Original comment by kszy...@gmail.com
on 18 Nov 2014 at 4:58
Yes, I have also thought about a special switch that reports several things at once. For example:
temperature
core usage
P-states in use on each core
But I am unsure about which is the best way.
Original comment by arfg...@gmail.com
on 19 Nov 2014 at 2:26
Perf-cpuusage is probably a tad more complex (as it requires more than one data sample), but for the time being try r205.debug10 with something like -temp -CM. When you feed it to a pipe or a file, it should report the temperature followed by a single snapshot of the current P-states.
Original comment by kszy...@gmail.com
on 19 Nov 2014 at 5:24
OK, I will try it, but anyway -cpuusage would be very useful in that report.
For some reason, the last clean debug build doesn't work the same as the last release version when using -CM. My process settings are the following:
.dwFlags = STARTF_USESHOWWINDOW Or STARTF_USESTDHANDLES
.wShowWindow = SW_HIDE
These are the same settings I want for --perf-cpuusage, which I am unable to obtain without 'rtconsole.exe'.
Is it possible that it doesn't work the same because it is a debug version?
The problem is that the process runs out of memory.
Original comment by arfg...@gmail.com
on 19 Nov 2014 at 6:53
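For context, a minimal C sketch of the setup being described: launching a console program with a hidden window and its stdout redirected into an anonymous pipe, using the same STARTUPINFO flags quoted above. The command line "tpc.exe -CM" and the buffer size are illustrative, not prescriptive.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Inheritable-handle security attributes so the child can use the pipe. */
        SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };
        HANDLE readEnd, writeEnd;
        if (!CreatePipe(&readEnd, &writeEnd, &sa, 0))
            return 1;
        /* The parent keeps the read end; the child must not inherit it. */
        SetHandleInformation(readEnd, HANDLE_FLAG_INHERIT, 0);

        STARTUPINFOA si = { 0 };
        si.cb = sizeof(si);
        si.dwFlags = STARTF_USESHOWWINDOW | STARTF_USESTDHANDLES;
        si.wShowWindow = SW_HIDE;        /* no visible console window */
        si.hStdOutput = writeEnd;        /* child's stdout goes into the pipe */
        si.hStdError = writeEnd;
        si.hStdInput = GetStdHandle(STD_INPUT_HANDLE);

        PROCESS_INFORMATION pi;
        char cmd[] = "tpc.exe -CM";      /* illustrative; CreateProcessA may modify it */
        if (!CreateProcessA(NULL, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi))
            return 1;
        /* Close the parent's copy of the write end, or ReadFile will never
         * report end-of-file after the child exits. */
        CloseHandle(writeEnd);

        char buf[4096];
        DWORD got;
        while (ReadFile(readEnd, buf, sizeof(buf) - 1, &got, NULL) && got > 0) {
            buf[got] = '\0';
            fputs(buf, stdout);          /* consume TPC's output as it arrives */
        }
        CloseHandle(readEnd);
        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
        return 0;
    }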
My build environment hasn't changed for several months now, so I suspect it's related to something else.
I have a few questions for you:
1. Is it TPC or your program that runs out of memory?
2. Does it happen with the -perf options or with other options as well (if so, which ones)?
3. How much time does it take to run out of memory?
4. Do you explicitly shut TPC down (via TerminateProcess or something similar) after you're done using the input?
Original comment by kszy...@gmail.com
on 21 Nov 2014 at 9:52
[deleted comment]
With the new r221 the problem doesn't happen. It was surely something related to the debug version and my process execution flags.
Now TPC stays in memory and returns the P-state usage data for each core.
But I have tested running TPC r221 with the two switches at once:
tpc -temp -CM
At least when using that from the command line, TPC goes into -CM mode and doesn't show the temperature data.
Original comment by arfg...@gmail.com
on 21 Nov 2014 at 10:39
What about the questions from comment #6 in the context of --perf-*?
Yes, r221 will not give you a single snapshot of -CM (only debug10 has this feature, as the code isn't yet inclusion-ready).
Original comment by kszy...@gmail.com
on 21 Nov 2014 at 10:54
,,Also, with the 221 version I have no problems obtaining the P-states in use for each core, because now the program stays in memory and doesn't exit. I don't know what happened with the last debug version...
But that is the fact I can't understand: how is it possible that I can run TPC r221 with a hidden console and obtain its data while TPC is running and giving P-state usage data with -CM, and why can I not do the same when TPC is running with the --perf-cpuusage switch? Like I have said, I need to use the 'rtconsole.exe' program in order to retrieve data from TPC while it is running with that --perf-cpuusage switch. It should be the same, shouldn't it?''
I do not know how to explain the difference between debug10 and r221; there were zero changes in the -perf-* area.
And yes, I understand your comment about the differences between the output of -CM and --perf-*. I'll take a look at the code and let you know if anything stands out.
I'll also add one more question:
5. When you run TPC from your application, how many TPC instances are you running at the same time? Is it always a single instance, or perhaps more than one (say, one for -CM and one for --perf-cpuusage)?
Original comment by kszy...@gmail.com
on 21 Nov 2014 at 11:01
Several instances. But... let me explain:
- There is a TPC instance that is always running to retrieve core usage, the damned --perf-cpuusage :)
- If an adaptive power mode is selected, another TPC instance is required to retrieve the P-state of each core: -CM
- Every x seconds: -temp
- Every x seconds: -htc states
And in the near future I will probably require more, so who knows.
But listen, everything works fine. The only problem at the moment is --perf-cpuusage, which doesn't allow me to do the same thing I do with -CM. For that reason there is the extra 'rtconsole.exe' process.
Thanks for your attention.
Original comment by arfg...@gmail.com
on 22 Nov 2014 at 12:15
Ok, let's focus on -perf-cpuusage with r221 for now.
What is the exact symptom that you get? Is it an out-of-memory condition?
Is it a lack of data? Does it happen right away or only after some time?
The more details, the better :)
Original comment by kszy...@gmail.com
on 22 Nov 2014 at 2:07
No no, the problem is that --perf-cpuusage doesn't have the same behavior that -CM has, and I need it to. That is, I need to be able to get data from it while TPC is running in the background.
Original comment by arfg...@gmail.com
on 22 Nov 2014 at 3:32
Ok, I understand that. But what is the exact problem?
Do you get no data in the pipe after you launch it?
Original comment by kszy...@gmail.com
on 22 Nov 2014 at 12:18
Exactly. For reasons I don't know, TPC allows getting data from the pipe while it is running with -CM, but not with --perf-cpuusage. But note an important detail: I am running the TPC process with its window hidden, for obvious reasons.
Anyway, what about a special switch such as -monitor that outputs
-CM
-perf-cpuusage
-temp
-htc data
all at once? Then, with a single instance of TPC, all the data could be obtained from it. It is an idea; I don't know if it is possible to implement.
Thanks in advance.
PS: Related to -temp, why do all the cores give the same temperature value?
Original comment by arfg...@gmail.com
on 22 Nov 2014 at 2:07
Ok, I can see why this is happening.
It's because data in the pipe aren't line-buffered, which makes your code see data from TPC only once the pipe is filled to some system-determined level (like 50%). In other words, you get aggregated, non-realtime data in chunks (not something you want).
I'd bet that if you give it enough time, you will get a big block of data from the pipe. My test program (attached) returns data after about 45 seconds with the default pipe size on a 12-core machine and Windows 7.
The reason you don't see the issue with -CM is that -CM explicitly flushes the stdout stream after printing the current P-state information for all cores, and once again after printing the statistics.
The solution to your problem could be adding fflush(stdout) to the -perf-cpuusage loop, but I'll explore other options and let you know.
Original comment by kszy...@gmail.com
on 22 Nov 2014 at 3:09
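To illustrate the fix being described, a minimal sketch (not TPC's actual source) of a sampling loop that flushes stdout after every printed line. The print_cpu_usage_sample() helper and the 1-second interval are assumptions modeled on the output shown later in this thread.

    #include <stdio.h>
    #include <windows.h>

    /* Hypothetical stand-in for whatever -perf-cpuusage prints per sample. */
    static void print_cpu_usage_sample(void)
    {
        printf("Node 0 - c0:4%% c1:6%% c2:4%% c3:3%% ...\n");
    }

    int main(void)
    {
        for (;;) {
            print_cpu_usage_sample();
            /* Without this, a pipe reader sees nothing until the pipe's
             * internal buffer fills up; with it, each line arrives at once. */
            fflush(stdout);
            Sleep(1000);  /* one sample per second */
        }
        return 0;
    }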
You see the same temperature on all cores because there's only one temperature sensor in the CPU die.
Original comment by kszy...@gmail.com
on 22 Nov 2014 at 3:11
Yes, just adding fflush(stdout) to -perf-cpuusage should allow me to remove the background 'rtconsole.exe' process :)
Original comment by arfg...@gmail.com
on 22 Nov 2014 at 3:12
OK, I tried flushing the pipe, and after a minute or a bit less I obtained the following:
TurionPowerControl 0.44-rc2+ (trunk-r221) Windows 64-bit
Turion Power States Optimization and Control - by blackshard
Performance counter will use slot #0
Values >100% can be expected if the CPU is in a Boosted State
Node 0 - c0:5% c1:5% c2:5% c3:16% c4:5% c5:5% c6:6% c7:7%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:3% c5:7% c6:6% c7:8%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:2% c5:7% c6:5% c7:8%
Node 0 - c0:4% c1:6% c2:3% c3:3% c4:3% c5:6% c6:7% c7:8%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:3% c5:8% c6:4% c7:8%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:2% c5:4% c6:9% c7:8%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:2% c5:9% c6:3% c7:7%
Node 0 - c0:4% c1:6% c2:3% c3:3% c4:7% c5:5% c6:3% c7:7%
Node 0 - c0:4% c1:6% c2:3% c3:2% c4:9% c5:3% c6:3% c7:7%
Node 0 - c0:4% c1:6% c2:4% c3:2% c4:9% c5:3% c6:4% c7:6%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:9% c5:4% c6:4% c7:7%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:3% c5:7% c6:6% c7:8%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:3% c5:6% c6:7% c7:7%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:3% c5:7% c6:5% c7:7%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:3% c5:7% c6:6% c7:8%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:2% c5:7% c6:6% c7:7%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:7% c5:5% c6:3% c7:7%
Node 0 - c0:4% c1:6% c2:4% c3:2% c4:9% c5:3% c6:3% c7:6%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:7% c5:3% c6:4% c7:7%
Node 0 - c0:4% c1:6% c2:3% c3:3% c4:2% c5:8% c6:4% c7:7%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:2% c5:8% c6:5% c7:8%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:2% c5:7% c6:6% c7:7%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:2% c5:9% c6:3% c7:7%
Node 0 - c0:4% c1:6% c2:3% c3:3% c4:2% c5:6% c6:7% c7:6%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:2% c5:9% c6:3% c7:7%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:4% c5:7% c6:4% c7:7%
Node 0 - c0:4% c1:6% c2:4% c3:2% c4:9% c5:3% c6:3% c7:7%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:3% c5:4% c6:8% c7:7%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:2% c5:8% c6:5% c7:7%
Node 0 - c0:4% c1:6% c2:3% c3:3% c4:2% c5:6% c6:6% c7:7%
Node 0 - c0:4% c1:8% c2:5% c3:4% c4:4% c5:6% c6:7% c7:9%
Node 0 - c0:4% c1:6% c2:4% c3:4% c4:2% c5:4% c6:7% c7:6%
Node 0 - c0:4% c1:6% c2:4% c3:4% c4:3% c5:3% c6:8% c7:7%
Node 0 - c0:4% c1:6% c2:3% c3:3% c4:3% c5:6% c6:6% c7:6%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:5% c5:7% c6:3% c7:8%
Node 0 - c0:4% c1:6% c2:3% c3:3% c4:9% c5:3% c6:4% c7:6%
Node 0 - c0:4% c1:7% c2:4% c3:2% c4:9% c5:3% c6:3% c7:7%
Node 0 - c0:4% c1:6% c2:3% c3:2% c4:9% c5:3% c6:3% c7:7%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:7% c5:4% c6:4% c7:7%
Node 0 - c0:4% c1:7% c2:3% c3:3% c4:3% c5:5% c6:8% c7:8%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:2% c5:5% c6:6% c7:9%
Node 0 - c0:4% c1:7% c2:3% c3:3% c4:2% c5:6% c6:7% c7:8%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:2% c5:6% c6:6% c7:9%
Node 0 - c0:4% c1:6% c2:3% c3:3% c4:2% c5:9% c6:4% c7:8%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:2% c5:10% c6:4% c7:8%
Node 0 - c0:4% c1:7% c2:3% c3:3% c4:5% c5:6% c6:4% c7:8%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:2% c5:7% c6:5% c7:9%
Node 0 - c0:4% c1:6% c2:3% c3:3% c4:2% c5:8% c6:5% c7:8%
Node 0 - c0:5% c1:8% c2:5% c3:5% c4:4% c5:9% c6:7% c7:9%
Node 0 - c0:5% c1:8% c2:4% c3:4% c4:8% c5:5% c6:6% c7:8%
Node 0 - c0:4% c1:7% c2:5% c3:4% c4:5% c5:5% c6:9% c7:9%
Node 0 - c0:4% c1:7% c2:3% c3:3% c4:2% c5:5% c6:6% c7:8%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:2% c5:3% c6:10% c7:9%
Node 0 - c0:4% c1:7% c2:3% c3:3% c4:2% c5:3% c6:9% c7:9%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:3% c5:8% c6:5% c7:9%
Node 0 - c0:4% c1:7% c2:3% c3:3% c4:2% c5:9% c6:4% c7:9%
Node 0 - c0:4% c1:7% c2:3% c3:3% c4:2% c5:7% c6:5% c7:9%
Node 0 - c0:4% c1:7% c2:3% c3:3% c4:2% c5:7% c6:5% c7:10%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:3% c5:9% c6:3% c7:9%
Node 0 - c0:4% c1:7% c2:3% c3:2% c4:9% c5:3% c6:3% c7:8%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:6% c5:3% c6:6% c7:8%
Node 0 - c0:4% c1:6% c2:3% c3:3% c4:2% c5:9% c6:3% c7:7%
Node 0 - c0:4% c1:6% c2:4% c3:3% c4:2% c5:9% c6:3% c7:8%
Node 0 - c0:4% c1:7% c2:6% c3:3% c4:2% c5:4% c6:6% c7:8%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:3% c5:4% c6:10% c7:7%
Node 0 - c0:4% c1:6% c2:3% c3:3% c4:2% c5:9% c6:3% c7:7%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:2% c5:5% c6:8% c7:8%
Node 0 - c0:4% c1:7% c2:4% c3:3% c4:2% c5:9% c6:3% c7
Note especially the last line, which isn't complete. And why does it take such a long time before it outputs data? I flushed the pipe every 2 seconds, but the program didn't give any data until nearly a minute had passed.
Original comment by arfg...@gmail.com
on 22 Nov 2014 at 3:53
Flushing needs to be done in TPC, not in your app.
The fact that you eventually got the data confirms the theory from comment #16.
Please stand by for a new build :-)
Original comment by kszy...@gmail.com
on 22 Nov 2014 at 5:17
Please try r223: http://darkswarm.org/tpc/testing/r223/
Original comment by kszy...@gmail.com
on 22 Nov 2014 at 5:47
Yes!!! Now it works :)
Just a question related to it: TPC outputs several lines; I guess the last one is the most recent?
Thanks in advance.
PS: It is just a pleasure to run it without 'rtconsole.exe'.
Original comment by arfg...@gmail.com
on 22 Nov 2014 at 9:56
Data are written to the pipe in chronological order at ~1s intervals, so yes, the last one is the most recent.
Depending on how fast you consume the data, you may or may not get more than one line of input at a time.
Thanks for testing :)
Original comment by kszy...@gmail.com
on 22 Nov 2014 at 10:25
Now, with the current changes and fixes, I think a new release is possible, with all the versions: x86, x64, Linux...
Original comment by arfg...@gmail.com
on 22 Nov 2014 at 10:26
Also, if you're not doing it already, you could consider buffering the pipe handle in your application using a stdio FILE*. It would make working with the stream much easier (you could use fgets/fscanf and other stdio functions).
From http://stackoverflow.com/questions/1176580/custom-file-type-in-c-c :
,,If you're using Windows and you have a HANDLE, then you can use _open_osfhandle to associate a file descriptor with it, and then use _fdopen from there.''
Original comment by kszy...@gmail.com
on 22 Nov 2014 at 10:30
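A minimal sketch of that suggestion, assuming readEnd is the read end of the pipe attached to TPC's stdout (as in the CreateProcess sketch earlier in this thread); once the handle is wrapped, each fgets() call returns one complete line.

    #include <windows.h>
    #include <io.h>      /* _open_osfhandle, _close */
    #include <fcntl.h>   /* _O_RDONLY, _O_TEXT */
    #include <stdio.h>

    void read_tpc_lines(HANDLE readEnd)
    {
        /* Associate a C run-time file descriptor with the Win32 handle... */
        int fd = _open_osfhandle((intptr_t)readEnd, _O_RDONLY | _O_TEXT);
        if (fd == -1)
            return;
        /* ...and wrap it in a buffered FILE* stream. */
        FILE *stream = _fdopen(fd, "r");
        if (stream == NULL) {
            _close(fd);
            return;
        }
        char line[256];
        while (fgets(line, sizeof(line), stream) != NULL) {
            /* One "Node 0 - ..." sample per iteration. */
            printf("got: %s", line);
        }
        fclose(stream);  /* also closes fd and the underlying HANDLE */
    }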
Well, I am not sure I would obtain a clear benefit from buffering the pipe handle like you are suggesting, but anyway, I have to give big thanks for attending to my feature request and fixing the problem that made it impossible to obtain data from TPC with --perf-cpuusage.
I am making changes in my code in order to take into consideration the demoted boost states and see if I can use them like regular P-states, etc.
Thanks so much.
Original comment by arfg...@gmail.com
on 23 Nov 2014 at 3:23
Original issue reported on code.google.com by
arfg...@gmail.com
on 11 Jul 2014 at 1:00