vickariofillis opened this issue 1 year ago
Hi, the revent recording mechanism was designed to run game workloads (e.g. templerun, angrybirds), as it is not possible to automate them with UIAutomator. The benchmark workloads, e.g. gfxbench, are implemented with UIAutomator as it allows for more reliable and repeatable runs.
Are you able to use the existing configuration of the workloads to execute the required tests rather than relying on revent?
I was looking into having more flexibility in terms of which sub-benchmarks are executed. For example, Geekbench supports only the CPU option, and GFXBench runs only a subset of all sub-benchmarks by default. Implementing the selection of Compute as well is not as straightforward as I expected*, so I was trying to find a solution via the record-replay route.
Any suggestions for broadening the options of selecting sub-benchmarks? Is there a directory/website with other people's implementations (e.g., Geekbench with Compute)?
Hmm... I see. The main problem with using revent for this is that you will not have a way to extract the scores from the workload results: revent can only play back inputs, and cannot react to screen elements, wait for the tests to complete, detect results, etc.
What errors do you face when trying to use uiautomator? Are you having problems with just the benchmark or obtaining any captures from your device?
I don't care about recording the benchmark-specific scores so that's not an issue. I'm interested in only running the benchmarks.
Regarding UIAutomatorViewer: this is the error I get while trying to capture a device screenshot (UI XML snapshot).
Unexpected error while obtaining UI hierarchy
java.lang.reflect.InvocationTargetException
> Are you having problems with just the benchmark or obtaining any captures from your device?
What exactly do you mean by that? The UIAutomator scripts run fine.
In general, I'm interested in using a profiler tool (Snapdragon Profiler or ARM Streamline) to collect device data while automating the running of various benchmarks (and sub-benchmarks).
You can change what GFXBench subtests are running here: https://github.com/ARM-software/workload-automation/blob/master/wa/workloads/gfxbench/__init__.py#L41
I found that UiAutomatorViewer stopped working when my Android Tools were older than the device. Have you tried updating your tools?
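For reference, overriding the GFXBench subtest list from an agenda might look roughly like this. This is a sketch only: the `tests` parameter name and the subtest strings are assumptions based on the linked `__init__.py`, so check that file in your WA version for the exact names.

```yaml
# Sketch of a WA agenda selecting specific GFXBench subtests.
# The parameter name ("tests") and the subtest strings are assumptions --
# verify them against wa/workloads/gfxbench/__init__.py in your WA version.
workloads:
  - name: gfxbench
    params:
      tests:
        - Car Chase
        - Manhattan
```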
> You can change what GFXBench subtests are running here: https://github.com/ARM-software/workload-automation/blob/master/wa/workloads/gfxbench/__init__.py#L41
Thanks for the heads up.
> I found that the UiAutomatorViewer stopped working if my Android Tools were outdated by the device. Have you tried updating your tools?
Yeah. Unfortunately, I still get an error.
> Yeah. Unfortunately, I still get an error.
I would be tempted to suggest doing a clean install of the latest available android tools and SDK just to check that an older version is not being picked up somehow and still causing the error.
Or, worst-case scenario, you could always take a UI dump on the device manually and open the resulting files in UIAutomatorViewer (screencap -p > /sdcard/ui_dump.png && uiautomator dump).
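Run from a host over adb, the manual dump suggested above might look like this (the file paths are illustrative, and a device must be attached):

```shell
# Capture a screenshot and a UI hierarchy dump on the device,
# then pull both to the host so they can be opened in UIAutomatorViewer.
adb shell "screencap -p /sdcard/ui_dump.png"
adb shell "uiautomator dump /sdcard/ui_dump.xml"
adb pull /sdcard/ui_dump.png .
adb pull /sdcard/ui_dump.xml .
```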
> I don't care about recording the benchmark-specific scores so that's not an issue. I'm interested in only running the benchmarks.
I would suggest trying to get the UIAutomator approach to work first. However, as a workaround, one option would be to create a new revent version of the workload, which would allow the use of the record commands. You would still encounter the limitations of revent, i.e. only being able to measure for a fixed amount of time rather than detecting when a test has completed, as the timings of the benchmark / revent playback may not be consistent across runs.
To create a new revent workload, we provide the following command to help with this:
wa create workload -k apkrevent gfxbench_revent
This will generate a workload file for you which can then be modified to add the appropriate package name (which can be taken from the existing workload) and parameters (if required). This will allow WA to auto launch the application and you to use the record command as initially attempted.
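For illustration, the generated file, once edited, might look roughly like the sketch below. This assumes the `apkrevent` template produces a subclass of `ApkReventWorkload`; the package name shown is a placeholder, so copy the real value from the existing gfxbench workload's `package_names`.

```python
# Sketch of the workload file produced by `wa create workload -k apkrevent`,
# edited to point at GFXBench. The class name and base class follow the
# apkrevent template (assumed); the package name below is a placeholder --
# take the real value from the existing gfxbench workload.
from wa import ApkReventWorkload


class GfxbenchRevent(ApkReventWorkload):

    name = 'gfxbench_revent'
    description = 'Revent-based GFXBench workload, driven by recorded input.'

    package_names = ['com.example.gfxbench']  # placeholder package name
```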
I tried using the record functionality but I am getting the following error. I get the same error regardless of the benchmark.
This happens after I press enter to finish recording the teardown stage. I am using the following command to run it.
wa record -a -w gfxbench
I am using the latest version: wlauto-3.4.0.dev1+bf72a576-py3.10.egg