calpoly-csai / swanton

Swanton Pacific Ranch chatbot with a knowledge graph
MIT License

Text-to-Speech Solution Benchmarking #10

Closed chidiewenike closed 4 years ago

chidiewenike commented 4 years ago

Objective

Generate metrics for the most viable offline/local text-to-speech solution and graph the results.

Key Result

Separate graphs for both memory usage and run time per library.

Details

Run the TTS on the answer for each QA pair in the given document and record the runtime/memory metrics. Do 3 runs per string and also compute the average of the three runs per string. Check out the header example below. You could write the results out to a CSV and then graph them in Excel. @chidiewenike can help you with the graphing step if needed.

Additional context

Example header per library (each column is delimited with `|`):
`String | (Library)-Runtime#1 | (Library)-Runtime#2 | (Library)-Runtime#3 | (Library)-Runtime Avg | (Library)-Memory Usage#1 | (Library)-Memory Usage#2 | (Library)-Memory Usage#3 | (Library)-Memory Usage Avg`
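Below is a minimal sketch of what such a benchmarking run could look like, assuming Python, a TTS library such as pyttsx3, and the memory-profiler package; the answer string, file name, and column names are placeholders, not part of the actual task spec.

```python
import csv
import time

from memory_profiler import memory_usage  # pip install memory-profiler
import pyttsx3  # assumption: one of the offline TTS libraries under test


def synthesize(text):
    """Run one TTS synthesis of `text`."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()


def benchmark(text, runs=3):
    """Return (runtimes, memory peaks) for `runs` synthesis calls on `text`."""
    runtimes, mem_peaks = [], []
    for _ in range(runs):
        start = time.perf_counter()
        # memory_usage runs synthesize(text) and samples process memory (in MB)
        samples = memory_usage((synthesize, (text,)), interval=0.1)
        runtimes.append(time.perf_counter() - start)
        mem_peaks.append(max(samples))
    return runtimes, mem_peaks


if __name__ == "__main__":
    answers = ["Swanton Pacific Ranch is run by Cal Poly."]  # stand-in for the QA answers
    with open("pyttsx3_benchmarks.csv", "w", newline="") as f:
        writer = csv.writer(f, delimiter="|")
        writer.writerow(
            ["String"]
            + [f"Runtime#{i}" for i in (1, 2, 3)] + ["Runtime Avg"]
            + [f"Memory Usage#{i}" for i in (1, 2, 3)] + ["Memory Usage Avg"]
        )
        for text in answers:
            rt, mem = benchmark(text)
            writer.writerow([text, *rt, sum(rt) / len(rt), *mem, sum(mem) / len(mem)])
```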

chidiewenike commented 4 years ago

@gwholland3 If you would like to plot the data using matplotlib, that is a viable solution as well. I figured Excel would be the easiest, but matplotlib is widely used, so it would be useful to try it out.
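A rough sketch of plotting the averaged columns with matplotlib, assuming a pipe-delimited CSV like the one in the sketch above (the file name and column names are placeholders):

```python
import csv

import matplotlib.pyplot as plt

# Read the pipe-delimited benchmark CSV (file/column names are placeholders).
strings, runtime_avgs, memory_avgs = [], [], []
with open("pyttsx3_benchmarks.csv", newline="") as f:
    for row in csv.DictReader(f, delimiter="|"):
        strings.append(row["String"])
        runtime_avgs.append(float(row["Runtime Avg"]))
        memory_avgs.append(float(row["Memory Usage Avg"]))

# One figure per metric, matching the key result (separate graphs).
for values, label in [(runtime_avgs, "Average runtime (s)"),
                      (memory_avgs, "Average peak memory (MB)")]:
    plt.figure()
    plt.bar(range(len(strings)), values)
    plt.xticks(range(len(strings)), strings, rotation=45, ha="right")
    plt.ylabel(label)
    plt.tight_layout()
plt.show()
```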

gwholland3 commented 4 years ago

Ok, I'll look into matplotlib then

Jason-Ku commented 4 years ago

@gwholland3 Any updates on the metrics?

gwholland3 commented 4 years ago

Yeah, sorry about that. Here are the graphs I got:

[Image: Benchmarks]

I don't know much about memory usage, so I'm not sure whether those are good numbers or not.

Also, it looks like both functions perform almost identically, even though one of them uses a static function and the other creates an engine instance and calls a method on it. So I'm not sure whether we want to keep both or only use one.
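For context, a sketch of the two call styles being compared, assuming the library is pyttsx3 (the wrapper names here are illustrative, not the exact functions from the benchmark script):

```python
import pyttsx3  # assumption: the TTS library being benchmarked


def speak_static(text):
    # "Static"-style helper: sets up an engine internally on each call.
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()


# Engine-instance style: create the engine once, then call methods on it.
engine = pyttsx3.init()


def speak_with_engine(text):
    engine.say(text)
    engine.runAndWait()
```

One possible explanation for the near-identical numbers is that pyttsx3.init() hands back an already-created engine for the same driver instead of building a new one on every call, so both paths may be driving the same engine under the hood (worth verifying against the pyttsx3 source).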

Jason-Ku commented 4 years ago

Thanks Grant, these graphs are really awesome!

We're breaching the threshold of 10000 MB, which is 10 GB. Are you sure that it should be MB (megabytes) and not Mb (megabits)? If it were Mb, I think we'd be in the clear.

And if both functions perform identically, I think the one with the static function is the way to go.
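For reference, a quick sketch of the unit arithmetic in question (using 1 byte = 8 bits and 1 GB = 1000 MB):

```python
reading = 10_000                      # peak value from the graphs
gb_if_megabytes = reading / 1000      # 10,000 MB -> 10.0 GB
gb_if_megabits = reading / 8 / 1000   # 10,000 Mb -> 1,250 MB -> 1.25 GB
print(gb_if_megabytes, gb_if_megabits)
```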

gwholland3 commented 4 years ago

I'm not 100% sure, since the memory-profiler docs are somewhat sparse. In roughly the third paragraph under the API section they use the unit "MB", so I'm pretty sure it returns megabytes.

However, another potential issue could be my interpretation of the return format of the memory_usage function. It returns a list of memory usage samples taken over a certain time interval, and I wasn't sure whether that meant I had to sum the items of the list or take the average. I ended up summing them, so if that was wrong, the actual memory usage would be quite a bit lower than what's shown in the graphs.
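For concreteness, here's roughly the pattern in question (the function and variable names are placeholders):

```python
from memory_profiler import memory_usage  # pip install memory-profiler


def synthesize(text):
    ...  # placeholder for the actual TTS call being profiled


text = "Swanton Pacific Ranch is run by Cal Poly."

# memory_usage runs synthesize(text) and samples the process's memory (in MB)
# every `interval` seconds until the call returns, giving back the list of samples.
samples = memory_usage((synthesize, (text,)), interval=0.1)

summed = sum(samples)  # what the current graphs are based on
peak = max(samples)    # the alternative: peak memory observed during the call
```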

Here are the docs: https://pypi.org/project/memory-profiler/
Perhaps you could help me figure out what the list members represent?

gwholland3 commented 4 years ago

New graphs:

[Image: benchmarking]

Not sure why there's that steep increase in memory right at the beginning...

Jason-Ku commented 4 years ago

Awesome! Thanks for making these really nice graphs, it's super easy to follow what's going on in them.

Not sure what that spike in memory is either but I think we can safely ignore it for now.

But with this information, we know that the TTS should have no problem running within the RPi's memory constraints. 😊