aws-samples / foundation-model-benchmarking-tool

Foundation model benchmarking tool. Run any model on any AWS platform and benchmark for performance across instance type and serving stack options.
https://aws-samples.github.io/foundation-model-benchmarking-tool/
MIT No Attribution
182 stars 27 forks

Do EKS #110

Closed bainskb closed 3 months ago

bainskb commented 4 months ago

Issue #, if available: With the addition of get_metrics to the base fmbench_predictor.py class file, the inference in notebook 3, and consequently the dependent notebook 4, are not working together correctly.
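A minimal sketch of how an EKS-backed predictor might satisfy the new get_metrics call so the downstream notebooks do not break. Only the get_metrics name and the df_metrics_list dependency come from this discussion; the class name, constructor, signature, and return columns below are assumptions for illustration, not the repository's actual implementation.

```python
from datetime import datetime
from typing import Optional

import pandas as pd


# Hypothetical predictor for an EKS-hosted endpoint (names are placeholders).
class EKSPredictor:
    def __init__(self, endpoint_name: str):
        self.endpoint_name = endpoint_name

    def get_metrics(self,
                    start_time: datetime,
                    end_time: datetime,
                    period: int = 60) -> Optional[pd.DataFrame]:
        # If the serving stack does not expose utilization metrics yet,
        # returning an empty DataFrame (instead of None or raising
        # NotImplementedError) keeps the notebooks that assemble
        # df_metrics_list from failing outright.
        return pd.DataFrame(
            columns=["EndpointName", "MetricName", "Timestamp", "Value"]
        )
```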

Description of changes: I have added my deploy and inference scripts for reference. Please note that this solution requires the Terraform script to be run and the HF token to be placed at the top. You will be able to complete the inference, but it fails when calling predictor.get_metrics and when building the dependent data frames that require df_metrics_list.
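For context, a sketch of the kind of aggregation step that depends on df_metrics_list, which is where the failure described above would surface. Only get_metrics and df_metrics_list are taken from this discussion; the collect_metrics helper, the predictors list, and the time window are hypothetical.

```python
from datetime import datetime, timedelta

import pandas as pd


def collect_metrics(predictors, window_minutes: int = 30) -> pd.DataFrame:
    """Gather per-endpoint metrics and combine them into one DataFrame.

    'predictors' is assumed to be a list of objects exposing get_metrics;
    the window and column handling here are illustrative only.
    """
    end_time = datetime.utcnow()
    start_time = end_time - timedelta(minutes=window_minutes)

    df_metrics_list = []
    for p in predictors:
        df = p.get_metrics(start_time, end_time)
        # A None entry (or a missing get_metrics implementation) is exactly
        # what would break this concatenation step downstream.
        if df is not None and not df.empty:
            df_metrics_list.append(df)

    return (pd.concat(df_metrics_list, ignore_index=True)
            if df_metrics_list else pd.DataFrame())
```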

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

bainskb commented 3 months ago

Got past the get_metrics issue and worked with @madhurprash to narrow it down to an issue in notebook 4. All deployment and inference are now fixed.