spcl / serverless-benchmarks

SeBS: serverless benchmarking suite for automatic performance analysis of FaaS platforms.
https://mcopik.github.io/projects/sebs/
BSD 3-Clause "New" or "Revised" License

New benchmarks and applications #140

Open mcopik opened 1 year ago

mcopik commented 1 year ago

In SeBS, we provide a representative set of functions and have developed a set of serverless workflows that will be included in the upcoming release. However, the serverless field is constantly changing, and new types of applications are being "FaaS-ified". SeBS will benefit from new types of functions, new workflows, and new applications - the last category has not been considered for SeBS yet.

Functions

The current list of functions is available in the documentation.
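For anyone scoping a proposal: a new function benchmark boils down to a single entry point plus input generation. Below is a minimal sketch, assuming the `handler(event)` convention used by the existing Python benchmarks; the payload fields are illustrative, and the workload is just a placeholder.

```python
# Minimal sketch of a SeBS-style Python function benchmark.
# Assumption: the platform invokes handler(event) with a JSON-like dict
# and expects a dict back, mirroring the existing Python benchmarks.

def handler(event):
    # Read benchmark parameters from the invocation payload.
    size = event.get("size", 1000)

    # The actual workload: a trivial compute kernel as a placeholder.
    result = sum(i * i for i in range(size))

    # Return the result so the harness can verify correctness.
    return {"result": result}
```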

New ideas (all should be rather simple to implement thanks to a large number of open-source implementations):

Workflows

The current list of workflows is in the PR #88 and in the related thesis. In the PR, we have workflows for MapReduce, video analysis, ExCamera, and ML fitting. The thesis also documents the abstract language we use to specify each workflow.
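Purely as an illustration of the kind of computational pattern meant here (this is not the abstract language from PR #88 or the thesis), a fan-out/fan-in MapReduce-style workflow could be sketched as a small DAG:

```python
# Hypothetical illustration of a fan-out/fan-in (MapReduce-style) workflow
# as a plain-Python DAG description. This is NOT the specification language
# from PR #88 / the thesis - only a sketch of the computational pattern.
workflow = {
    "name": "mapreduce-wordcount",
    "stages": [
        {"name": "split",  "next": ["map"]},                    # partition the input
        {"name": "map",    "fan_out": True, "next": ["reduce"]},  # one task per chunk
        {"name": "reduce", "fan_in": True,  "next": []},           # aggregate results
    ],
}
```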

To extend SeBS, we want to cover new application types and rich workflows with new computational patterns.

Potential new ideas:

Applications

Our benchmark suite contains functions and workflows, but it does not contain full applications that are not structured as workflows. These can be standalone applications that offload certain tasks to serverless, as well as fully serverless applications.
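To make the distinction concrete, here is a minimal sketch of the first kind, assuming AWS Lambda invoked via boto3; the function name `resize-image` and its payload are hypothetical.

```python
# Sketch: a standalone application offloading one task to a serverless
# function. Assumptions: AWS Lambda, boto3 installed, and a deployed
# function named "resize-image" (hypothetical) accepting this payload.
import json
import boto3

def offload_resize(image_key: str) -> dict:
    client = boto3.client("lambda")
    response = client.invoke(
        FunctionName="resize-image",       # hypothetical deployed function
        InvocationType="RequestResponse",  # synchronous call
        Payload=json.dumps({"key": image_key, "width": 256}),
    )
    # The function's JSON result comes back in the response payload.
    return json.loads(response["Payload"].read())
```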

Rajiv2605 commented 1 year ago

@mcopik Thanks for creating this issue. I have a couple of questions:

  1. What is the expected number of benchmarks you are looking for in the GSoC period?
  2. In how much detail should a GSoC proposal address adding new benchmarks?
mcopik commented 1 year ago

@Rajiv2605 It depends on their type - functions should be fairly straightforward, while applications can take a lot of time to port and test for correctness. I think that once you create a schedule with milestones, it will become much clearer.

When it comes to the second question, it should be clear from the proposal how you are planning to approach the transition to SeBS - is there an open-source implementation with an appropriate license, do you plan to implement it from scratch, how much work will be involved, do you foresee any potential issues, etc.? In the proposal, there's no need for deep technical details about each benchmark, but rather an assessment that the application is interesting as a benchmark, novel for SeBS, and technically viable as a benchmark, plus an estimate of the difficulty and time commitment.

Rajiv2605 commented 1 year ago

@mcopik Will we be working on our deliverables during the community bonding period, or does the coding start only after it? I am not sure whether I can count the community bonding period toward the coding period when deciding on the milestones and the amount of work in the proposal.

mcopik commented 1 year ago

@Rajiv2605 Working on deliverables during the community bonding period is not required, but it's a great time to do research, work with other libraries/projects and ensure they work as expected.

mcopik commented 1 year ago

Adding one more interesting application - serverless maps from HackerNews.

lawrence910426 commented 1 year ago

I think Black-Scholes from the PARSEC benchmark suite could be very interesting for me. I have some experience in derivative pricing & HPC. QuantLib also implements the Black-Scholes model (and similar models).

mcopik commented 1 year ago

@lawrence910426 As I said above, we already have code for that :-) However, other Monte Carlo simulations might be a great addition! If you know some other examples from QuantLib, particularly ones with different I/O and computational intensity, and the library's code allows us to integrate the example into SeBS, then I think this could be a great and interesting addition.
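For illustration, here is what such a Monte Carlo pricer could look like as a SeBS function. This is only a sketch: it assumes numpy is available in the runtime and uses the `handler(event)` entry point of the existing Python benchmarks; the parameter names are illustrative rather than QuantLib's API.

```python
# Sketch of a Monte Carlo European call pricer as a candidate SeBS function.
# Assumptions: numpy in the runtime; parameter names (spot, strike, rate,
# sigma, maturity, paths) are illustrative, not a QuantLib API.
import numpy as np

def handler(event):
    spot = event.get("spot", 100.0)        # current asset price
    strike = event.get("strike", 105.0)    # option strike
    rate = event.get("rate", 0.05)         # risk-free rate
    sigma = event.get("sigma", 0.2)        # volatility
    maturity = event.get("maturity", 1.0)  # years to expiry
    paths = event.get("paths", 1_000_000)  # Monte Carlo sample count

    # Simulate terminal prices under geometric Brownian motion.
    z = np.random.standard_normal(paths)
    terminal = spot * np.exp((rate - 0.5 * sigma ** 2) * maturity
                             + sigma * np.sqrt(maturity) * z)

    # Discounted expected payoff of a European call option.
    payoff = np.maximum(terminal - strike, 0.0)
    price = float(np.exp(-rate * maturity) * payoff.mean())

    return {"result": price}
```

Scaling `paths`, or batching many options per invocation, would be a simple way to vary the computational intensity and I/O profile mentioned above.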