I think a new way to evaluate our model would be to consider the size of the set we can change, or the domain of the category theory functions we define. Thoughts?
So far in Phase 1 we have focused on qualitative methods of evaluation, like examples and case studies.
Some paths to showing our methods work:
Examples of previously known modeling techniques that can be built in ModelTools.Transformations (see the sketch after this list).
Proving theorems about contexts or transformations
Empirical results
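To make the first path concrete, here is the kind of example I have in mind, written as a hypothetical sketch rather than the actual ModelTools.Transformations API: a previously known modeling move (adding a waning-immunity term to turn an SIR model into SIRS) expressed as a rewrite on the model's Expr. The names add_waning_immunity and sir_dS and the equation fragment are made up for illustration; R in this snippet is the recovered compartment, not the parameter region mentioned later.

```julia
# Hypothetical sketch, not the actual ModelTools.Transformations API: treat a
# transformation as a function that rewrites the Expr of a model fragment.
# Known modeling move: add a waning-immunity term to the dS equation of an
# SIR model, giving the SIRS variant. (R below is the recovered compartment.)

sir_dS = :(dS = -β * S * I)               # original rate equation as a Julia Expr

function add_waning_immunity(eq::Expr, rate::Symbol)
    lhs, rhs = eq.args[1], eq.args[2]
    Expr(:(=), lhs, :($rhs + $rate * R))  # append "+ rate*R" to the right-hand side
end

sirs_dS = add_waning_immunity(sir_dS, :ω)
# sirs_dS is now :(dS = -β * S * I + ω * R)
```

An evaluation along this path would then be a catalog of such known transformations, each checked to reproduce the expected modified model.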
I'm just thinking ahead to the part where we write a paper on this thing: how do we evaluate that it works?
We are kind of defining the problem that we want to solve from scratch, so if we want to make a quantitative comparison we will need to find a published task and show that we are better than an already published baseline.
We could also think of writing more of a theory paper, where we define the problem formally and prove some theorems about algorithms that solve the problems. Based on the 3 main use cases, we have some potential theorems:
1. If G(P) is constructed according to algorithm A, then algorithm F solves the Metamodel construction problem P.
2. M can be generated by running the model in context C.
3. Given a model M, implemented in a function f, with a known region of good parameters R ⊂ D, A is an algorithm for testing whether x ∈ R.
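One way those might be written down formally, just as a starting point (the bracketed names and all of the hypotheses below are placeholders I'm assuming; G, A, F, C, R, and D still need precise definitions):

```latex
% Sketch only: theorem schemas with placeholder hypotheses, not settled statements.
\documentclass{article}
\usepackage{amsthm}
\newtheorem{theorem}{Theorem}

\begin{document}

\begin{theorem}[Metamodel construction]
If $G(P)$ is constructed according to algorithm $A$, then algorithm $F$ solves the
metamodel construction problem $P$.
\end{theorem}

\begin{theorem}[Model generation]
The model $M$ can be generated by running the model in context $C$.
\end{theorem}

\begin{theorem}[Model verification]
Let $M$ be a model implemented by a function $f$ with a known region of good
parameters $R \subset D$. Then $A$ is an algorithm that decides, for any $x \in D$,
whether $x \in R$.
\end{theorem}

\end{document}
```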
If we fill in the details of those terms, and prove the three theorems, I think we have a really strong contribution to the CS literature. It doesn't have a quantitative evaluation comparing two implementations or anything, but it defines a problem and provides a solution.
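For the third theorem, here is a minimal sketch of what "A is an algorithm for testing whether x ∈ R" could look like when R is just a box of good parameters. BoxRegion, in_region, and the toy model f are hypothetical names, nothing in ModelTools:

```julia
# Hypothetical sketch of the verification use case: a model f with a known box R ⊂ D
# of good parameters, and an algorithm (in_region) that tests whether x ∈ R.
# None of these names come from ModelTools; they are illustrative only.

struct BoxRegion
    lo::Vector{Float64}   # lower corner of the box
    hi::Vector{Float64}   # upper corner of the box
end

# The algorithm A: decide membership of a parameter vector x in the region R.
in_region(x::AbstractVector, R::BoxRegion) = all(R.lo .<= x) && all(x .<= R.hi)

# A toy model implementation f; its output is only trusted for parameters inside R.
f(x) = sum(abs2, x)

R = BoxRegion([0.0, 0.0], [1.0, 2.0])
@assert in_region([0.5, 1.5], R)       # inside the good region
@assert !in_region([1.5, 0.5], R)      # first coordinate out of range
```

The interesting version of the theorem is presumably about richer representations of R than a box, but even this trivial case pins down what "solves the problem" would mean.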