Manu343726 / Turbo

C++11 metaprogramming library
MIT License

Roadmap #3

Open Manu343726 opened 9 years ago

Manu343726 commented 9 years ago

Here's the roadmap I have been discussing with @ericjavier for a true functional language for C++ metaprogramming:

  1. Language Level
    • Algebraic datatypes
    • Typeclasses
    • Bindable/curryable first class metafunctions
  2. Middleware level
    • Sequences, typelists
    • Simple functionality for sequence manipulation
  3. Library level
    • High order algorithms
    • Port of Haskell prelude?

Let's discuss the "hows" here. Please open a new issue for an in-depth discussion of any of these points.

ericjavier commented 9 years ago

Let me mention some features I would like to see regarding the first point you mention above:

About Typeclasses:

About curryable metafunctions:

Manu343726 commented 9 years ago

Today I have also been working on a prototype for algebraic datatypes. Look at main.cpp. Note that there are type traits to do pattern matching on those datatypes, which seem to work quite well.
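To illustrate the general idea (with hypothetical names, not the actual contents of main.cpp), pattern matching on an algebraic datatype can be done through partial specialization:

```cpp
#include <type_traits>

// Hypothetical sketch: a Maybe-like algebraic datatype whose constructors
// are empty tag types, matched by partial specialization.
template<typename T> struct Just {};
struct Nothing {};

// from_maybe "pattern matches": one specialization per constructor.
template<typename M, typename Default> struct from_maybe;

template<typename T, typename Default>
struct from_maybe<Just<T>, Default> { using type = T; };

template<typename Default>
struct from_maybe<Nothing, Default> { using type = Default; };

static_assert(std::is_same<from_maybe<Just<int>, char>::type, int>::value, "");
static_assert(std::is_same<from_maybe<Nothing, char>::type, char>::value, "");
```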

Manu343726 commented 9 years ago

About composition: I forgot that I introduced a very questionable specialization of tml::eval to handle function composition using pointer-to-function syntax:

using result = $(f(*)g);
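For reference, here is a minimal self-contained sketch of how such a specialization can be pattern-matched (the eval trait and the metafunction classes below are invented for illustration; this is not Turbo's actual tml::eval):

```cpp
#include <type_traits>

// Hypothetical metafunction classes (nested apply convention):
struct add_const_f {
    template<typename T> struct apply { using type = const T; };
};
struct make_ptr_f {
    template<typename T> struct apply { using type = T*; };
};

template<typename Expr> struct eval { using type = Expr; };

// The questionable-but-cute part: F(*)(G), a pointer-to-function type,
// is reinterpreted as "compose F after G" and yields a new closure.
template<typename F, typename G>
struct eval<F(*)(G)>
{
    struct type
    {
        template<typename T>
        struct apply
        {
            using type = typename F::template apply<
                typename G::template apply<T>::type>::type;
        };
    };
};

using composed = eval<add_const_f(*)(make_ptr_f)>::type;
static_assert(std::is_same<composed::apply<int>::type, int* const>::value, "");
```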
ericjavier commented 9 years ago

Some thoughts.

I believe that a user creating a function should not have to think and worry about the many conventions established by our library. They should only have to write a template with a nested type named 'type'.

For example, one should not have to write:

struct f
{
    template<typename... Head>
    struct apply
    {
        struct type
        {
            template<typename... Tail>
            using apply = f::apply<Head...,Tail...>;
        };
    };

    template<typename A, typename B, typename C>
    struct apply<A, B, C>
    {
        using type = C;
    };
};

but should only write something like:

template<typename A, typename B, typename C> struct f {
  using type = C;
};

and then trust that the type:

using my_func = function_from_template<f>;

is already a first-class citizen with all requirements of the library.
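A minimal sketch of such an adapter, assuming the metafunction-class convention (a nested apply whose result lives in ::type); the name function_from_template is the one proposed above:

```cpp
#include <type_traits>

// Sketch: lift a plain metafunction template (any template exposing a
// nested ::type) into a metafunction class.
template<template<typename...> class F>
struct function_from_template
{
    template<typename... Args>
    struct apply
    {
        using type = typename F<Args...>::type;
    };
};

// The user's plain template:
template<typename A, typename B, typename C>
struct f { using type = C; };

using my_func = function_from_template<f>;
static_assert(std::is_same<my_func::apply<int, char, bool>::type, bool>::value, "");
```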

I think the metafunction that composes other functions should be called 'compose' or 'composition': no fancy notation, kept as simple and understandable as possible, or at least offering several options.

A question: Are you aware of the work of Abel Sinkovics (@sabel83) ?

I'm still not quite sure how your eval works for delayed evaluations, I think (by the way, which branch should I use with Turbo ?), but let me ask another question:

When you do:

using a = $(f, int); // a is a closure
using b = $(a, char); // b is a closure

you can still pass them as parameters to other functions, in short: Are 'x' and 'y' first-class citizens functions in the library ?

Manu343726 commented 9 years ago

I believe that a user creating a function should not have to think and worry about the many conventions established by our library. They should only have to write a template with a nested type named 'type'.

That's right; there should be a way to provide all those features out of the box, as adapters for classic metafunctions. I have played both with eager metafunctions (like what Eric Niebler's Meta does) and lazy ones (i.e. metafunction classes), and I think the library should use lazy evaluation by default. Also, metafunction classes are easier to play with since they are directly first-class citizens (types) instead of templates. Thankfully, writing an adapter that translates metafunctions into metafunction classes is easy:

template<template<typename...> class F>
struct from_metafunction
{
    template<typename... Args>
    struct apply
    {
        using type = $(F<Args...>);
    };
};

One of the problems I'm facing with the adapter approach is: how to handle currying on variadic metafunctions? We can use SFINAE to check whether a given instantiation is valid and has a ::type member, then evaluate the metafunction, or else return the closure to continue with currying. But is it possible to check whether a template is variadic and then disable currying?

I have been playing a bit with the topic: http://coliru.stacked-crooked.com/a/52719bb784a98b13 GCC rejects the code: the second static_assert() fails, meaning it considers a template<typename, typename, typename> to be instantiable with just one int parameter. http://coliru.stacked-crooked.com/a/b5abd3d81341e8e2

If this finally works, I think it's the right way to go. Note that the curry adapter also lifts templates into metafunction classes, solving both problems with a single adapter.

A question: Are you aware of the work of Abel Sinkovics (@sabel83)

Yes, I am. It's a cool project with almost the same goals as ours, but I find the macro-based interface a bit tricky.

I'm still not quite sure how your eval works for delayed evaluations, I think (by the way, which branch should I use with Turbo ?)

eval is just a thing that "parses" expressions (templates), expanding their arguments prior to evaluation. That was intended for easy metafunction composition, riding over the typename ::type pattern. But it has evolved a lot since then, and now it's really an evaluation trigger for any expression (value, metafunction, metafunction class, type constructor, etc.) the library can deal with. For laziness it does not do anything special; if you need lazy computations, use metafunction classes instead of metafunctions.
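To make the typename ::type pattern concrete, here is a toy eval-like trait (a reconstruction for illustration, not Turbo's implementation) that expands its arguments before evaluating the outer call:

```cpp
#include <type_traits>

template<typename T> struct add_ptr   { using type = T*; };
template<typename T> struct add_const { using type = const T; };

// Manual composition: every nesting level needs ::type boilerplate.
using manual = add_ptr<add_const<int>::type>::type;

// Toy eval: given an unevaluated template-id F<Args...>, evaluate the
// arguments recursively, then evaluate the outer metafunction.
template<typename Expr> struct eval { using type = Expr; };

template<template<typename...> class F, typename... Args>
struct eval<F<Args...>>
{
    using type = typename F<typename eval<Args>::type...>::type;
};

// Nested calls without any ::type boilerplate at the call site:
using result = eval<add_ptr<add_const<int>>>::type;
static_assert(std::is_same<result, manual>::value, "");
static_assert(std::is_same<result, const int*>::value, "");
```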

About the branch: I'm sorry, I usually experiment directly on the master branch. All other branches are currently outdated; maybe we should clean up the repo and adopt a git-flow workflow or something like that.

... you can still pass them as parameters to other functions, in short: Are 'x' and 'y' first-class citizens functions in the library ?

x y?

ldionne commented 9 years ago

Looking at this discussion going on, I think you might be interested in my previous work on the MPL11. It's basically a port of the Haskell prelude to TMP. Also, it uses lazy evaluation and it has proper type classes. So far, I fail to see how what you are trying to achieve is different, but I might be being thick.

Manu343726 commented 9 years ago

I fail to see how what you are trying to achieve is different, but I might be being thick.

@ldionne maybe that's because we are trying to achieve almost the same thing? ;) What I'm trying here is to compile (no pun intended...) all my ideas about tmp and see how the final result looks like. MPL11 is certainly very similar to our goal, but reviewing your guide and implementation I think we will achieve it in a different way.

It would be great to compare results in the future.

ldionne commented 9 years ago

Ok, well then I'm eager to see the solutions you come up with!

Manu343726 commented 9 years ago

Ok, well then I'm eager to see the solutions you come up with!

@ldionne don't worry, I will continue spamming the Twitterverse with screen captures and metaprogramming blog posts on the topic.

Manu343726 commented 9 years ago

I didn't remember this: the void_t issue is a well-known GCC bug.

ericjavier commented 9 years ago

...'the types 'a' and 'b', sorry

Manu343726 commented 9 years ago

Look at the implementation of the curry adapter (From the coliru links above):

template<template<typename...> class Function>
struct curry
{
    template<typename... Args>
    struct apply
    {
        using f = just_t<Function, Args...>;

        template<typename F, bool is_function = has_type<F>::value>
        struct call
        {
            using type = typename F::type;
        };

        template<typename F>
        struct call<F, false>
        {
            struct type
            {
                template<typename... Tail>
                struct apply
                {
                    using type = eval_mc<curry, Args..., Tail...>;
                };
            };
        };

        using type = typename call<f>::type;
    };
};

As you can see, this translates any metafunction (i.e. any template with a ::type member) into a metafunction class. When evaluating the resulting lifted metafunction (following MPL11 terminology), it just evaluates the original template when possible, or returns a closure otherwise.
The magic occurs at the closure!

struct type
{
    template<typename... Tail>
    struct apply
    {
        using type = eval_mc<curry, Args..., Tail...>;
    };
};

The type there, the "value" the lifted metafunction returns, is a metafunction class too. It's not that the results of partial application behave like functions; they are true functions.
Note what that closure returns: it just calls the lifted metafunction again, adding the captured arguments first.

ericjavier commented 9 years ago

Yes, the problem with variadic metafunctions is tricky; consider the following template:

template<typename A, typename... Cry> struct foo { using type = A; };

... and the corresponding wrapper

using foo_func = make_func<foo>;

... then if we say

using a = foo_func<int>; // or eval<foo_func, int>

... then we have an ambiguous situation: we can get 'int', or, let's say, any of an infinite number of functions that eventually yield 'int' at some point.

We can take as a convention that when a type is available we should get it; I think this approach is pretty reasonable.

The approach I followed with 'yaml' is the following: when a template instance is well formed, I evaluate it (meaning maybe we don't need to know whether a template is variadic or not). Note:

/// \brief SFINAE usage to detect when a set of arguments fits a given template.
/// \note Only checks that the argument count fits.
template<template<class...> class T, typename... Args>
std::true_type test_fitting(const T<Args...>*);

/// \brief FAILURE OVERLOAD
template<template<class...> class T, typename... Args>
std::false_type test_fitting(...);

/// \brief Helper to know when a set of arguments fits a given template.
template<template<class...> class T, typename... Args>
using fitting_t = decltype(test_fitting<T, Args...>(0));

... then when I check the asserts:

 static_assert(!fitting_t<curryable_function, int>::value, "Instantiable?");
 static_assert(!fitting_t<curryable_function, int, int>::value, "Instantiable?");
 static_assert(fitting_t<curryable_function, int, int, int>::value, "Not instantiable?");

... it works as expected. I use the same approach with variadic templates.

About Sinkovics works:

Yes, I am. It's a cool project with almost the same goals as ours, but I find the macro-based interface a bit tricky.

Yeah, I was not referring to the code at all, but to his papers and thesis; I think we should keep them close at hand looking toward the future.

ericjavier commented 9 years ago

You can see the test I added here, under the name "fitting_for_manu, first".

Manu343726 commented 9 years ago

Yeah, I was not referring to the code at all, but to his papers and thesis; I think we should keep them close at hand looking toward the future.

Thanks, just pinned the document ;)

We can take as a convention that when a type is available we should get it; I think this approach is pretty reasonable. The approach I followed with 'yaml' is the following: when a template instance is well formed, I evaluate it [...]

That's exactly the same reasoning I'm playing with (see the coliru examples). I'm glad we agree. I was thinking of a tag to explicitly ask for currying to continue. Something like:

using is_constructible = curry<std::is_constructible>;
using is_vector_constructible = $(is_constructible, std::vector<int>, Continue);

static_assert($(is_vector_constructible, std::size_t, int)::value, "???");
ericjavier commented 9 years ago

Ok, it seems good to me. A 'Continue' tag could help with variadic templates. One question: do you also handle placeholders with your closures?

Manu343726 commented 9 years ago

Currently not. But unless you give me an implementation that handles them in a simple manner, I will not add that functionality (with its potentially noticeable overhead) to a feature that will be the basis of our language/library.