nikfp opened this issue 1 month ago
This is more or less what I had in mind too. I wanted to take a look at how Phoenix generators do it and use that as a reference for our thing. As for updating existing files I'm not sure yet, but I think it would get us into a rabbit hole of thinking of every edge case, so maybe it would be better to focus on the first pass for now.
I've been thinking about this and I don't know quite where to begin to be honest. I could use some steering on where to point research energy.
I think what would help me is to do some exploratory coding. I can look into mix tasks and also a JSON fetch kind of option, or we can look at how Phoenix does generators. I'm curious if what we are trying to do would be too dynamic for that though.
@waseem-medhat I could use any thoughts you have on the matter now that you've also had some time to think.
Personally, I usually start with implementing the most rudimentary, grug-brained way of doing it, which you could see here.
Implementation details aside, if you don't mind the general idea of "pull a file from somewhere online and shove its contents into a local file" then the basis is already there, and all I need is to document the code and clean it up a bit since the current state is just the result of me messing around until something worked.
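For anyone following along, that basic workflow could be sketched roughly like this (the repo URL, file layout, and task name are invented for illustration, and HTTPoison is assumed as the HTTP client):

```elixir
# Hypothetical sketch: fetch a remote exercise file and write it locally.
# The URL scheme and target path are made up for illustration.
defmodule Mix.Tasks.Spirit.Gen do
  use Mix.Task

  @shortdoc "Pulls an exercise module from a remote source"

  @impl Mix.Task
  def run([module_name]) do
    # Mix tasks don't start the app's dependencies by default,
    # so start the HTTP client explicitly.
    {:ok, _} = Application.ensure_all_started(:httpoison)

    url = "https://raw.githubusercontent.com/PracticeCraft/exercises/main/#{module_name}.ex"

    case HTTPoison.get(url) do
      {:ok, %HTTPoison.Response{status_code: 200, body: body}} ->
        path = Path.join("lib", "#{module_name}.ex")
        File.write!(path, body)
        Mix.shell().info("Wrote #{path}")

      {:ok, %HTTPoison.Response{status_code: status}} ->
        Mix.shell().error("Fetch failed with status #{status}")

      {:error, %HTTPoison.Error{reason: reason}} ->
        Mix.shell().error("Fetch failed: #{inspect(reason)}")
    end
  end
end
```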
We could keep this workflow fixed while working on some UX niceties like allowing the user to write partial chapter/module name (right now it has to be an exact match) and some other interactive prompts like confirm file replacement (that is until we look into code injection), not to mention the actual content itself: the exercises.
Also, now that I think of it, in this particular project injecting an existing module with new code would only happen in the case we added new exercises to a module. On the other hand, a module "reset" is probably a more common and arguably more useful use case. What do you think?
> Personally, I usually start with implementing the most rudimentary, grug-brained way of doing it, which you could see here.
I like this approach: it's simple and easy to follow, and it could be extended if needed.
> We could keep this workflow fixed while working on some UX niceties like allowing the user to write partial chapter/module name (right now it has to be an exact match) and some other interactive prompts like confirm file replacement (that is until we look into code injection), not to mention the actual content itself: the exercises.
I like the idea of this. Were you thinking CLI updates with a fuzzy match?
Also, the idea I had with JSON is that it could have 3 top-level properties.
> Also, now that I think of it, in this particular project injecting an existing module with new code would only happen in the case we added new exercises to a module. On the other hand, a module "reset" is probably a more common and arguably more useful use case. What do you think?
I think both of these use cases would be common, to be honest. Starting with a single function in a module and then building up is how I would expect to work.
Given what you've asked and what I've been able to think up, getting the CLI portion right is probably the most important part. That doesn't mean the first iteration needs to be right, but that's maybe where we expend energy to begin. I'm going to look into custom mix tasks with helpful commentary and maybe tab completion and see what I can come up with.
Actually, this is going to be easier than I thought. Fuzzy matching and the works are already included.
Elixir School does a good job of explaining it here.
Now I'm interested in finding a way to dynamically register further completions. For example, if we had a file somewhere, the task could read in the contents as a list of what it can do, try to match on something close, make suggestions, and then execute when a user gives a match.
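For the dynamic part, a crude first version could read the list of known module names from a file and pick the closest one with `String.jaro_distance/2` from the standard library (the manifest path here is made up):

```elixir
# Hypothetical sketch: read available module names from a local manifest
# and suggest the closest match by Jaro distance.
defmodule Spirit.Suggest do
  @manifest "priv/modules.txt"  # made-up path; one module name per line

  def closest(input) do
    @manifest
    |> File.read!()
    |> String.split("\n", trim: true)
    |> Enum.max_by(&String.jaro_distance(&1, input))
  end
end
```

A threshold on the distance would probably be needed so that wildly wrong input returns an error instead of the least-bad match.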
OK the wheels are turning. Let me share my thoughts on the individual points...
My grug brain was going for downloading an entire module's functions (and tests) at once, and this is why I thought file replacement would be a much more common scenario than injecting new functions. Do you think this could work? If it doesn't then, yeah, we need to put more energy into the codegen UX.
In the current implementation, the chapter name is given as an argument to a single mix task (`spirit.gen`). In that case, I'm not sure if the built-in fuzzy matching will work for us, unless we change our approach from one mix task that takes a command-line arg to a mix task per module. (I'm sure the latter approach can be done without duplicating code, but the former somehow makes more sense to me.)
But in general, since the names are written on the command line and not in an interactive TUI that reacts to each keypress, I think fuzzy matching wouldn't add an incredible amount of value compared to a simple case-insensitive partial match. In other words, we could use simple partial matching in our first pass and then add fuzzy matching in a next step.
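For reference, the case-insensitive partial match could be as simple as this (the chapter list is invented for illustration):

```elixir
# Sketch of simple case-insensitive partial matching over chapter names.
# The chapter list here is made up.
defmodule Spirit.Match do
  @chapters ["Enum", "Pattern Matching", "GenServer", "Strings"]

  def find(partial) do
    needle = String.downcase(partial)

    case Enum.filter(@chapters, &String.contains?(String.downcase(&1), needle)) do
      [match] -> {:ok, match}
      [] -> {:error, :no_match}
      many -> {:error, {:ambiguous, many}}
    end
  end
end
```

With those chapters, `Spirit.Match.find("gen")` returns `{:ok, "GenServer"}`, while an ambiguous prefix surfaces the candidates so the task can prompt the user.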
I agree with the structure in general, but I'm not a fan of putting the Elixir code inside JSON, because that way we cannot format or maintain such code easily, and if the JSON file(s) are not on GitHub (e.g., stored in a bucket), then we have no means for other people to submit PRs directly. So, I'd much rather have the module content, at least, in an OSS GitHub repo. If something rudimentary like this needs improvement we could surely work on it.
I'm going to play around with this a little bit and see what I can do. Thankfully, like many other things, Mix is well documented.
@waseem-medhat I spotted that you are pointing to the exercises on your personal GH account; those should probably move to PracticeCraft as well.
@nikfp Done
OK, I have done some experimentation, and I can use the GH API to get a list of directories in the exercises repo. Then we can recurse and get files from each directory using the same API. I'm using a combo of httpoison and Jason to do it, and I don't think it would be too hard to create the following in the mix task.
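A rough sketch of that recursion (the repo path and module name are assumed; the GitHub contents API really does return `"type"`, `"path"`, and `"download_url"` fields per entry):

```elixir
# Rough sketch: walk the exercises repo via the GitHub contents API,
# using httpoison + Jason. The repo path here is assumed.
defmodule Spirit.Remote do
  @api "https://api.github.com/repos/PracticeCraft/exercises/contents"
  @headers [{"User-Agent", "spirit-gen"}]  # the GitHub API requires a User-Agent

  # Lists entries at the given path; each entry is a map with
  # "name", "path", "type" ("dir" or "file"), and "download_url" keys.
  def list(path) do
    {:ok, %HTTPoison.Response{status_code: 200, body: body}} =
      HTTPoison.get("#{@api}/#{path}", @headers)

    Jason.decode!(body)
  end

  # Recurses into directories and collects download URLs for every file.
  def all_files(path \\ "") do
    path
    |> list()
    |> Enum.flat_map(fn
      %{"type" => "dir", "path" => subpath} -> all_files(subpath)
      %{"type" => "file", "download_url" => url} -> [url]
    end)
  end
end
```

One caveat: unauthenticated GitHub API calls are rate-limited, so heavy use might need a token eventually.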
I think this gives us a good balance of how to do what we're trying to do and also leaving the door open for flexibility in the future.
Let me know what you think.
LGTM 👍🏻
Let's start working on that right away. Maybe we could put that down in a new issue in the Kanban board (while keeping this one open for future experimentation)? Feel free to start implementing it or assign it to me if you won't have the time.
BTW, I think you can push branches directly without needing a fork. Whatever works for you.
I'm interested in how an approach could work for mix tasks to handle next steps. Having one mix task that can take an argument seems right to me, and then maybe a JSON file in the repo that contains all the pieces and instructions needed to inject files into the repo, much like a generator would do. Each entry in the JSON could have a key related to that task, and a permalink URL to a directory on GitHub or static hosting, and it could pull down a directory of files and then distribute them into the proper locations as needed.
This doesn't account for injecting code into existing contexts yet though, and I'm not 100% sure how to approach that.
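To make the shape concrete, a manifest along these lines (the keys, URLs, and target paths are all invented) could be decoded by the task with Jason:

```elixir
# Invented example of the JSON manifest shape described above.
manifest = """
{
  "enums": {
    "url": "https://github.com/PracticeCraft/exercises/tree/main/enums",
    "target_dir": "lib/exercises"
  },
  "pattern_matching": {
    "url": "https://github.com/PracticeCraft/exercises/tree/main/pattern_matching",
    "target_dir": "lib/exercises"
  }
}
"""

# Each top-level key maps to a task argument; the task looks up the entry,
# pulls the directory at "url", and writes the files under "target_dir".
%{"enums" => %{"url" => url, "target_dir" => dir}} = Jason.decode!(manifest)
```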