Raynos opened this issue 11 years ago
I think the MVP should just be a form with the following inputs:
1. package name
2. some code
and then whatever other credentials are needed for GitHub and npm...
then it submits to a server, which creates the package.json file, pushes to the GH repo, and submits to npm.
Of course this doesn't deal with dependencies, forking, updating a module... but it makes sense to start with the absolute basics, right?
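Roughly, the server side could be as small as this (a sketch only, assuming Express and shelling out to git and npm; the endpoint, field names, and /tmp layout are all made up):

```js
// A sketch: one endpoint that takes a name and some code, writes a
// package.json, and shells out to git and npm. Error handling elided.
var express = require("express");
var fs = require("fs");
var path = require("path");
var execFile = require("child_process").execFile;

var app = express();
app.use(express.json()); // assumes a recent Express

app.post("/publish", function (req, res) {
  var dir = path.join("/tmp", req.body.name);
  fs.mkdirSync(dir);
  fs.writeFileSync(path.join(dir, "index.js"), req.body.code);
  fs.writeFileSync(path.join(dir, "package.json"), JSON.stringify({
    name: req.body.name,
    version: "0.1.0",
    main: "index.js"
  }, null, 2));

  execFile("git", ["init"], { cwd: dir }, function () {
    // ...git add / commit / remote add / push with the user's GH creds, then:
    execFile("npm", ["publish"], { cwd: dir }, function (err) {
      res.json({ ok: !err });
    });
  });
});

app.listen(3000);
```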
@williamcotton
That's our first screen.
The fourth screen talks to github, creates a package.json & talks to npm.
I guess we can skip example & tests for a bare bones MVP.
However, having examples and tests is what makes it cool. You can verify the code works in your example (and we put the example code in the README, so it doubles as docs), and you can actually test your code. More importantly, we prompt you to test your code; a lot of JS developers don't write tests. Having tests means I can focus the example on being an explanation and use the tests to really verify correctness.
Then with an example, tests & the code, I do not need to check it out locally or do anything. The module is done!
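The "example doubles as docs" bit could be as simple as pasting the verified example straight into the README. A sketch, with all field names invented:

```js
// Sketch: the example that was verified in the wizard becomes the
// README's usage section. `pkg` fields here are hypothetical inputs.
function buildReadme(pkg) {
  var fence = "```";
  return [
    "# " + pkg.name,
    "",
    "## Example",
    "",
    fence + "js",
    pkg.example, // the example code verified on the earlier screen
    fence,
    "",
    "## Installation",
    "",
    "`npm install " + pkg.name + "`"
  ].join("\n");
}
```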
Yup, yup, I'm liking it. I'd guess that we'd be able to complete the "name and code" submission pretty darned quick and have plenty of time for examples and demo...
btw, I'm very in line with this whole approach! Here's an article I wrote a couple months ago about lit/corsLit: http://williamcotton.com/another-way-to-publish-code
It has a lot of the same workflow... something that I've given a more catchy name of "Source Flow" (and SourceFlow.io) and even started working on a development workflow, and by "working on" I mean I have this note in Notational Velocity:
> - the smaller the module, the better
> - abstract and remove
> - create tests that must pass before building
> - create tests that must pass before updating the module
I've also got a few GUI designs that I've drawn on some napkins... but I'm getting way off track here!
Anyways, unfortunately I've removed all the test-related stuff from the current project because I didn't like the way it was going, BUT, I can roll back and take a look at how I implemented it with Jasmine.
It should be relatively painless to get it going again in the context of npm-the-wizard!
The 2nd & 3rd screens
These show the idea of having the tests and their output side by side.
One of the more challenging parts is how you author & run tests locally. I was thinking we might be able to just run them in the tab and write the output to a div, or run them in an iframe (like jsfiddle).
I want to do a similar thing for examples, like requirebin / jsfiddle / elm ( http://elm-lang.org/edit/examples/Intermediate/Mario.elm ).
But there might be better ways of doing examples / documentation and doing tests.
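For the run-it-in-the-tab idea, something like this might be enough (a sketch assuming a same-origin iframe and a results div; this is isolation for convenience, not a security sandbox):

```js
// Sketch: run submitted test code in a hidden same-origin iframe and
// mirror its console output into a results div (jsfiddle-style).
function runInIframe(code, outputEl) {
  var iframe = document.createElement("iframe");
  iframe.style.display = "none";
  document.body.appendChild(iframe);

  var win = iframe.contentWindow;
  // redirect the iframe's console.log into the page
  win.console = {
    log: function () {
      outputEl.textContent += Array.prototype.join.call(arguments, " ") + "\n";
    }
  };
  try {
    win.eval(code);
  } catch (err) {
    outputEl.textContent += "Error: " + err.message + "\n";
  }
}
```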
I've been playing with running code in dynamic workers. It's not a node platform environment, though, unless someone re-implements node on top of browser APIs (I know browserify has done some of this). Have you thought of creating a Chrome packaged app instead of a webpage? It wouldn't actually include any real node, but it could create node packages and even simulate node APIs by using the advanced APIs in Chrome apps (TCP server, UDP, filesystem access, etc...). I would think that re-implementing node on another platform counts as a node-knockout entry, right?
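For reference, the dynamic-workers trick is roughly this (a sketch; the submitted code reports back via postMessage, and there's no DOM and no node in there):

```js
// Sketch: turn submitted code into a Blob URL and run it off the main thread.
function runInWorker(code, onMessage) {
  var blob = new Blob([code], { type: "application/javascript" });
  var worker = new Worker(URL.createObjectURL(blob));
  worker.onmessage = function (e) { onMessage(e.data); };
  worker.onerror = function (e) { onMessage("Error: " + e.message); };
  return worker;
}

// usage: the submitted code reports results via postMessage
runInWorker("postMessage(1 + 1);", function (result) { console.log(result); }); // 2
```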
Oh interesting.
We can run / test / create node applications all LOCALLY in the browser if it's a Chrome packaged app, because we can just re-implement all of node.
I think just running browser based examples / tests is a lot easier to do in 48 hours.
My main question, whenever I have ideas like this for node knockout, is that I don't need node to do it. How is it a proper node-knockout entry?
to me it seems like it would be easiest and most in line with the competition if there was a node back-end that communicated with git, github, and npm.
perhaps the client is agnostic to the mechanisms related to publishing, updating, forking, versioning, and details that are specific to git, github, and npm?
I'm suggesting that there might be room to use multiple mechanisms on the back-end related to version control and package management. we can build it with this interface in mind. Our specific contest would be a node project that adheres to this API and uses git, github, and npm...
(getting a bit out there now..)
so like, requirebin... it is built on top of browserify, right? anyways, it allows one to pull in dependencies... now, the next step is taking what was done in requirebin, and instead of publishing it as a gist, publishing it via npm-the-wizard, you know what I mean? this is basically the workflow that I've discovered with lit and corslit.com, and it is super fluid, fast, and fun... it takes the busywork out of publishing and consuming code modules!
so like, here's my thought... why not head in this direction? I mean, we can build just npm-the-wizard for the contest, but we could lay the framework for something more expansive by adhering to a basic API... in my mind it should be something that basically follows the scent I've been following with lit.
so like, something that uses npm, git, github, browserify to build like requirebin + jsfiddle + npm-the-wizard would adhere to the same API and workflow that I'm exploring with lit. Does this make sense?
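something like this is the API I'm imagining the client would code against (method names are just illustrative, nothing settled):

```js
// The client only ever talks to something with this shape; our contest
// entry implements it with git, github, and npm.
var npmBackend = {
  publish: function (mod, cb) { /* write package.json, push to GH, npm publish */ },
  update:  function (mod, cb) { /* bump the version, commit, publish again */ },
  fork:    function (name, cb) { /* fork the repo, return a new mod */ },
  resolve: function (name, cb) { /* name -> source + metadata, via the registry */ }
};

// a lit/corslit backend (or rubygems, pip, ...) would implement the same
// four methods against its own storage and versioning semantics
```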
there's one little bit of controversial behavior, and that is that browserify and npm assume a CommonJS style of dependency injection, you know, inline synchronous require(), as opposed to lit, which uses an AMD style with async calls to require() with a callback function...
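concretely, the two styles being reconciled:

```js
// CommonJS (npm / browserify): inline, synchronous
var request = require("request");
request("http://example.com", function (err, res, body) { /* ... */ });

// AMD (lit-style): asynchronous require with a callback
require(["request"], function (request) {
  request("http://example.com", function (err, res, body) { /* ... */ });
});
```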
...well, what I see there is that this mechanism is abstracted out into an API...
some sort of dependencies array, like what is in the package.json file for npm or the array in AMD, but also mapping to a local variable that is injected into the code itself... so instead of explicitly binding the dependency at the top of a node module, the API would be a bit more implied. so then, in the case of modules being stored on npm, our service could write out those require() statements to the top of the module code and then check that into git...
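that rewrite step might look something like this (a sketch; the dependency map shape is invented):

```js
// Given an implied dependency map of localName -> package name, the
// npm-targeted service writes the explicit bindings to the top of the code.
function addRequireHeader(deps, code) {
  var header = Object.keys(deps).map(function (localName) {
    return "var " + localName + " = require(" + JSON.stringify(deps[localName]) + ");";
  }).join("\n");
  return header + "\n\n" + code;
}

// addRequireHeader({ _: "lodash" }, "module.exports = _.identity;")
// => 'var _ = require("lodash");\n\nmodule.exports = _.identity;'
```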
... and then for people like me who want to skip git and other unix-specific tools, we can just take that API and write it out however we want on our end.
@creationix The knockout rules say that the app must be built using node. If we use node to produce a node packaged module, then we used node to build the app... But in the spirit of the event, I think targeting npm directly makes this even more suitable. Also, a node environment in a Chrome packaged app sounds really cool.
@williamcotton We could make this generic enough to extend it to other module systems, package managers, and even languages. To avoid getting too caught up in the concerns of AMD and CommonJS, it would also be nice to think about how python modules could be authored through a system like this. I think we are looking to build something that has potential after node knockout, so while we ought to keep the scope relatively tight, these ideas are definitely worth thinking about.
well, we should probably try and figure out what a module is!
to me, it's an abstract unit of functionality that has:

- a name/unique identifier
- some dependencies
- some metadata
- a unit of code (computer instructions expecting inputs and having outputs)
as for the name/unique identifier, there needs to be some system of authenticity, right? like a username, or at least some sort of namespace that is owned by a single entity. that way people can trust that namespace/username and therefore hopefully trust the modules under that namespace
for dependencies, they just need to be a list of like-minded names/unique identifiers... just a collection of strings... I think things like versions are best handled semantically... some other process can figure out how to turn that unique identifier into the string of code that is returned... I also think they're important enough to warrant being kept separate from metadata.
metadata is just like whatever. I like the idea of it being completely extensible so people can build whatever the heck they want on top of it
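so, loosely, the whole definition might look like this (every field name here is invented):

```js
// A loose sketch of that abstract module definition
var moduleDefinition = {
  name: "williamcotton/superDraw",           // namespaced, so the owner can be trusted
  dependencies: ["request", "raynos/xtend"], // just strings; resolution is semantic
  metadata: { license: "MIT" },              // completely extensible, build whatever on top
  code: "module.exports = function () { /* ... */ };"
};
```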
as for the code itself... well this is where things start getting exciting :)
npm, rubygems, pip, cpan... all of those have specific contexts...
but let's hold up, are we talking about making a Universal Module?
does then our module definition need a "type"?
ok ok, going WAY out there for a moment... what if the module definition shipped with its own machine/interpreter as well? haha
aaaand let's come back... what if the module definition was type- and language-agnostic, and it was up to wherever it was submitted to assume that it was legit code and that the dependencies and metadata fit the semantics of the submitting service? that way all that is being defined is a really loose API, and it doesn't drift off into some weird SOAP/WSDL/RPC type of nonsense.
but wait, let's think about dependencies again... what if they had a URL and a protocol? I guess we're typing them then...
```
npm://npmjs.org/request
lit://corslit.com/williamcotton/superDraw
rubygems://rubygems.org/rails
```
or something like this... thoughts? :)
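pulling those apart is trivial (a sketch):

```js
// Pull the protocol, host, and module path out of those typed URIs.
function parseModuleUri(uri) {
  var match = /^([a-z]+):\/\/([^\/]+)\/(.+)$/.exec(uri);
  if (!match) throw new Error("bad module uri: " + uri);
  return { protocol: match[1], host: match[2], name: match[3] };
}

// parseModuleUri("lit://corslit.com/williamcotton/superDraw")
// => { protocol: "lit", host: "corslit.com", name: "williamcotton/superDraw" }
```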
So the list of modules thing is interesting. We (@Raynos & I) were expecting that for small modules we could trivially pull out an array of module names and assume that the latest version would be fine. The links in your example say, in effect, "point to this module on this package manager." We could also support git links, like we can in package.json, using pattern matching. This would make the tool appropriate for private modules hiding on private github accounts, for those of the persuasion to do that.
Essentially, if we are going to automate installing a package for the purpose of running tests and examples, then every package identifier absolutely must resolve to some URI. What you have gone on to say is that we can select the fetch/install protocol by prefixing the module identifier, URI-style. I like it.
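To make the prefix-selects-the-protocol idea concrete (a sketch; installer bodies are elided, and it leans on the parseModuleUri sketch above):

```js
// Each protocol prefix maps to its own fetch/install strategy.
var installers = {
  npm: function (mod, cb) { /* npm install <name> */ },
  git: function (mod, cb) { /* git clone https://<host>/<name> (private modules too) */ },
  lit: function (mod, cb) { /* GET http://<host>/<name> */ }
};

function install(id, cb) {
  var mod = /:\/\//.test(id)
    ? parseModuleUri(id)             // typed, URI-style identifier
    : { protocol: "npm", name: id }; // bare name falls back to npm@latest
  installers[mod.protocol](mod, cb);
}
```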
ok, and just while I'm thinking about it... what if I included a rubygem as a dependency in my javascript module?
and what if something compiled that rubygem to javascript for me?
and what if that was really hard to do with some rubygems, but I know that, so when I'm writing a rubygem, I make sure it is written in a way that works fine in ruby but also compiles and works fine in javascript? or python? or the JVM?
and what if the source for this tool (out of solidarity and because dog food just tastes so good) was comprised of modules written in a whole bunch of different languages and hosted across all sorts of different package managers and code repositories?
I swear when I sit down to code I'm much more pragmatic than I'm making it seem right now, haha!
Haha well, it would be funky. Suppose that each target language could compile to LLVM bytecode; you could configure interop through emscripten bindings. I don't imagine including a ton of stdlib from Java would be a cool move, though.
there could be some interesting metaphors to borrow from category theory and functional programming... like, to take a modular function, and "lift" it in to different runtimes... obviously you can't take a rubygem that expects to be able to run in a Unix environment and have it work in a web browser... but if functions were designed to be agnostic of runtimes, it would make them a lot easier to move around! functions could be "lifted" to Unix, or browser, or JVM, or .NET, or whatevs! isn't the barebones requirement for stdlib functions basically just an interface to the runtime/host platform?
Runtime environments tend to be heavy. If you actually transpile code then all the stuff the language provides upfront needs to be ported to the target language, and those portions required by the module included as deps (as browserify does, but browserify is cheap given that the src/dst languages are assumed to be the same). If you were to compile to LLVM bytecode then this would include compiled dependencies, including implementations of things like ArrayList in Java. It's probably impractical to interface through RPC arbitrarily to a hosted environment but not impossible. You can of course interface to Java inside the browser ...eww
But yeah, functionally speaking you would be lifting code through a (lazy) transform into your target environment.
I think transpiling is the ticket! Or maybe there's some sort of other process... like, what if a "UniversalStandardLib" was written for every language, and authors were encouraged to use that? What if data types that adhered to roughly JSON were encouraged? That sort of bridges the gap between compiling and transpiling, right? But yeah, I see what you're saying... like that Ruby to JS translator, Opal...
This can be optimized either for making npm modules really easily or for sharing & forking npm modules easily.