stylewarning / quickutil

The solution to the Utility Library problem.

Ensure Quickutil Scalability and Uniformity (was: The generated utils should be in a separate package) #19

Open lmj opened 11 years ago

lmj commented 11 years ago
(quickutil:save-utils-as "foo.lisp" :with-output-to-file)

generates a file with this at the top:

(defpackage quickutil (:use #:cl) (:nicknames #:qtl))

This causes compilation problems because symbols already exported from an existing quickutil package are not listed. If I use quickutil, it will only be through save-utils-as (and I think this should be the primary use case). This should work smoothly while running inside a single image.

A workaround is to just add the missing symbols, but ultimately I think the generated utils should not belong to the quickutil package, even as a default. There needs to be consistency between the offline and online incarnations, and it doesn't really make sense to have save-utils-as etc. in the offline version.
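
Concretely, a minimal sketch of the clash (riffle is a hypothetical util, and the exact behavior is implementation-dependent):

;; State of the image after one project loads its generated utils file:
(defpackage #:quickutil
  (:use #:cl)
  (:nicknames #:qtl)
  (:export #:riffle))

;; A second generated file then evaluates this form, which omits the
;; existing export.  Per the CLHS, redefining a package inconsistently
;; with its current state has undefined consequences; SBCL, for
;; example, warns that the package definition is at variance.
(defpackage #:quickutil
  (:use #:cl)
  (:nicknames #:qtl))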

stylewarning commented 11 years ago

Two things:

First, I put utils via save-utils-as in the quickutil package because I want consistency between using utilize and save-utils-as. I would like quickutil (or qtl) to be the canonical package in which these utilities are placed.

Second, the defpackage when the package already exists is a bug, so that will be fixed.

I'm not sure of the benefit of allowing the user to choose the package name. Is there any good reason to prefer another name over quickutil? Or is it just for aesthetic purposes?

In the future, extra metadata will likely be added to this generated file, and as such, I see it as beneficial to just keep it named quickutil in perpetuity.

lmj commented 11 years ago

9bde6fce7 removes the toplevelness of defpackage, so you need an eval-when there.
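
A minimal sketch of the needed change, assuming the commit wrapped the defpackage in a find-package guard (which is what makes it non-toplevel):

;; Without the eval-when, the guarded defpackage would not run at
;; compile time, and the rest of the file could not be read into the
;; quickutil package.
(eval-when (:compile-toplevel :load-toplevel :execute)
  (unless (find-package '#:quickutil)
    (defpackage #:quickutil
      (:use #:cl)
      (:nicknames #:qtl))))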

When the online and offline versions are exactly the same, this guarantees that certain problems cannot occur. That is not presently the case; the online version contains save-utils-as etc., which I consider part of the admin/build API for retrieving utils. They are not utils themselves and do not belong in the final project.

The original title of this bug was something to the effect of "make the generated package different, or let the user choose". On second thought, allowing the user to specify the generated package gives rise to a bloat problem, the very thing quickutils aims to prevent! So I would recommend the names be something like quickutils-admin for save-utils-as etc. and quickutils for the generated utils. The downside of reusing the same package is style warnings for redefinitions when more than one person uses quickutils with a generated file, but this is offset by the upside of reducing bloat.

stylewarning commented 11 years ago

@lmj, thanks, I forgot about the eval-when.

I understand what you mean about the dichotomy between actual utility functions and utility acquisition functions/web API functions. I agree, to an extent. What I wanted to avoid is complexity. I do not think the following would be as nice once Quickutil is part of Quicklisp:

> (ql:quickload :quickutil)
> (qtl-admin:save-utils-as "~/utils.lisp" :riffle :weave)
> (load *)
> (qtl:riffle '(a b c) '--)

Now the user must remember that the quickutil system actually has two packages, and I am not very fond of breaking the often-existing one-to-one correspondence between systems and packages.

While there is a bit of a theoretical and aesthetic hindrance in having only a single package, quickutil, I'm not yet convinced it leads to any practical problems. Can you think of any?

lmj commented 11 years ago

The generated file would belong to an ASDF system and be updated with one call (maybe a custom ASDF op). I wouldn't do any ad hoc loading as you describe. You can still retain utilize etc. as is, but in the qtl-admin package. I do not think utilize is suitable as the primary use case, but it's fine for experimentation during development.
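
A sketch of that workflow (system and file names hypothetical): the generated file is just another component of the project's system, regenerated by the maintainer whenever the util list changes.

(asdf:defsystem #:my-project
  :serial t
  :components ((:file "quickutils")   ; output of save-utils-as
               (:file "package")
               (:file "main")))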

At least one practical problem is the conflict arising from one project using the offline version and another using the online version. Even with the latest change you have a compilation error when the online project is loaded last, caused by the differing defpackage forms. Or one may have just one offline project and decide to bring in the online stuff for development; same error.

lmj commented 11 years ago

A package which dynamically adds/exports symbols as loading progresses is rather against established CL convention. The problems from differing defpackage forms are a consequence of bucking that convention. This is why I instinctively wanted to decide the generated package name -- it creates a "closed" package that aligns with CL idioms.

You can probably work around these issues by handling symbols manually and avoiding defpackage. However considering the potential brittleness of that approach, as a user my inclination is to "take the money and run". Once I have my utils file I can guarantee that it always works by placing it in a separate package of my choosing, bloat be damned.

stylewarning commented 11 years ago

@lmj, after lots of thinking, I decided I agree, it's best to put them in a separate package, even if it increases mental overhead a little bit.

I don't think the transformation is entirely complete; there's probably some other cleanup to do, but as it stands, all of the admin functions are in quickutil-client or (somewhat annoyingly) qtlc. Not sure if I like the nickname qtlc, but it does reduce line noise. And then, as discussed, everything else is in quickutil or qtl.

lmj commented 11 years ago

For me the larger concern is the "uncharted territory" of a package scattered across unrelated projects that use the offline setup. The quickutils symbols need to be merged appropriately. I guess you should wrap every util definition in an fdefinition or compiler-macro-function check to see if it already exists in order to avoid style warnings.
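
A minimal sketch of such a guard (riffle is a hypothetical util): the definition is skipped when the function already exists, so loading a second project's generated file stays silent. For macros, the analogous check would use macro-function instead of fboundp.

(unless (fboundp 'riffle)
  (defun riffle (list item)
    "Intersperse ITEM between the elements of LIST."
    (rest (loop :for x :in list :append (list item x)))))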

If the utils were fixed for eternity then this strategy would be fine. But of course that's not the case, which means that you'll eventually be merging different versions of quickutils across different projects. This gets especially messy if there are unexported "helper" functions; the result could be an invalid state. You're tied to backward compatibility of the internal implementation, which is weird.

These problems are "solved" if the user partitions his utils in his own package, the downside being bloat. If quickutils had its own partitioning scheme then that may amount to a proper solution. Implementation details would be protected inside smaller packages, allowing proper versioning. We still have the "uncharted territory" of a package that expands arbitrarily as loading progresses, but because there are already well-defined subpackages to provide the needed symbols, the internal problems arising from the use of one big package are gone.

stylewarning commented 11 years ago

@lmj, there were actually original provisions to version each utility, and indeed, each utility contains a major and minor version, but currently that data is not being used. It's really tricky, I think. The idea was that major versions would mark behaviorally different, backwards-incompatible changes, and minor versions were just improvements in speed, performance, or readability.

Versioning can perhaps solve the issue of reinstalling already-existing utilities. If we have local version information, we can cross-check. Another way is to keep a local registry of installed utilities, and send that registry to the server so they don't send back utilities you don't need.
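
A hypothetical sketch of such a registry (all names made up):

(defvar *installed-utils* (make-hash-table :test 'eq)
  "Maps a utility name to its installed (major minor) version.")

(defun util-up-to-date-p (name major minor)
  "True when NAME is installed at major version MAJOR and at least minor version MINOR."
  (let ((version (gethash name *installed-utils*)))
    (and version
         (= (first version) major)
         (>= (second version) minor))))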

But then this leads to a maintenance issue on multiple levels. Do users now have to specify the version they want, or the latest? What if their application depends on version 1.0 of utility X, which has since been upgraded to 2.0?

I'm not sure what you mean by this:

This gets especially messy if there are unexported "helper" functions; the result could be an invalid state.

I think you mean that the client will obtain a utility X which depends on helper H1, and X will be changed in the future to, say, depend on H2, so when the user re-downloads X in the same running image, the user will have both H1 and H2.

I might hedge on the idea that, since utilities are rather small and most of the time unchanging (e.g., with-gensyms probably won't change), requiring the user either (a) to save a local copy for his or her needs or (b) to upgrade and fix incompatibilities as needed might be the next plan.

Have I addressed, to some extent, what you said?

Thanks for the in-depth comments.

lmj commented 11 years ago

Consider two projects that have their own util files. Project A uses util X which uses helper H. Project B uses util Y which also uses H. Project A is updated, modifying H incompatibly (say, adding a parameter). Now Project B won't compile. You either have to additionally version all internal functions, or somehow tie yourself to internal backward compatibility.

I don't think utilize is an appropriate tool except (possibly) for messing around, so I'm not a good person to ask about client/server versioning. I see enough problems with everyone writing out their own util file. Hoping for unchanging code is not realistic.

I would rather see a formal breakup of alexandria into chunks, with additional functionality being added from there. No client/server, just ASDF like everything else. It seems the only reason to write out individual util files (assuming the consequences are somehow managed) is to reduce bloat. But alexandria's memory footprint is still relatively small, and it can only get smaller if it is broken up.

stylewarning commented 11 years ago

Consider two projects that have their own util files.

Okay, so with Quickutil, each project did (qtlc:save-utils-as "A.lisp" :X) and (qtlc:save-utils-as "B.lisp" :Y) respectively.

Project A uses util X which uses helper H. Project B uses util Y which also uses H. Project A is updated, modifying H incompatibly (say, adding a parameter).

What do you mean project A is updated? The project shouldn't touch the utils file. Maybe dependent util H is updated, which does indeed require changes to X and Y, which should get caught by the compilation tests we have.

Now Project B won't compile. You either have to additionally version all internal functions, or somehow tie yourself to internal backward compatibility.

I don't see how this would happen.

I don't think utilize is an appropriate tool except (possibly) for messing around, so I'm not a good person to ask about client/server versioning. I see enough problems with everyone writing out their own util file. Hoping for unchanging code is not realistic.

A lot of people don't, and I am becoming more convinced that it should probably only be used for development/experimentation. (That is, it is much simpler to use when developing than saving utils to a file and then loading them.) I will likely change the way it is advertised as soon as I redevelop the API.

I would rather see a formal breakup of alexandria into chunks, with additional functionality being added from there. No client/server, just ASDF like everything else. It seems the only reason to write out individual util files (assuming the consequences are somehow managed) is to reduce bloat. But alexandria's memory footprint is still relatively small, and it can only get smaller if it is broken up.

There are people who think this idea is better in general, that I should have instead just made lots and lots of ASDF systems and called myself the Official Curator (TM) of such systems, so there's none of this client-server stuff.

You can read the opinions and my responses in more detail in this Reddit thread. My principal argument is that (1) categorizing small utility functions is messy and difficult, because some utilities have several or ambiguous "primary" categories, and (2) once we have 10 or 15 or more different ASDF systems, the user is burdened with remembering which utilities belong to which systems; I think that leads to people either making their own "lightweight" utility files or making a single system which depends on all of these subsystems to avoid the bookkeeping.

lmj commented 11 years ago

What do you mean project A is updated? The project shouldn't touch the utils file.

Project A will regenerate its util file whenever it adds or subtracts utils. Or project A could have started with a newer version of H from the beginning, while project B remains unchanged with an older, incompatible H. Either X or Y will be broken.

In general it's not clear that merging older and newer pieces of the quickutil package can be done reliably. Even if some scheme were devised to handle it, whenever old and new parts are combined the result is an essentially unique package, a chimera, that has not been tested as a whole.

ASDF gives you discrete, testable, versioned units. You can still export symbols to the quickutil package, the only difference is that their home package is not quickutil. You don't necessarily have to break up alexandria; it can just be a component among others. Having quickutil export alexandria symbols would be backward compatible with existing alexandria-based projects without causing bloat.
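
A sketch of that arrangement: quickutil imports and re-exports symbols whose home package remains alexandria, so alexandria-based code and quickutil-based code see the very same symbols.

(defpackage #:quickutil
  (:use #:cl)
  (:nicknames #:qtl)
  (:import-from #:alexandria #:with-gensyms #:once-only)
  (:export #:with-gensyms #:once-only))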

stylewarning commented 11 years ago

@lmj I understand now. This would indeed be a bigger problem for projects if Quickutil were to become more popular.

In fact, I find it a problem that if project A depends on some utilities, and project B does as well, then compiling A and then B results in recompilation of functions (which leads to ugly output). And from there, we run into other issues, such as incompatibilities.

I'll have to think about this more. I'm confident there's a solution but I'm not sure what it is. Maybe allowing the user to specify a package would be a beneficial start.

stylewarning commented 11 years ago

I do not think this is perfect, but I do think that it is good enough for a first release, and satisfies the minimum requirements needed to avoid a clash. As such, I'll close this issue. More specific issues can be spawned if need be.

lmj commented 11 years ago

You might have misunderstood; I said earlier, "On second thought, allowing the user to specify the generated package gives rise to a bloat problem, the very thing quickutils aims to prevent!"

In addition to the bloat issue, having an arbitrary number of unmaintained copies of utils scattered across an arbitrary number of projects is a problem in itself. You're unable to make fixes to utils and have them be reflected in projects that use quickutils. You don't want to nag project maintainers, asking them to run save-utils-as and then make a new release.

Sorry but I don't see a good solution apart from going back to ASDF. If you want fine-grained, per-function loading then maybe a specialized loader could be written, but I haven't thought about how it would work.

lmj commented 11 years ago

What if utilize were a local loader? It would serve as a mini asdf:load-system, handling dependencies and tracking what was already loaded. No server component; utils are in local files which are not listed in quickutil.asd. Users place a utilize form at the top of their packages.lisp. This would solve the above problems and would be easier to use than save-utils-as.
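
A minimal sketch of such a loader, with hypothetical bookkeeping: a table maps each util to its source file and dependencies, and loading recurses through dependencies while skipping anything already loaded.

(defvar *util-files* (make-hash-table :test 'eq)
  "Maps a utility name to a list (FILE DEPENDENCIES).")
(defvar *loaded-utils* (make-hash-table :test 'eq))

(defun utilize (&rest names)
  (dolist (name names)
    (unless (gethash name *loaded-utils*)
      (destructuring-bind (file dependencies)
          (or (gethash name *util-files*)
              (error "Unknown utility: ~s" name))
        (apply #'utilize dependencies)  ; dependencies first
        (load file)
        (setf (gethash name *loaded-utils*) t)))))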

stylewarning commented 11 years ago

@lmj Thanks for the continued discussion, I appreciate it!

Regarding issues presented by save-utils-as and separate utility files

I didn't misunderstand you and I realize the different kinds of problems that specifying packages brings: individual fragmentation and duplication. (I don't really like calling it fragmentation because things aren't really getting fragmented per se, it's just that everybody has potentially different versions of the same thing.) Despite this duplication, it seems to work well in practice for systems like Windows and OS X where applications often bundle all of their requirements into their application directory. Sure, it causes duplication and sure, bug fixes/updates don't get distributed as uniformly, but I don't think it is as problematic in practice as it is in theory. If everyone has the same thing, then there are also ill effects, mainly API incompatibilities with updates (unless you have fine-grained versioning).

Regarding ASDF

I think ASDF solves the problem in theory, but in practice it doesn't. For example, teepeedee2 has an issue where it uses a different version of Alexandria than the rest of the world. And since it's not properly versioned, etc., it actually causes issues for the rest of the world that wants to use teepeedee2 in addition to the latest Alexandria. This is the same sort of issue that Quickutil has.

Regarding local Quickutil server and utilize

I was thinking about making utilize a purely local thing. In essence, it would be moving the "server" to the client. There would, more or less, still be the client-server interaction; it would just all be local. By doing this, all utils will (in theory) be up-to-date. There's also less of an issue with security vulnerabilities.

The problem I see is that if utilize is just a local thing, then at some point the "server" (in whatever form it is manifest) has to be loaded, which means that at least all of the utility source code has to be loaded (unless the architecture were moved to something disk-based, with things loaded from disk on demand). At this point, I personally see two issues:

The original idea actually was to rely on just a utilize form and not have this save-utils-as business, but that was almost universally rejected by those who commented on Quickutil, who said it was too problematic.

Packages are not the perfect solution

I agree that packages aren't the best way to go about things. But after trying it out, I'm actually convinced that having packages is at least better than before. I can totally understand:

Future

I think you have the right idea about the future direction of Quickutil, especially in your last comment. I do think utilities should be on a per-function basis and not a per-file basis (which is what I understood your comment to mean). If they were per-file, they might as well just be ASD systems.

I think that if we can have all of the utilities initialized, the selected ones loaded, then have them uninitialized, that might be a solution that reaps all of the benefits, but a few key decisions would need to be made:

Ultimately, I believe decisions should be ruled by experiment. Even Quickutil itself is one giant experiment to determine a better way to work with hundreds of utility functions.

I think I will re-open this, and rename the issue to reflect issues of scalability.

lmj commented 11 years ago

What's the simplest thing that could possibly work? All we need is for utilize to load a function, given its name. That might entail looking up the corresponding file and loading it directly, or loading a sub-asdf. In either case we have something completely simple which completely works. It solves all the problems discussed, doesn't punt on bad situations, and makes save-utils-as obsolete. Loading a server and all its dependencies just to get with-gensyms and once-only is something that neither users nor maintainers are going to want. It can be difficult chucking an infrastructure that you've written. Rewriting quickutils in an hour may sound like a grim success, but I think it's the next step.

stylewarning commented 11 years ago

@lmj :

It can be difficult chucking an infrastructure that you've written. Rewriting quickutils in an hour may sound like a grim success, but I think it's the next step.

Indeed, chucking away code feels bad, but what's better is better, and it's best to go with the better code. :) In reality, only the boring bits get chucked away (the stuff that constructs URLs and fetches code). The interesting stuff (dependency computation, etc.) remains the same.

Commit d5b1955 is the start of an experiment to make utilities local while still maintaining the benefits of the current server-based method.

More or less, the following is done:

  1. utilize will load the entire repository of utilities, as well as the utility dependency computation functions.
  2. The queried utilities and their dependencies will be fetched, though the fetching is now local.
  3. The resulting, fetched code will be compiled and loaded into the current image.
  4. The repository of utilities, as well as the utility dependency functions, will be unloaded. This way, the image is not bloated and only what needs to be loaded is loaded.
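
A rough sketch of those four steps as code (compute-util-code stands in for the real dependency machinery, and the system/package names are guesses based on the transcript below):

(defun utilize (&rest names)
  (asdf:load-system '#:quickutil-utilities)    ; 1. load the repository
  (let ((code (compute-util-code names)))      ; 2. "fetch", now locally
    (with-input-from-string (stream code)
      (load stream))                           ; 3. load into the image
    (asdf:clear-system '#:quickutil-utilities) ; 4. unload the repository
    (delete-package '#:quickutil-utilities))
  names)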

The proof-of-concept in the last commit seems to work. I can put, albeit under a different name, (qtlc:utilize :foo :bar) in a source file, and it will do the above steps, and the image will be as pristine as it was, except for the newly added utilities.

The plan is to flesh this out and make it the default behavior. This seems like it will be the most viable plan. Since the utilities are locked up in an ASD system, everyone stays synchronized, yet by doing this loading-unloading scheme, only the necessary utilities are loaded. That way we get the benefits of an ASDF system, and the benefits of lightweight utility inclusion. The only negative I can think of, right now, is that it's an expensive operation to execute. (It is essentially loading an entire ASD system, getting everything it needs, then throwing the system away. This happens for any and every subsequent utilize.) However, fixing fundamental architecture problems at the cost of a single "wasted" ASDF load is a good deal.

This unified server-client actually makes it easier to do some bookkeeping. For example, I can now easily keep track of which utilities were loaded, which means I can avoid re-loading them on subsequent calls to utilize. This is actually very important because it avoids messy redefinition of utilities across (ASDF) system boundaries.

I'll continue with this route and see where it brings us.

lmj commented 11 years ago

I guess you're assuming that delete-package or asdf:clear-system frees memory, which isn't the case. Even if that worked, you still have a lot of complexity and excessive loading there. Why not do the simple thing of looking up the file or asd of each util and loading that?

stylewarning commented 11 years ago

I guess you're assuming that delete-package or asdf:clear-system frees memory, which isn't the case

You are mostly right. If you unbind symbols and whatnot first, the collector should clean things up.
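
A hedged sketch of that kind of cleanup: break the references held by the package's own symbols before deleting it, so the definitions can actually be collected.

(defun unload-package (name)
  (let ((package (find-package name)))
    (do-symbols (symbol package)
      (when (eq (symbol-package symbol) package) ; skip inherited symbols
        (when (fboundp symbol)
          (fmakunbound symbol))
        (when (and (boundp symbol) (not (constantp symbol)))
          (makunbound symbol))))
    (delete-package package)))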

you still have a lot of complexity and excessive loading there

I would say it is only slightly excessive. All ASDF would do is compile and load everything, and keep it there. I choose to remove it. No comment on the "lot of complexity" yet. :)

Why not do the simple thing of looking up the file or asd of each util and loading that?

Then what differentiates it from Yet Another Utility Library, except that it only loads ASD files you need as opposed to utilities you need? I think if the granularity is going to be ASD files, might as well just have the user specify the ASD files from the start.

stylewarning commented 11 years ago

@lmj Do you have any objection to closing this issue? Is the original issue solved, modulo performance?

lmj commented 11 years ago

You're still attached to this infrastructure, but you have to let it go. Rather than thinking of it as a wasted effort, consider it an oyster that produced a pearl. Now it's time to extract the pearl.

It wasn't obvious at the outset that a simple loader that accepts util names was all that was needed, but now it is. Either have it load individual files or asds, it's your choice. Or you might want to generate a defsystem form on the fly when a file is needed.
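
A sketch of the on-the-fly variant (naming scheme hypothetical): build a one-file defsystem form for the util, evaluate it, and let ASDF handle fasls and reloading.

(defun ensure-util-system (name file dependencies)
  "Define and load a one-file ASDF system for the utility NAME."
  (let ((system (format nil "quickutil.~(~a~)" name)))
    (eval `(asdf:defsystem ,system
             :pathname ,(directory-namestring file)
             :depends-on ,(mapcar (lambda (dep)
                                    (format nil "quickutil.~(~a~)" dep))
                                  dependencies)
             :components ((:file ,(pathname-name file)))))
    (asdf:load-system system)))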

Loading the entire code base and attempting to dump it with each utilize is just nuts! Space may not be freed on some implementations, and even if it were, multiple utilize calls cause shared internal functions to be duplicated, which is the bloat problem all over again, the problem that quickutils was supposed to solve.

In addition, M-. on a util opens a temp file. Users should be able to easily fix bugs at the source and have those fixes be reflected the next time lisp is launched.

Also, you're robbing users of the advantage of fasl files. Spinning up the compiler should only be done when necessary, since it can be time- and space-consuming. And those who want to use a concatenated fasl can't use quickutils.

What makes quickutils different is a loading abstraction which lets users specify what they need without having to know how to load it. It encourages people to contribute because their contributions will not cause bloat for others. If it is simple and efficient then people will use it because there is no downside.

On the other hand if quickutils involves all this rigmarole, people will instinctively avoid it (I certainly will). I want the smallest possible thing to happen inside utilize. Making it big and complex removes any appeal that it might have had; I would prefer to copy the functions myself.

stylewarning commented 11 years ago

It wasn't obvious at the outset that a simple loader that accepts util names was all that was needed, but now it is. Either have it load individual files or asds, it's your choice. Or you might want to generate a defsystem form on the fly when a file is needed.

I'm not sure I agree with this.

multiple utilize calls cause shared internal functions to be duplicated, which is the bloat problem all over again, the problem that quickutils was supposed to solve.

This is not true. It keeps track and only loads what hasn't been loaded. If you load A, which depends on B, in one call to utilize, and then load C, which also depends on B, in a different call, it will not load B again. If, after all that, you decide to load B, it won't reload it, but it will export it, since you explicitly requested it.

Loading the entire code base and attempting to dump it with each utilize is just nuts!

That's not so nuts to me. It's not compiling all of the utilities. It's just populating a database. In the future, I don't see why this couldn't be streamlined, by not relying on ASDF to command the loading of that database.

Besides, if it's an abstraction and the user doesn't see it, what's the issue? Isn't that the entire point of abstractions? Or does it just devolve into an argument about aesthetic appeal?

In addition, M-. on a util opens a temp file. Users should be able to easily fix bugs at the source and have those fixes be reflected the next time lisp is launched.

This is a negative, I agree, though I am pretty strongly against monkey-patching third-party libraries. I've been guilty of it, and it's come back to bite me in production. The proper way would be to isolate the code (easy with utils), have your own version in your source code, and contribute your bug fixes back so they can be included in the next release.

Also, you're robbing users of the advantage of fasl files.

Yes, unfortunately that is true, if you use utilize. But fortunately there's save-utils-as as an alternative solution if you're actually relying on stuff like fasl file concatenation.

Spinning up the compiler should only be done when necessary, since it can be time- and space-consuming.

I'm not sure it's being done unnecessarily at all. From my project QSolve which uses Quickutil:

CL-USER> (ql:quickload :qsolve)
To load "qsolve":
  Load 1 ASDF system:
    qsolve
; Loading "qsolve"
;;; Clearing QUICKUTIL-UTILITIES system...
;;; Unloading QUICKUTIL-UTILITIES.UTILITIES...
;;; Unloading QUICKUTIL-UTILITIES...
;;; Collecting trash...
.
....
(:QSOLVE)
CL-USER> (ql:quickload :qsolve)
To load "qsolve":
  Load 1 ASDF system:
    qsolve
; Loading "qsolve"

(:QSOLVE)

Reloading the library did not cause the compiler to spin up unnecessarily.

What makes quickutils different is a loading abstraction which lets users specify what they need without having to know how to load it. It encourages people to contribute because their contributions will not cause bloat for others. If it is simple and efficient then people will use it because there is no downside.

There are other absolutely key things that make it different:

What you are suggesting with separate files/systems eliminates all of this. You are suggesting either large files that contain many related utilities, or files that contain only a single utility. The size of the files determines the granularity of what can be used. If we use large files with many utilities per file, then we have precisely the same thing as everyone else. The only difference is that we can say what individual things we want, but it'll still load the entire file.

Dependencies now turn into file dependencies, and very quickly, we lose the entire first benefit above.

So that means we need one utility per file/ASD. This means the overhead of adding a utility is adding a new file/ASD system. So we have iota.asd, which defines, say, QUICKUTIL.UTILITIES.IOTA, and iota.lisp, which is the :file component of that ASD. This means the number of files scales linearly with the number of utilities. There are already 200 of them, which means there would be 400 files. The ASDs would be needed for dependency computations.

So now we have 200+ (at the time of writing!) ASDF systems, and if a user loads 10 utilities, they are then loading 10+ systems. But we still have a problem: the second benefit above. Where do we store metadata, such as useful categories?

We can decouple the idea of a category and keep separate systems, say, QUICKUTIL.CATEGORY.MATH or QUICKUTIL.CATEGORY.ALEXANDRIA, and every time we add a new utility, we must add it to the appropriate category systems. Versions can be managed fine with ASDF. What about other metadata? I thought about having different kinds of utilities inspired by SBCL/CMUCL's defknown/vop policies, like :fast utils, :space utils, etc.

Lastly, the website would be much more difficult to use. If users can't reasonably discover utilities, they won't use them whatsoever. Issues with the website would include:

This is relatively easy to fix, provided our systems' files include only the utility code and nothing else (no metadata, no nothing). We could just look up the file associated with a utility/system, slurp it, cache it, and display it. Making this simple imposes a limitation on the file names of the utilities.
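
A sketch of that lookup-slurp-cache idea (util-source-file is a hypothetical helper returning the pathname for a utility):

(defvar *source-cache* (make-hash-table :test 'equal))

(defun util-source (name)
  "Return the source text for the utility NAME, caching it."
  (or (gethash name *source-cache*)
      (setf (gethash name *source-cache*)
            (with-open-file (in (util-source-file name))
              (let* ((text (make-string (file-length in)))
                     (end (read-sequence text in)))
                (subseq text 0 end))))))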

Just call DOCUMENTATION!

No, that doesn't work. Call documentation on what? Is it a function, a macro, what? What is the actual symbol we need to extract documentation for, and how do we determine it?

But wait, there's an even bigger problem. Some utilities are actually composed of several symbols, and some utilities actually provide several symbols. Which doc string do we use?

We could use the :long-description field of the system as documentation. But what about the common case, where that documentation corresponds precisely to the doc string of the function/whatever? Do we duplicate it, or do we start doing weird magic to extract the system documentation at compile time and inject it into the function definition?

This is actually a bigger problem, and not just with the website. How do we actually know which symbols should be exported? Do we just put an export form at the bottom of the file? (But we don't want extra garbage being shown in the source code on the website.) Do we have a new file called exports which exports the symbols, so that our ASD now contains two files?

We could put unexported functions in one file and have the "main" file contain all of the exported forms, which would at least tell us what not to export.

But Lisp is flexible, and we can define stuff with macros. So it's not easy to scan the source code for symbols to export. We would actually need to do minimal compilation in order to determine this.

In all, it just seems like a messy problem with this approach.

The obvious solution is to add :depends-on systems to the ASD file. But we'd need to extract those to show it on the website, and be able to pretty print them.

Actual utility systems would be namespaced as follows: QUICKUTIL.UTILITIES.<util name>. Our :depends-on would conceivably look like this:

:depends-on (quickutil.utilities.with-input-from-file
             quickutil.utilities.with-output-to-file
             quickutil.utilities.copy-stream)

That is tiresome. Also, we might as well chuck out the utilize form and just have the user add the utility dependencies directly to their ASD.

We also lose an additional benefit that Quickutil currently offers: it knows what you need as a dependency at compile-time and what you need as a dependency at run-time.

My question to you is: do you still think the file/asd approach is really better, just so we get M-.-to-the-original-source, FASLs-from-utilize, and a more efficient loading process under the hood, at the great expense of a fantastic website, an extensible/flexible system, and a lot of work reversing/rewriting code and making hundreds of utility files? To me, personally, it seems shortsighted.

lmj commented 11 years ago

Your response as a whole is very far off base, as if you were responding to someone else. I never said to dump the web interface or the categorization or the dependency tracking, as you appear to have assumed throughout. I have only advocated a small, simple approach to loading. A "loading abstraction which lets users specify what they need without having to know how to load it" would obviously require load information, and not just be a collection of asd files.

Because of this, the parts of your response pertaining to ASDF are not relevant. I said that you could load files or asds or generate asds on the fly, your choice. ASDF would only be used as a portability layer for loading. If it's easier to avoid ASDF then do so. But even if you did use it, the scheme you outline wouldn't make sense because you would already know the dependencies.

The general issue is that you have server code which has been retrofitted as client code. If you scrap the server mentality and start again from the perspective of what's best for a client, the result will be very different.

You are suggesting either large files that contain many related utilities, or files that contain only a single utility.

The two extremes are not the only options, of course. I would expect smallish files but not as far as one file per utility. Choosing the granularity isn't a concern because it can be adjusted at any time. Even one file per utility wouldn't be a problem. Remember we can skip ASDF entirely or automate its use.

In addition, M-. on a util opens a temp file. Users should be able to easily fix bugs at the source and have those fixes be reflected the next time lisp is launched.

This is a negative, I agree, though I am pretty strongly against monkey-patching third-party libraries. I've been guilty of it, and it's come back to bite me in production. The proper way would be to isolate the code (easy with utils), have your own version in your source code, and contribute your bug fixes back so they can be included in the next release.

Having M-. work doesn't equate with anything that anyone should be against. The proper way is to clone a repository in ~/quicklisp/local-projects so that patches are ready-made. This is also useful for purely debugging purposes, say, adding some logging info to verify that the library is working correctly. This is quite difficult with quickutil because one must locate the original function, edit it, and even then utilize won't see the change. Also, C-c doesn't work, and the errors/warnings/notes locations are removed.

Placing metadata in comments and docstrings would be the typical solution, but keeping defutil can work too, e.g.

(defmacro defutil (code)
  ;; Read the utility's source from a string at macroexpansion time.
  ;; Rebinding *PACKAGE* around WITH-STANDARD-IO-SYNTAX keeps the
  ;; symbols interned in the caller's package rather than CL-USER.
  (let ((package *package*))
    (with-standard-io-syntax
      (let ((*package* package))
        (read-from-string code)))))

(defutil "(defun foo () 99)")

The downside is that error/warning locations are still gone (same as current behavior), but at least C-c works.

Also, you're robbing users of the advantage of fasl files. Spinning up the compiler should only be done when necessary, since it can be time- and space-consuming.

I'm not sure it's being done unnecessarily at all.

Every time utilize is called from a previously loaded project, the compiler spins up unnecessarily.

You've made several references to loading all utilities (e.g. "why not just load them all into the image anyway?") which I took to mean (per the CL terminology) loading the functions into the image. But you were referring to loading the utility database. I also didn't realize that internal helper functions were part of the dependency graph, thus there would be no duplicate definitions. I concede those two points due to my misunderstandings; however, they don't affect the big picture, which is still in the realm of "crazy".

Suppose you carry a toolbox in your car. I offer to replace your toolbox with one that is about the same size and weight but magically produces any tool you want. Since there's no downside, you'll certainly take the magic toolbox. Everyone will want to trade in their toolbox for a magic one!

But what if the magic toolbox is the size of a car? Now very few people will choose it. The target consumer has changed from car owners to ocean liner owners and the like.

So who is the target for quickutil? Who are you trying to win over? One case is a guy with his own small collection of utilities. If he's smart then he'll be wary of a replacement that carries a dramatic increase in size and complexity. As explained earlier, save-utils-as is a poor solution long-term, and provides little incentive in this case.

There are a number of inconsistencies throughout your responses. Smallish or medium-small files are out because one must shave off every last unused function, yet no concern is given to drastically bigger footprints from dependencies. Fasls are defeated, requiring the compiler to run on every load (a large time/space hit on SBCL), and the solution proffered is to use save-utils-as. But, as discussed, that invites several problems, one of which is bloat, yet when this is pointed out, suddenly bloat becomes a non-issue, "[not] as problematic in practice as it is in theory". But what about the shaving off of every last unused function that was held up as so important? It's hard to keep track of the rationalizations.

The reason I was interested in quickutil is because I have a few utilities that I think everyone can benefit from. But in order to use them, people shouldn't be forced into a heavyweight system like quickutil. I would rather release a small ASDF system.

It is possible for quickutil to have its cake and eat it too -- to be lightweight while retaining the current functionality. It's unfortunate that I was unable to convey this point.

Please remove my name from the readme. I don't want someone saying, "WTF? Why is quickutil doing all this stuff? I just want with-gensyms!" and then see my name associated with it.

adlai commented 3 weeks ago

Reading the above conversation is painful; please try including the names, or GitHub handles, of the people being quoted when quoting blocks of text, similarly to how email clients do. If you're averse to having complaints directed at someone who said something disruptive, you could make up strawman names using something like Wikipedia's Random Page scrambler, choosing a name that is not already used in the conversation.