darakshan / gcfg

Automatically exported from code.google.com/p/gcfg

Writing/setting gcfg files #6

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
(Thanks for writing this module! Quite useful.)

The ability to _write_ configuration files is essential for most non-trivial 
config files. As the complexity of clients increases, the ability to set 
particular config values programmatically becomes more likely to be required.

It would also be very useful to provide a key lookup facility, in order to 
implement get/sets like
`git config foo.bar`
`git config foo.bar baz`
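
For illustration, a rough sketch of what such a lookup could do internally (the reflection-based approach and all names here are just my illustration, not an existing gcfg API):

```go
// A rough, self-contained sketch (not an existing gcfg API) of how a
// dotted key such as "foo.bar" could be resolved against a gcfg-style
// struct using reflection.
package main

import (
	"fmt"
	"reflect"
	"strings"
)

type Config struct {
	Foo struct {
		Bar string
	}
}

// lookup resolves a "section.name" key against the config struct,
// matching field names case-insensitively.
func lookup(cfg interface{}, key string) (string, error) {
	parts := strings.SplitN(key, ".", 2)
	if len(parts) != 2 {
		return "", fmt.Errorf("key %q must have the form section.name", key)
	}
	sec := reflect.ValueOf(cfg).Elem().FieldByNameFunc(func(n string) bool {
		return strings.EqualFold(n, parts[0])
	})
	if !sec.IsValid() || sec.Kind() != reflect.Struct {
		return "", fmt.Errorf("no section %q", parts[0])
	}
	v := sec.FieldByNameFunc(func(n string) bool {
		return strings.EqualFold(n, parts[1])
	})
	if !v.IsValid() {
		return "", fmt.Errorf("no variable %q in section %q", parts[1], parts[0])
	}
	return fmt.Sprint(v.Interface()), nil
}

func main() {
	var cfg Config
	cfg.Foo.Bar = "baz"
	fmt.Println(lookup(&cfg, "foo.bar")) // baz <nil>
}
```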

Thanks!

[Opinionated aside: might get more contributions on github.]

Original issue reported on code.google.com by j...@benet.ai on 9 Jan 2014 at 12:30

GoogleCodeExporter commented 9 years ago
Thanks for the feedback! And sorry for the delay.

Actually I have already started working on this, little by little. My plan is 
to loosely adapt the go/scanner, go/parser and related packages, which would 
provide better separation between the phases of processing (i.e. parsing, 
creating the internal representation, setting values into the struct), thus 
making manipulation through the internal representation and other enhancements 
easier. (Although it turns out that it may not take me as far as I would have 
hoped, see e.g. https://code.google.com/p/go/issues/detail?id=6884 .)
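
As a rough illustration of the direction (everything below is speculative; the names and shapes will almost certainly change), the internal representation might look something like this:

```go
// Speculative sketch of a gcfg parse tree: each node keeps its position and
// surrounding comments so that files can later be rewritten in place.
package ast

import "go/token" // token.Pos reused purely for illustration

type File struct {
	Comments []string // comments not attached to any section
	Sections []*Section
}

type Section struct {
	Pos     token.Pos
	Name    string // e.g. "foo" in [foo]
	SubName string // e.g. "bar" in [foo "bar"], if any
	Comment string // comment attached to the section header
	Vars    []*Var
}

type Var struct {
	Pos     token.Pos
	Name    string
	Value   string
	Comment string // trailing comment on the variable's line
}
```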

And in case you're wondering, I'm fully aware that basing it on the go/* 
packages is an overkill; part of the reason for using those is simply to learn 
more about parsers. But at the very least it already provides much better error 
handling support than my initial regexp-based implementation.

[I will consider moving the repository after the major part of this refactoring 
is done, although in that case I'll probably use my domain http://speter.net/ 
for the import path (with the actual repo hosted by github or google code). One 
disadvantage of github is that it doesn't provide subrepositories, which can be 
used for versioning (e.g. exp/v1/v2), implementations in different languages, 
etc. under a single project. I'm not a fan of flat namespaces :/ ]

Original comment by speter....@gmail.com on 22 Jan 2014 at 2:19

GoogleCodeExporter commented 9 years ago
> My plan is to loosely adapt the go/scanner, go/parser and related packages, 
which would provide better separation between the phases of processing (i.e. 
parsing, creating the internal representation, setting values into the struct), 
thus making manipulation through the internal representation and other 
enhancements easier.

Sounds like a good plan. I would probably write it in a similar way.

Though it bears mentioning that one common use of ini config files is editing 
them manually. Some of the placement/ordering of elements may be intentional 
on the users' part, and some users add comments. Therefore, if it's not too 
much trouble, config changes should be written in-place; getting/setting keys 
could follow this logic:

1. Find the first place this key should be in (e.g. the section).
2. If the place is not found, add one at the end.
3. Set the key in that place.

Or, more concretely:

    parts := strings.Split(key, ".")
    section := parts[0]
    name := parts[1]

    secOffset := cfgfile.offsetForSection(section)     // at end of file, if not found
    nameOffset := cfgfile.offsetForName(section, name) // at end of section, if not found
    cfgfile.writeLine(nameOffset, fmt.Sprintf("%s = %s", name, value))

This may be more complicated and edge-case-ridden than foreseen, but it fits 
what people expect: ini files preserving manual edits (placement, comments, 
etc.). It is also probably simpler than the go/parser approach.
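
To make the idea more concrete, here is a minimal, line-based sketch of that find-or-append logic (just an illustration of the approach, not gcfg code; it ignores comments, quoting, subsections and many other edge cases):

```go
// A minimal, line-based sketch of the find-or-append logic above (just an
// illustration, not gcfg code; comments, quoting, subsections and many
// other edge cases are ignored).
package main

import (
	"fmt"
	"strings"
)

func setKey(lines []string, section, name, value string) []string {
	header := "[" + section + "]"
	newLine := fmt.Sprintf("%s = %s", name, value)

	// 1. Find the section header.
	secStart := -1
	for i, l := range lines {
		if strings.TrimSpace(l) == header {
			secStart = i
			break
		}
	}
	// 2. Section not found: add it (and the variable) at the end of the file.
	if secStart == -1 {
		return append(lines, header, newLine)
	}
	// 3. Look for the variable inside the section and set it in place.
	for i := secStart + 1; i < len(lines); i++ {
		t := strings.TrimSpace(lines[i])
		if strings.HasPrefix(t, "[") {
			// Next section begins: insert the new variable at the end of this one.
			return append(lines[:i], append([]string{newLine}, lines[i:]...)...)
		}
		if kv := strings.SplitN(t, "=", 2); len(kv) == 2 && strings.TrimSpace(kv[0]) == name {
			lines[i] = newLine
			return lines
		}
	}
	// The section is the last one in the file: append at the end.
	return append(lines, newLine)
}

func main() {
	lines := []string{"; my config", "[foo]", "bar = old"}
	for _, l := range setKey(lines, "foo", "bar", "baz") {
		fmt.Println(l)
	}
}
```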

> (Although it turns out that it may not take me as far as I would have hoped, 
see e.g. https://code.google.com/p/go/issues/detail?id=6884 .)

That bug doesn't look nice :(

> And in case you're wondering, I'm fully aware that basing it on the go/* 
packages is an overkill; part of the reason for using those is simply to learn 
more about parsers. But at the very least it already provides much better error 
handling support than my initial regexp-based implementation.

Totally fair :). Parsers are fun!

> I will consider moving the repository after the major part of this 
refactoring is done, although in that case I'll probably use my domain 
http://speter.net/ for the import path (with the actual repo hosted by github 
or google code). 

Hm, go for it-- though a word of caution: if the import path is on your own 
site, people will tend to fork your code and import it from somewhere with 
stronger guarantees of availability. Even on github, I already do that for lots 
of code (fork the repo and use my fork as the import path), because it's very 
unclear how long the software we write will stick around at the specified 
paths. One of the downsides of go's very nice URL-based imports is that 
versions don't get cached in some central repository with strong availability 
guarantees. For google, this usually just means google code (i.e. not a 
problem), but for others' packages, we have to wonder how long that code will 
be available at that particular url. Someone's random domain doesn't 
off-the-bat inspire multi-year confidence. (Honestly, the go team could solve 
this by taking VCS hashes as part of the import path and running some centralized 
server (like go-doc) that ensures all the `repo@hash` imports are available 
there. gopkg.org or something.)

> One disadvantage of github is that it doesn't provide subrepositories, which 
can be used for versioning (e.g. exp/v1/v2), implementations in different 
languages, etc. under a single project. I'm not a fan of flat namespaces :/

Yeah, versioning is really critical, and go has punted on it majorly. The 
discussion list is divided on what to do. An easy fix: place each incompatible 
version in its own subdirectory of the repo (repo/v1, repo/v2, etc.). The other 
option is repo.v1, repo.v2. The real solution (which needs to be implemented 
either in go-get or in an independent service) is to allow `repo@version_ref`.

Original comment by j...@benet.ai on 22 Jan 2014 at 5:48

GoogleCodeExporter commented 9 years ago
Yes, it has been my intention to change values in-place whenever possible 
(as opposed to re-dumping the entire struct, as the Encoders in encoding/* 
do). This is what an AST is supposed to make much easier (among other things), 
although I will probably have to deviate from the approach of the go/* 
packages. The actual logic for the simplest case (changing a single value) will 
likely roughly follow what you outlined, but there are many details to work out 
(most importantly handling comments on value deletion, multivalued variables, 
and multiple files -- which I also intend to add support for). On the other 
hand, if you find that simpler logic works for your use case, feel free to 
use a patched version in the meantime.

Regarding repository location / versioning: I understand the sentiment, but I 
agree with the Go team that in practice it is not something that could or 
should be solved in general by the Go distribution or a single "blessed" 
centralized repository. I personally find the "go get" scheme ingenious, but at 
the same time I don't think it should be taken for more than what it is: just a 
convenient tool. Use it if it fits your use case; otherwise use (or combine it 
with) something else.

"Vendoring" (cloning third party software into own repository or a repository 
on the same server) is a common approach that can provide (perceived) 
improvements in the level of control on changes, availability, operational 
efficiency, etc. If someone finds that vendoring suits their use case better than 
plain go get, I encourage them to do so by forking. (Though there is a reason I 
used the word "perceived;" I feel that the trade-offs are often not properly 
considered with such decisions. For example, with distributed VCS's such as git 
or hg, I find availability much less of an issue than it is often made out to be; 
based solely on the availability argument, it would almost always be less work 
to start vendoring if / when the original repository has actually become 
unavailable.)

Versioning (and how it cannot be solved in general) has also been discussed 
thoroughly on the go-nuts list, so I'll just mention what is relevant for gcfg. 
Changing the import path is a popular approach, but it has two notable issues: 
first, package state is not shared among imports, and second, exported 
non-interface types will not be interchangeable. Gcfg is not affected by these 
because it doesn't keep state and doesn't export non-interface types, and I 
intend to keep it so. This means that the only real downside from a potential 
future change of import path is that a program importing multiple versions of 
gcfg (through transitive dependencies) will be slightly bloated.
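
To illustrate the second issue by analogy (the two local types below merely stand in for the same exported struct type imported from two different, versioned import paths):

```go
// Two named struct types with the same shape are not interchangeable in Go
// without an explicit conversion. The same happens when the "same" exported
// type is imported from two different (versioned) import paths, which is
// why packages exporting non-interface types make import-path changes
// painful for their users.
package main

import "fmt"

type v1Options struct{ Strict bool } // stand-in for a type from .../pkg/v1
type v2Options struct{ Strict bool } // stand-in for the same type from .../pkg/v2

func takesV2(o v2Options) { fmt.Println("strict:", o.Strict) }

func main() {
	o1 := v1Options{Strict: true}
	// takesV2(o1)         // compile error: v1Options is not v2Options
	takesV2(v2Options(o1)) // explicit conversion works only because the
	                       // underlying types happen to be identical
}
```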

The great thing about "go get", in addition to it "just working" in many 
scenarios, is that even if some people end up needing to use something else 
instead of (or in addition to) it, it encourages all developers to think 
thoroughly about stable APIs and to keep code organized in a standard 
structure, which benefits everyone in the long run.

Original comment by speter....@gmail.com on 29 Jan 2014 at 12:14